ID (int64) | TITLE (string) | ABSTRACT (string) | Computer Science (0/1) | Physics (0/1) | Mathematics (0/1) | Statistics (0/1) | Quantitative Biology (0/1) | Quantitative Finance (0/1)
---|---|---|---|---|---|---|---|---
17,601 | Ordinary differential equations in algebras of generalized functions | A local existence and uniqueness theorem for ODEs in the special algebra of
generalized functions is established, as well as versions including parameters
and dependence on initial values in the generalized sense. Finally, a Frobenius
theorem is proved. In all these results, composition of generalized functions
is based on the notion of c-boundedness.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,602 | Interesting Paths in the Mapper | The Mapper produces a compact summary of high dimensional data as a
simplicial complex. We study the problem of quantifying the interestingness of
subpopulations in a Mapper, which appear as long paths, flares, or loops.
First, we create a weighted directed graph G using the 1-skeleton of the
Mapper. We use the average values at the vertices of a target function to
direct edges (from low to high). The difference between the average values at
vertices (high-low) is set as the edge's weight. Covariation of the remaining h
functions (independent variables) is captured by an h-bit binary signature
assigned to the edge. An interesting path in G is a directed path whose edges
all have the same signature. We define the interestingness score of such a path
as a sum of its edge weights multiplied by a nonlinear function of their ranks
in the path.
Second, we study three optimization problems on this graph G. In the problem
Max-IP, we seek an interesting path in G with the maximum interestingness
score. We show that Max-IP is NP-complete. For the special case when G is a
directed acyclic graph (DAG), we show that Max-IP can be solved in polynomial
time - in O(mnd_i) where d_i is the maximum indegree of a vertex in G.
In the more general problem IP, the goal is to find a collection of
edge-disjoint interesting paths such that the overall sum of their
interestingness scores is maximized. We also study a variant of IP termed k-IP,
where the goal is to identify a collection of edge-disjoint interesting paths
each with k edges, such that their total interestingness score is maximized. While
k-IP can be solved in polynomial time for k <= 2, we show k-IP is NP-complete
for k >= 3 even when G is a DAG. We develop polynomial time heuristics for IP
and k-IP on DAGs.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,603 | On a three dimensional vision based collision avoidance model | This paper presents a three dimensional collision avoidance approach for
aerial vehicles inspired by coordinated behaviors in biological groups. The
proposed strategy aims to enable a group of vehicles to converge to a common
destination point avoiding collisions with each other and with moving obstacles
in their environment. The interaction rules lead the agents to adapt their
velocity vectors through a modification of the relative bearing angle and the
relative elevation. Moreover, the model satisfies the limited field of view
constraints resulting from individual perception sensitivity. From the proposed
individual based model, a mean-field kinetic model is derived. Simulations are
performed to show the effectiveness of the proposed model.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,604 | Algorithms for Covering Multiple Barriers | In this paper, we consider the problem of covering multiple intervals on a
line. Given a set $B$ of $m$ line segments (called "barriers") on a horizontal
line $L$ and another set $S$ of $n$ horizontal line segments of the same length
in the plane, we want to move all segments of $S$ to $L$ so that their union
covers all barriers and the maximum movement of all segments of $S$ is
minimized. Previously, an $O(n^3\log n)$-time algorithm was given for the case
$m=1$. In this paper, we propose an $O(n^2\log n\log \log n+nm\log m)$-time
algorithm for a more general setting with any $m\geq 1$, which also improves
the previous work when $m=1$. We then consider a line-constrained version of
the problem in which the segments of $S$ are all initially on the line $L$.
Previously, an $O(n\log n)$-time algorithm was known for the case $m=1$. We
present an algorithm of $O(m\log m+n\log m \log n)$ time for any $m\geq 1$.
These problems may have applications in mobile sensor barrier coverage in
wireless sensor networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,605 | Shattering the glass ceiling? How the institutional context mitigates the gender gap in entrepreneurship | We examine how the institutional context affects the relationship between
gender and opportunity entrepreneurship. To do this, we develop a multi-level
model that connects feminist theory at the micro-level to institutional theory
at the macro-level. It is hypothesized that the gender gap in opportunity
entrepreneurship is more pronounced in low-quality institutional contexts and
less pronounced in high-quality institutional contexts. Using data from the
Global Entrepreneurship Monitor (GEM) and regulation data from the economic
freedom of the world index (EFW), we test our predictions and find evidence in
support of our model. Our findings suggest that, while there is a gender gap in
entrepreneurship, these disparities are reduced as the quality of the
institutional context improves.
| 0 | 0 | 0 | 0 | 0 | 1 |
17,606 | An Automated Text Categorization Framework based on Hyperparameter Optimization | A great variety of text tasks such as topic or spam identification, user
profiling, and sentiment analysis can be posed as a supervised learning problem
and tackled using a text classifier. A text classifier consists of several
subprocesses, some of which are general enough to be applied to any supervised
learning problem, whereas others are specifically designed to tackle a
particular task, using complex and computationally expensive processes such as
lemmatization, syntactic analysis, etc. Contrary to traditional approaches, we
propose a minimalistic and widely applicable system able to tackle text
classification tasks independent of domain and language, namely microTC. It is
composed of a few easy-to-implement text transformations, text representations,
and a supervised learning algorithm. These pieces produce a competitive
classifier even in the domain of informally written text. We provide a detailed
description of microTC along with an extensive experimental comparison with
relevant state-of-the-art methods. microTC was compared on 30 different
datasets. Regarding accuracy, microTC obtained the best performance on 20
datasets and achieved competitive results on the remaining 10. The compared
datasets include several problems
like topic and polarity classification, spam detection, user profiling and
authorship attribution. Furthermore, it is important to note that our approach
allows the use of this technology even without knowledge of machine learning
and natural language processing.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,607 | Abdominal aortic aneurysms and endovascular sealing: deformation and dynamic response | Endovascular sealing is a new technique for the repair of abdominal aortic
aneurysms. Commercially available in Europe since 2013, it takes a
revolutionary approach to aneurysm repair through minimally invasive
techniques. Although aneurysm sealing may be thought of as more stable than
conventional endovascular stent graft repairs, post-implantation movement of
the endoprosthesis has been described, potentially leading to late
complications. This paper presents, for the first time, a model that explains
the nature of the forces, in static and dynamic regimes, acting on sealed abdominal
aortic aneurysms, with reference to real case studies. It is shown that
elastic deformation of the aorta and of the endoprosthesis induced by static
forces and vibrations during daily activities can potentially promote undesired
movements of the endovascular sealing structure.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,608 | Affinity Scheduling and the Applications on Data Center Scheduling with Data Locality | The MapReduce framework is the de facto standard in Hadoop. Considering data
locality in data centers, the load balancing problem of map tasks is a special
case of the affinity scheduling problem. There is a huge body of work on
affinity scheduling proposing heuristic algorithms, such as Delay Scheduling
and Quincy, which try to increase data locality in data centers. However, not enough attention
has been put on theoretical guarantees on throughput and delay optimality of
such algorithms. In this work, we present and compare different algorithms and
discuss their shortcomings and strengths. To the best of our knowledge, most
data centers use static load balancing algorithms, which are inefficient and
result in wasted resources and unnecessary delays for users.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,609 | Multivariate Regression with Gross Errors on Manifold-valued Data | We consider the topic of multivariate regression on manifold-valued output,
that is, for a multivariate observation, its output response lies on a
manifold. Moreover, we propose a new regression model to deal with the presence
of grossly corrupted manifold-valued responses, a bottleneck issue commonly
encountered in practical scenarios. Our model first takes a correction step on
the grossly corrupted responses via geodesic curves on the manifold, and then
performs multivariate linear regression on the corrected data. This results in
a nonconvex and nonsmooth optimization problem on manifolds. To solve it, we
propose a dedicated approach named PALMR, by utilizing and extending the
proximal alternating linearized minimization techniques. Theoretically, we
investigate its convergence property, where it is shown to converge to a
critical point under mild conditions. Empirically, we test our model on both
synthetic and real diffusion tensor imaging data, and show that our model
outperforms other multivariate regression models when manifold-valued responses
contain gross errors, and is effective in identifying gross errors.
| 1 | 0 | 1 | 1 | 0 | 0 |
17,610 | Computing an Approximately Optimal Agreeable Set of Items | We study the problem of finding a small subset of items that is
\emph{agreeable} to all agents, meaning that all agents value the subset at
least as much as its complement. Previous work has shown worst-case bounds,
over all instances with a given number of agents and items, on the number of
items that may need to be included in such a subset. Our goal in this paper is
to efficiently compute an agreeable subset whose size approximates the size of
the smallest agreeable subset for a given instance. We consider three
well-known models for representing the preferences of the agents: ordinal
preferences on single items, the value oracle model, and additive utilities. In
each of these models, we establish virtually tight bounds on the approximation
ratio that can be obtained by algorithms running in polynomial time.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,611 | 3D Sketching using Multi-View Deep Volumetric Prediction | Sketch-based modeling strives to bring the ease and immediacy of drawing to
the 3D world. However, while drawings are easy for humans to create, they are
very challenging for computers to interpret due to their sparsity and
ambiguity. We propose a data-driven approach that tackles this challenge by
learning to reconstruct 3D shapes from one or more drawings. At the core of our
approach is a deep convolutional neural network (CNN) that predicts occupancy
of a voxel grid from a line drawing. This CNN provides us with an initial 3D
reconstruction as soon as the user completes a single drawing of the desired
shape. We complement this single-view network with an updater CNN that refines
an existing prediction given a new drawing of the shape created from a novel
viewpoint. A key advantage of our approach is that we can apply the updater
iteratively to fuse information from an arbitrary number of viewpoints, without
requiring explicit stroke correspondences between the drawings. We train both
CNNs by rendering synthetic contour drawings from hand-modeled shape
collections as well as from procedurally-generated abstract shapes. Finally, we
integrate our CNNs in a minimal modeling interface that allows users to
seamlessly draw an object, rotate it to see its 3D reconstruction, and refine
it by re-drawing from another vantage point using the 3D reconstruction as
guidance. The main strengths of our approach are its robustness to freehand
bitmap drawings, its ability to adapt to different object categories, and the
continuum it offers between single-view and multi-view sketch-based modeling.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,612 | Inductive Pairwise Ranking: Going Beyond the n log(n) Barrier | We study the problem of ranking a set of items from nonactively chosen
pairwise preferences, where each item has feature information associated with it. We
propose and characterize a very broad class of preference matrices giving rise
to the Feature Low Rank (FLR) model, which subsumes several models ranging from
the classic Bradley-Terry-Luce (BTL) (Bradley and Terry 1952) and Thurstone
(Thurstone 1927) models to the recently proposed blade-chest (Chen and Joachims
2016) and generic low-rank preference (Rajkumar and Agarwal 2016) models. We
use the technique of matrix completion in the presence of side information to
develop the Inductive Pairwise Ranking (IPR) algorithm that provably learns a
good ranking under the FLR model, in a sample-efficient manner. In practice,
through systematic synthetic simulations, we confirm our theoretical findings
regarding improvements in the sample complexity due to the use of feature
information. Moreover, on popular real-world preference learning datasets, with
as little as 10% sampling of the pairwise comparisons, our method recovers a good
ranking.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,613 | SAGA and Restricted Strong Convexity | SAGA is a fast incremental gradient method for the finite sum problem, and its
effectiveness has been tested on a vast range of applications. In this paper, we
analyze SAGA on a class of non-strongly convex and non-convex statistical
problems such as Lasso, group Lasso, logistic regression with $\ell_1$
regularization, linear regression with SCAD regularization, and Correct Lasso.
We prove that SAGA enjoys a linear convergence rate up to the statistical
estimation accuracy, under the assumption of restricted strong convexity (RSC).
This significantly extends the applicability of SAGA in convex and non-convex
optimization.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,614 | Characterization of Traps at Nitrided SiO$_2$/SiC Interfaces near the Conduction Band Edge by using Hall Effect Measurements | The effects of nitridation on the density of traps at SiO$_2$/SiC interfaces
near the conduction band edge were qualitatively examined by a simple, newly
developed characterization method that utilizes Hall effect measurements and
split capacitance-voltage measurements. The results showed a significant
reduction in the density of interface traps near the conduction band edge by
nitridation, as well as the high density of interface traps that was not
eliminated by nitridation.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,615 | Response theory of the ergodic many-body delocalized phase: Keldysh Finkel'stein sigma models and the 10-fold way | We derive the finite temperature Keldysh response theory for interacting
fermions in the presence of quenched disorder, as applicable to any of the 10
Altland-Zirnbauer classes in an Anderson delocalized phase with at least a U(1)
continuous symmetry. In this formulation of the interacting Finkel'stein
nonlinear sigma model, the statistics of one-body wave functions are encoded by
the constrained matrix field, while physical correlations follow from the
hydrodynamic density or spin response field, which decouples the interactions.
Integrating out the matrix field first, we obtain weak (anti)localization and
Altshuler-Aronov quantum conductance corrections from the hydrodynamic response
function. This procedure automatically incorporates the correct infrared
physics, and in particular gives the Altshuler-Aronov-Khmelnitsky (AAK)
equations for dephasing of weak (anti)localization due to electron-electron
collisions. We explicate the method by deriving known quantum corrections in
two dimensions for the symplectic metal class AII, as well as the spin-SU(2)
invariant superconductor classes C and CI. We show that conductance corrections
due to the special modes at zero energy in nonstandard classes are
automatically cut off by temperature, as previously expected, while the
Wigner-Dyson class Cooperon modes that persist to all energies are cut by
dephasing. We also show that for short-ranged interactions, the standard
self-consistent solution for the dephasing rate is equivalent to a diagrammatic
summation via the self-consistent Born approximation. This should be compared
to the AAK solution for long-ranged Coulomb interactions, which exploits the
Markovian noise correlations induced by thermal fluctuations of the
electromagnetic field. We discuss prospects for exploring the many-body
localization transition from the ergodic side as a dephasing catastrophe in
short-range interacting models.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,616 | Classification via Tensor Decompositions of Echo State Networks | This work introduces a tensor-based method to perform supervised
classification on spatiotemporal data processed in an echo state network.
Typically when performing supervised classification tasks on data processed in
an echo state network, the entire collection of hidden layer node states from
the training dataset is shaped into a matrix, allowing one to use standard
linear algebra techniques to train the output layer. However, the collection of
hidden layer states is multidimensional in nature, and representing it as a
matrix may lead to undesirable numerical conditions or loss of spatial and
temporal correlations in the data.
This work proposes a tensor-based supervised classification method on echo
state network data that preserves and exploits the multidimensional nature of
the hidden layer states. The method, which is based on orthogonal Tucker
decompositions of tensors, is compared with the standard linear output weight
approach in several numerical experiments on both synthetic and natural data.
The results show that the tensor-based approach tends to outperform the
standard approach in terms of classification accuracy.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,617 | Inference on Breakdown Frontiers | Given a set of baseline assumptions, a breakdown frontier is the boundary
between the set of assumptions which lead to a specific conclusion and those
which do not. In a potential outcomes model with a binary treatment, we
consider two conclusions: First, that ATE is at least a specific value (e.g.,
nonnegative) and second that the proportion of units who benefit from treatment
is at least a specific value (e.g., at least 50\%). For these conclusions, we
derive the breakdown frontier for two kinds of assumptions: one which indexes
relaxations of the baseline random assignment of treatment assumption, and one
which indexes relaxations of the baseline rank invariance assumption. These
classes of assumptions nest both the point identifying assumptions of random
assignment and rank invariance and the opposite end of no constraints on
treatment selection or the dependence structure between potential outcomes.
This frontier provides a quantitative measure of robustness of conclusions to
relaxations of the baseline point identifying assumptions. We derive
$\sqrt{N}$-consistent sample analog estimators for these frontiers. We then
provide two asymptotically valid bootstrap procedures for constructing lower
uniform confidence bands for the breakdown frontier. As a measure of
robustness, estimated breakdown frontiers and their corresponding confidence
bands can be presented alongside traditional point estimates and confidence
intervals obtained under point identifying assumptions. We illustrate this
approach in an empirical application to the effect of child soldiering on
wages. We find that sufficiently weak conclusions are robust to simultaneous
failures of rank invariance and random assignment, while some stronger
conclusions are fairly robust to failures of rank invariance but not
necessarily to relaxations of random assignment.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,618 | Sequential Detection of Three-Dimensional Signals under Dependent Noise | We study detection methods for multivariable signals under dependent noise.
The main focus is on three-dimensional signals, i.e. on signals in the
space-time domain. Examples of such signals abound: they include
geographic and climatic data, as well as image data, observed over a
fixed time horizon. We assume that the signal is observed as a finite block of
noisy samples whereby we are interested in detecting changes from a given
reference signal. Our detector statistic is based on a sequential partial sum
process, related to classical signal decomposition and reconstruction
approaches applied to the sampled signal. We show that this detector process
converges weakly under the no change null hypothesis that the signal coincides
with the reference signal, provided that the spatial-temporal partial sum
process associated with the random field of the noise terms disturbing the
sampled signal converges to a Brownian motion. More generally, we also
establish the limiting distribution under a wide class of local alternatives
that allows for smooth as well as discontinuous changes. Our results also cover
extensions to the case that the reference signal is unknown. We conclude with
an extensive simulation study of the detection algorithm.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,619 | T-Branes at the Limits of Geometry | Singular limits of 6D F-theory compactifications are often captured by
T-branes, namely a non-abelian configuration of intersecting 7-branes with a
nilpotent matrix of normal deformations. The long distance approximation of
such 7-branes is a Hitchin-like system in which simple and irregular poles
emerge at marked points of the geometry. When multiple matter fields localize
at the same point in the geometry, the associated Higgs field can exhibit
irregular behavior, namely poles of order greater than one. This provides a
geometric mechanism to engineer wild Higgs bundles. Physical constraints such
as anomaly cancellation and consistent coupling to gravity also limit the order
of such poles. Using this geometric formulation, we unify seemingly different
wild Hitchin systems in a single framework in which orders of poles become
adjustable parameters dictated by tuning gauge singlet moduli of the F-theory
model.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,620 | Space-time crystal and space-time group | Crystal structures and the Bloch theorem play a fundamental role in condensed
matter physics. We extend the static crystal to the dynamic "space-time"
crystal characterized by the general intertwined space-time periodicities in
$D+1$ dimensions, which include both the static crystal and the Floquet crystal
as special cases. A new group structure dubbed "space-time" group is
constructed to describe the discrete symmetries of a space-time crystal. Compared
to space and magnetic groups, the space-time group is augmented by "time-screw"
rotations and "time-glide" reflections involving fractional translations along
the time direction. A complete classification of the 13 space-time groups in
1+1D is performed. The Kramers-type degeneracy can arise from the glide
time-reversal symmetry without the half-integer spinor structure, which
constrains the winding number patterns of spectral dispersions. In 2+1D,
non-symmorphic space-time symmetries enforce spectral degeneracies, leading to
protected Floquet semi-metal states. Our work provides a general framework for
further studying topological properties of the $D+1$ dimensional space-time
crystal.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,621 | On the Performance of Zero-Forcing Processing in Multi-Way Massive MIMO Relay Networks | We consider a multi-way massive multiple-input multiple-output relay network
with zero-forcing processing at the relay. By taking into account the
time-division duplex protocol with channel estimation, we derive an analytical
approximation of the spectral efficiency. This approximation is very tight and
simple, which enables us to analyze the system performance as well as to
compare the spectral efficiency with zero-forcing and maximum-ratio processing.
Our results show that by using a very large number of relay antennas and with
the zero-forcing technique, we can simultaneously serve many active users in
the same time-frequency resource, each with high spectral efficiency.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,622 | Motivic rational homotopy type | In this paper we introduce and study motives for rational homotopy types.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,623 | Preconditioner-free Wiener filtering with a dense noise matrix | This work extends the Elsner & Wandelt (2013) iterative method for efficient,
preconditioner-free Wiener filtering to cases in which the noise covariance
matrix is dense, but can be decomposed into a sum whose parts are sparse in
convenient bases. The new method, which uses multiple messenger fields,
reproduces Wiener filter solutions for test problems, and we apply it to a case
beyond the reach of the Elsner & Wandelt (2013) method. We compute the Wiener
filter solution for a simulated Cosmic Microwave Background map that contains
spatially-varying, uncorrelated noise, isotropic $1/f$ noise, and large-scale
horizontal stripes (like those caused by atmospheric noise). We discuss
simple extensions that can filter contaminated modes or inverse-noise filter
the data. These techniques help to address complications in the noise
properties of maps from current and future generations of ground-based
Microwave Background experiments, like Advanced ACTPol, Simons Observatory, and
CMB-S4.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,624 | Order-unity argument for structure-generated "extra" expansion | Self-consistent treatment of cosmological structure formation and expansion
within the context of classical general relativity may lead to "extra"
expansion above that expected in a structureless universe. We argue that in
comparison to an early-epoch, extrapolated Einstein-de Sitter model, about
10-15% "extra" expansion is sufficient at the present to render superfluous the
"dark energy" 68% contribution to the energy density budget, and that this is
observationally realistic.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,625 | The least unramified prime which does not split completely | Let $K/F$ be a finite extension of number fields of degree $n \geq 2$. We
establish effective field-uniform unconditional upper bounds for the least norm
of a prime ideal of $F$ which is degree 1 over $\mathbb{Q}$ and does not ramify
or split completely in $K$. We improve upon the previous best known general
estimates due to X. Li when $F = \mathbb{Q}$ and Murty-Patankar when $K/F$ is
Galois. Our bounds are the first when $K/F$ is not assumed to be Galois and $F
\neq \mathbb{Q}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,626 | Crosscorrelation of Rudin-Shapiro-Like Polynomials | We consider the class of Rudin-Shapiro-like polynomials, whose $L^4$ norms on
the complex unit circle were studied by Borwein and Mossinghoff. The polynomial
$f(z)=f_0+f_1 z + \cdots + f_d z^d$ is identified with the sequence
$(f_0,f_1,\ldots,f_d)$ of its coefficients. From the $L^4$ norm of a
polynomial, one can easily calculate the autocorrelation merit factor of its
associated sequence, and conversely. In this paper, we study the
crosscorrelation properties of pairs of sequences associated to
Rudin-Shapiro-like polynomials. We find an explicit formula for the
crosscorrelation merit factor. A computer search is then used to find pairs of
Rudin-Shapiro-like polynomials whose autocorrelation and crosscorrelation merit
factors are simultaneously high. Pursley and Sarwate proved a bound that limits
how good this combined autocorrelation and crosscorrelation performance can be.
We find infinite families of polynomials whose performance approaches quite
close to this fundamental limit.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,627 | An Application of Rubi: Series Expansion of the Quark Mass Renormalization Group Equation | We highlight how Rule-based Integration (Rubi) is an enhanced method of
symbolic integration which allows for the integration of many difficult
integrals not accomplished by other computer algebra systems. Using Rubi, many
integration techniques become tractable. Integrals are approached using
step-wise simplification, hence distilling an integral (if the solution is
unknown) into composite integrals which highlight yet undiscovered integration
rules. The motivating example we use is the derivation of the updated series
expansion of the quark mass renormalization group equation (RGE) to five-loop
order. This series provides the relation between a light quark mass in the
modified minimal subtraction ($\overline{\text{MS}}$) scheme defined at some
given scale, e.g. at the tau-lepton mass scale, and another chosen energy
scale, $s$. This relation explicitly depicts the renormalization scheme
dependence of the running quark mass on the scale parameter, $s$, and is
important in accurately determining a light quark mass at a chosen scale. The
five-loop QCD $\beta(a_s)$ and $\gamma(a_s)$ functions are used in this
determination.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,628 | On the Performance of Wireless Powered Communication With Non-linear Energy Harvesting | In this paper, we analyze the performance of a time-slotted multi-antenna
wireless powered communication (WPC) system, where a wireless device first
harvests radio frequency (RF) energy from a power station (PS) in the downlink
to facilitate information transfer to an information receiving station (IRS) in
the uplink. The main goal of this paper is to provide insights and guidelines
for the design of practical WPC systems. To this end, we adopt a recently
proposed parametric non-linear RF energy harvesting (EH) model, which has been
shown to accurately model the end-to-end non-linearity of practical RF EH
circuits. In order to enhance the RF power transfer efficiency, maximum ratio
transmission is adopted at the PS to focus the energy signals on the wireless
device. Furthermore, at the IRS, maximum ratio combining is used. We analyze
the outage probability and the average throughput of information transfer,
assuming Nakagami-$m$ fading uplink and downlink channels. Moreover, we study
the system performance as a function of the number of PS transmit antennas, the
number of IRS receive antennas, the transmit power of the PS, the fading
severity, the transmission rate of the wireless device, and the EH time
duration. In addition, we obtain a fixed point equation for the optimal
transmission rate and the optimal EH time duration that maximize the asymptotic
throughput for high PS transmit powers. All analytical results are corroborated
by simulations.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,629 | Realizing polarization conversion and unidirectional transmission by using a uniaxial crystal plate | We show that polarization states of electromagnetic waves can be manipulated
easily using a single thin uniaxial crystal plate. By performing a rotational
transformation of the coordinates and controlling the thickness of the plate,
we can achieve a complete polarization conversion between TE wave and TM wave
in a spectral band. We show that the off-diagonal element of the permittivity
is the key for polarization conversion. Our analysis can explain clearly the
results found in experiments with metamaterials. Finally, we propose a simple
device to realize unidirectional transmission based on polarization conversion
and excitation of surface plasmon polaritons.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,630 | When Should You Adjust Standard Errors for Clustering? | In empirical work in economics it is common to report standard errors that
account for clustering of units. Typically, the motivation given for the
clustering adjustments is that unobserved components in outcomes for units
within clusters are correlated. However, because correlation may occur across
more than one dimension, this motivation makes it difficult to justify why
researchers use clustering in some dimensions, such as geographic, but not
others, such as age cohorts or gender. It also makes it difficult to explain
why one should not cluster with data from a randomized experiment. In this
paper, we argue that clustering is in essence a design problem, either a
sampling design or an experimental design issue. It is a sampling design issue
if sampling follows a two-stage process where, in the first stage, a subset of
clusters is sampled randomly from a population of clusters, and in the second
stage, units are sampled randomly from the sampled clusters. In this
case the clustering adjustment is justified by the fact that there are clusters
in the population that we do not see in the sample. Clustering is an
experimental design issue if the assignment is correlated within the clusters.
We take the view that this second perspective best fits the typical setting in
economics where clustering adjustments are used. This perspective allows us to
shed new light on three questions: (i) when should one adjust the standard
errors for clustering, (ii) when is the conventional adjustment for clustering
appropriate, and (iii) when does the conventional adjustment of the standard
errors matter.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,631 | Approximate homomorphisms on lattices | We prove two results concerning an Ulam-type stability problem for
homomorphisms between lattices. One of them involves estimates by quite general
error functions; the other deals with approximate (join) homomorphisms in terms
of certain systems of lattice neighborhoods. As a corollary, we obtain a
stability result for approximately monotone functions.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,632 | Learning Latent Events from Network Message Logs: A Decomposition Based Approach | In this communication, we describe a novel technique for event mining using a
decomposition based approach that combines non-parametric change-point
detection with LDA. We prove theoretical guarantees about sample-complexity and
consistency of the approach. In a companion paper, we will perform a thorough
evaluation of our approach with detailed experiments.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,633 | Generating retinal flow maps from structural optical coherence tomography with artificial intelligence | Despite significant advances in artificial intelligence (AI) for computer
vision, its application in medical imaging has been limited by the burden and
limits of expert-generated labels. We used images from optical coherence
tomography angiography (OCTA), a relatively new imaging modality that measures
perfusion of the retinal vasculature, to train an AI algorithm to generate
vasculature maps from standard structural optical coherence tomography (OCT)
images of the same retinae, both exceeding the ability and bypassing the need
for expert labeling. Deep learning was able to infer perfusion of
microvasculature from structural OCT images with similar fidelity to OCTA and
significantly better than expert clinicians (P < 0.00001). OCTA suffers from
the need for specialized hardware, laborious acquisition protocols, and motion
artifacts, whereas our model works directly on standard OCT images, which are
ubiquitous and quick to obtain, and thus unlocks large volumes of previously
collected standard OCT data in both existing clinical trials and clinical
practice. This finding demonstrates a novel application of AI to medical
imaging, whereby subtle regularities between different modalities used to
image the same body part allow AI to generate detailed and accurate inferences
of tissue function from structural imaging.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,634 | Varieties with Ample Tangent Sheaves | This paper generalises Mori's famous theorem about "Projective manifolds with
ample tangent bundles" to normal projective varieties in the following way:
A normal projective variety over $\mathbb{C}$ with ample tangent sheaf is
isomorphic to the complex projective space.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,635 | Water sub-diffusion in membranes for fuel cells | We investigate the dynamics of water confined in soft ionic nano-assemblies,
an issue critical for a general understanding of the multi-scale
structure-function interplay in advanced materials. We focus in particular on
hydrated perfluoro-sulfonic acid compounds employed as electrolytes in fuel
cells. These materials form phase-separated morphologies that show outstanding
proton-conducting properties, directly related to the state and dynamics of the
absorbed water. We have quantified water motion and ion transport by combining
Quasi Elastic Neutron Scattering, Pulsed Field Gradient Nuclear Magnetic
Resonance, and Molecular Dynamics computer simulation. Effective water and ion
diffusion coefficients have been determined together with their variation upon
hydration at the relevant atomic, nanoscopic and macroscopic scales, providing
a complete picture of transport. We demonstrate that confinement at the
nanoscale and direct interaction with the charged interfaces produce anomalous
sub-diffusion, due to a heterogeneous space-dependent dynamics within the ionic
nanochannels. This is irrespective of the details of the chemistry of the
hydrophobic confining matrix, confirming the statistical significance of our
conclusions. Our findings indicate interesting connections and possibilities
of cross-fertilization with other domains, including biophysics.
They also establish fruitful correspondences with advanced topics in
statistical mechanics, resulting in new possibilities for the analysis of
Neutron scattering data.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,636 | Global Strong Solution of a 2D coupled Parabolic-Hyperbolic Magnetohydrodynamic System | The main objective of this paper is to study the global strong solution of
the parabolic-hyperbolic incompressible magnetohydrodynamic (MHD) model in two
dimensional space. Based on the estimates of Agmon, Douglis and Nirenberg for
the stationary Stokes equation and Solonnikov's theorem on
$L^p$-$L^q$-estimates for the evolution Stokes equation, it is shown that the
mixed-type MHD equations admit a global strong solution.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,637 | Subcritical thermal convection of liquid metals in a rapidly rotating sphere | Planetary cores consist of liquid metals (low Prandtl number $Pr$) that
convect as the core cools. Here we study nonlinear convection in a rotating
(low Ekman number $Ek$) planetary core using a fully 3D direct numerical
simulation. Near the critical thermal forcing (Rayleigh number $Ra$),
convection onsets as thermal Rossby waves, but as $Ra$ increases, this
state is superseded by one dominated by advection. At moderate rotation, these
states (here called the weak branch and strong branch, respectively) are
smoothly connected. As the planetary core rotates faster, the smooth transition
is replaced by hysteresis cycles and subcriticality until the weak branch
disappears entirely and the strong branch onsets in a turbulent state at $Ek <
10^{-6}$. Here the strong branch persists even as the thermal forcing drops
well below the linear onset of convection ($Ra=0.7Ra_{crit}$ in this study). We
highlight the importance of the Reynolds stress, which is required for
convection to subsist below the linear onset. In addition, the Péclet number
is consistently above 10 in the strong branch. We further note the presence of
a strong zonal flow that is nonetheless unimportant to the convective state.
Our study suggests that, in the asymptotic regime of rapid rotation relevant
for planetary interiors, thermal convection of liquid metals in a sphere onsets
through a subcritical bifurcation.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,638 | Decision-making processes underlying pedestrian behaviours at signalised crossings: Part 2. Do pedestrians show cultural herding behaviour ? | Followership is generally defined as a strategy that evolved to solve social
coordination problems, and particularly those involved in group movement.
Followership behaviour is particularly interesting in the context of
road-crossing behaviour because it involves other principles such as
risk-taking and evaluating the value of social information. This study sought
to identify the cognitive mechanisms underlying decision-making by pedestrians
who follow another person across the road at the green or at the red light in
two different countries (France and Japan). We used agent-based modelling to
simulate the road-crossing behaviours of pedestrians. This study showed that
modelling is a reliable means to test different hypotheses and find the exact
processes underlying decision-making when crossing the road. We found that two
processes suffice to simulate pedestrian behaviours. Importantly, the study
revealed differences between the two nationalities and between sexes in the
decision to follow and cross at the green and at the red light. Japanese
pedestrians are particularly attentive to the number of already departed
pedestrians and the number of waiting pedestrians at the red light, whilst
their French counterparts only consider the number of pedestrians that have
already stepped off the kerb, thus showing the strong conformism of Japanese
people. Finally, the simulations turn out to closely match the observations,
not only for the departure latencies but also for the number of crossing
pedestrians and the rates of illegal crossings. The conclusion suggests new
solutions for safety in transportation research.
| 0 | 0 | 0 | 0 | 1 | 0 |
17,639 | Driven by Excess? Climatic Implications of New Global Mapping of Near-Surface Water-Equivalent Hydrogen on Mars | We present improved Mars Odyssey Neutron Spectrometer (MONS) maps of
near-surface Water-Equivalent Hydrogen (WEH) on Mars that have intriguing
implications for the global distribution of "excess" ice, which occurs when the
mass fraction of water ice exceeds the threshold amount needed to saturate the
pore volume in normal soils. We have refined the crossover technique of Feldman
et al. (2011) by using spatial deconvolution and Gaussian weighting to create
the first globally self-consistent map of WEH. At low latitudes, our new maps
indicate that WEH exceeds 15% in several near-equatorial regions, such as
Arabia Terra, which has important implications for the types of hydrated
minerals present at low latitudes. At high latitudes, we demonstrate that the
disparate MONS and Phoenix Robotic Arm (RA) observations of near surface WEH
can be reconciled by a three-layer model incorporating dry soil over fully
saturated pore ice over pure excess ice: such a three-layer model can also
potentially explain the strong anticorrelation of subsurface ice content and
ice table depth observed at high latitudes. At moderate latitudes, we show that
the distribution of recently formed impact craters is also consistent with our
latest MONS results, as both the shallowest ice-exposing crater and deepest
non-ice-exposing crater at each impact site are in good agreement with our
predictions of near-surface WEH. Overall, we find that our new mapping is
consistent with the widespread presence at mid-to-high Martian latitudes of
recently deposited shallow excess ice reservoirs that are not yet in
equilibrium with the atmosphere.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,640 | On distribution of points with conjugate algebraic integer coordinates close to planar curves | Let $\varphi:\mathbb{R}\rightarrow \mathbb{R}$ be a continuously
differentiable function on an interval $J\subset\mathbb{R}$ and let
$\boldsymbol{\alpha}=(\alpha_1,\alpha_2)$ be a point with algebraic conjugate
integer coordinates of degree $\leq n$ and of height $\leq Q$. Denote by
$\tilde{M}^n_\varphi(Q,\gamma, J)$ the set of points $\boldsymbol{\alpha}$ such
that $|\varphi(\alpha_1)-\alpha_2|\leq c_1 Q^{-\gamma}$. In this paper we show
that for a real $0<\gamma<1$ and any sufficiently large $Q$ there exist
positive values $c_2<c_3$, which are independent of $Q$, such that $c_2\cdot
Q^{n-\gamma} < \#\tilde{M}^n_\varphi(Q,\gamma, J) < c_3\cdot Q^{n-\gamma}$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,641 | A Critical Investigation of Deep Reinforcement Learning for Navigation | The navigation problem is classically approached in two steps: an exploration
step, where map-information about the environment is gathered; and an
exploitation step, where this information is used to navigate efficiently. Deep
reinforcement learning (DRL) algorithms, alternatively, approach the problem of
navigation in an end-to-end fashion. Inspired by the classical approach, we ask
whether DRL algorithms are able to inherently explore, gather and exploit
map-information over the course of navigation. We build upon the work of
Mirowski et al. [2017] and introduce a systematic suite of experiments that
vary three
parameters: the agent's starting location, the agent's target location, and the
maze structure. We choose evaluation metrics that explicitly measure the
algorithm's ability to gather and exploit map-information. Our experiments show
that when trained and tested on the same maps, the algorithm successfully
gathers and exploits map-information. However, when trained and tested on
different sets of maps, the algorithm fails to transfer the ability to gather
and exploit map-information to unseen maps. Furthermore, we find that when the
goal location is randomized and the map is kept static, the algorithm is able
to gather and exploit map-information but the exploitation is far from optimal.
We open-source our experimental suite in the hopes that it serves as a
framework for the comparison of future algorithms and leads to the discovery of
robust alternatives to classical navigation methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,642 | Performance Evaluation of 3D Correspondence Grouping Algorithms | This paper presents a thorough evaluation of several widely-used 3D
correspondence grouping algorithms, motivated by their significance in vision
tasks that rely on correct feature correspondences. A good correspondence
grouping algorithm should retrieve as many inliers as possible from the
initial feature matches, yielding both high precision and high recall. To this
end, we deploy experiments on three benchmarks, respectively addressing shape
retrieval, 3D object recognition and point cloud registration scenarios. The
variety in application context brings a rich category of nuisances including
noise, varying point densities, clutter, occlusion and partial overlaps. It
also results in different inlier ratios and correspondence distributions,
enabling a comprehensive evaluation. Based on the quantitative outcomes, we
summarize the merits and demerits of the evaluated algorithms from both
performance and efficiency perspectives.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,643 | DeepPicar: A Low-cost Deep Neural Network-based Autonomous Car | We present DeepPicar, a low-cost deep neural network based autonomous car
platform. DeepPicar is a small scale replication of a real self-driving car
called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN),
which takes images from a front-facing camera as input and produces car
steering angles as output. DeepPicar uses the same network architecture---9
layers, 27 million connections and 250K parameters---and can drive itself in
real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using
DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end
deep learning based real-time control of autonomous vehicles. We also
systematically compare other contemporary embedded computing platforms using
the DeepPicar's CNN-based real-time control workload. We find that all tested
platforms, including the Pi 3, are capable of supporting the CNN-based
real-time control, from 20 Hz up to 100 Hz, depending on hardware platform.
However, we find that shared resource contention remains an important issue
that must be considered in applying CNN models on shared memory based embedded
computing platforms; we observe up to 11.6X execution time increase in the CNN
based control loop due to shared resource contention. To protect the CNN
workload, we also evaluate state-of-the-art cache partitioning and memory
bandwidth throttling techniques on the Pi 3. We find that cache partitioning is
ineffective, while memory bandwidth throttling is an effective solution.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,644 | Statistics of turbulence in the energy-containing range of Taylor-Couette compared to canonical wall-bounded flows | Considering structure functions of the streamwise velocity component in a
framework akin to the extended self-similarity hypothesis (ESS), de Silva
\textit{et al.} (\textit{J. Fluid Mech.}, vol. 823,2017, pp. 498-510) observed
that remarkably the \textit{large-scale} (energy-containing range) statistics
in canonical wall-bounded flows exhibit universal behaviour. In the present
study, we extend this universality, which was seen to encompass also flows at
moderate Reynolds number, to Taylor-Couette flow. In doing so, we find that
also the transversal structure function of the spanwise velocity component
exhibits the same universal behaviour across all flow types considered. We
further demonstrate that these observations are consistent with predictions
developed based on an attached-eddy hypothesis. These considerations also yield
a possible explanation for the efficacy of the ESS framework by showing that it
relaxes the self-similarity assumption for the attached eddy contributions. By
taking the effect of streamwise alignment into account, the attached eddy model
predicts different behaviour for structure functions in the streamwise and in
the spanwise directions and that this effect cancels in the ESS-framework ---
both consistent with the data. Moreover, it is demonstrated here that also the
additive constants, which were previously believed to be flow dependent, are
indeed universal at least in turbulent boundary layers and pipe flow where
high-Reynolds number data are currently available.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,645 | Physics-Based Modeling of TID Induced Global Static Leakage in Different CMOS Circuits | Compact modeling of inter-device radiation-induced leakage underneath the
gateless thick STI oxide is presented and validated taking into account CMOS
technology and hardness parameters, dose-rate and annealing effects, and
dependence on electric modes under irradiation. It is shown that the proposed
approach can be applied to describe dose-dependent static leakage currents in
complex FPGA circuits.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,646 | A multiplier inclusion theorem on product domains | In this note it is shown that the class of all multipliers from the
$d$-parameter Hardy space $H^1_{\mathrm{prod}} (\mathbb{T}^d)$ to $L^2
(\mathbb{T}^d)$ is properly contained in the class of all multipliers from $L
\log^{d/2} L (\mathbb{T}^d)$ to $L^2(\mathbb{T}^d)$.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,647 | Using Battery Storage for Peak Shaving and Frequency Regulation: Joint Optimization for Superlinear Gains | We consider using a battery storage system simultaneously for peak shaving
and frequency regulation through a joint optimization framework which captures
battery degradation, operational constraints and uncertainties in customer load
and regulation signals. Under this framework, using real data we show the
electricity bill of users can be reduced by up to 15\%. Furthermore, we
demonstrate that the saving from joint optimization is often larger than the
sum of the optimal savings when the battery is used for the two individual
applications. A simple threshold real-time algorithm is proposed and achieves
this super-linear gain. Compared to prior works that focused on using battery
storage systems for single applications, our results suggest that batteries can
achieve much larger economic benefits than previously thought if they jointly
provide multiple services.
| 1 | 0 | 1 | 0 | 0 | 0 |
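An illustrative aside on row 17,647: the abstract mentions a simple threshold real-time algorithm for the joint problem, but does not reproduce it. The sketch below shows only the generic peak-shaving half of the story, with a threshold rule and battery energy/power limits; the rule, the parameters, and all numbers are our own invention, not the paper's algorithm.

```python
# Illustrative sketch only: threshold-based peak shaving. The battery
# discharges when demand exceeds a threshold and recharges below it, subject
# to energy and power limits. Every parameter below is invented.

def peak_shave(load, threshold, capacity, max_power, soc=0.0):
    """Return the net load seen by the grid after battery peak shaving.

    soc is the battery state of charge (energy units; one step = one unit)."""
    net = []
    for d in load:
        if d > threshold:
            p = min(d - threshold, max_power, soc)              # discharge to clip
        else:
            p = -min(threshold - d, max_power, capacity - soc)  # recharge
        soc -= p
        net.append(d - p)   # p > 0 while discharging, p < 0 while charging
    return net

demand = [3, 9, 10, 4, 2]
shaved = peak_shave(demand, threshold=6, capacity=5, max_power=3, soc=5)
```

Note that the 10-unit peak is only clipped to 8 because `max_power` caps the discharge rate; sizing the battery against both the energy and power constraints is exactly what a joint optimization over services has to trade off.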
17,648 | Information Bottleneck in Control Tasks with Recurrent Spiking Neural Networks | The nervous system encodes continuous information from the environment in the
form of discrete spikes, and then decodes these to produce smooth motor
actions. Understanding how spikes integrate, represent, and process information
to produce behavior is one of the greatest challenges in neuroscience.
Information theory has the potential to help us address this challenge.
Informational analyses of deep and feed-forward artificial neural networks
solving static input-output tasks have led to the proposal of the
\emph{Information Bottleneck} principle, which states that deeper layers encode
more relevant yet minimal information about the inputs. Such an analysis of
networks that are recurrent, spiking, and perform control tasks remains
relatively unexplored. Here, we present results from a Mutual Information
analysis of a
recurrent spiking neural network that was evolved to perform the classic
pole-balancing task. Our results show that these networks deviate from the
\emph{Information Bottleneck} principle prescribed for feed-forward networks.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,649 | Physical Properties of Sub-galactic Clumps at 0.5 $\leq z \leq$ 1.5 in the UVUDF | We present an investigation of clumpy galaxies in the Hubble Ultra Deep Field
at 0.5 $\leq z \leq$ 1.5 in the rest-frame far-ultraviolet (FUV) using HST WFC3
broadband imaging in F225W, F275W, and F336W. An analysis of 1,404 galaxies
yields 209 galaxies that host 403 kpc-scale clumps. These host galaxies appear
to be typical star-forming galaxies, with an average of 2 clumps per galaxy and
reaching a maximum of 8 clumps. We measure the photometry of the clumps, and
determine the mass, age, and star formation rates (SFR) utilizing the
SED-fitting code FAST. We find that clumps make an average contribution of 19%
to the total rest-frame FUV flux of their host galaxy. Individually, clumps
contribute a median of 5% to the host galaxy SFR and an average of $\sim$4% to
the host galaxy mass, with total clump contributions to the host galaxy stellar
mass ranging widely from less than 1% up to 93%. Clumps in the outskirts of
galaxies are typically younger, with higher star formation rates, than clumps
in the inner regions. The results are consistent with clump migration theories
in which clumps form through violent gravitational instabilities in gas-rich
turbulent disks, eventually migrate toward the center of the galaxies, and
coalesce into the bulge.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,650 | Incomplete Dot Products for Dynamic Computation Scaling in Neural Network Inference | We propose the use of incomplete dot products (IDP) to dynamically adjust the
number of input channels used in each layer of a convolutional neural network
during feedforward inference. IDP adds monotonically non-increasing
coefficients, referred to as a "profile", to the channels during training. The
profile orders the contribution of each channel in non-increasing order. At
inference time, the number of channels used can be dynamically adjusted to
trade off accuracy for lowered power consumption and reduced latency by
selecting only a beginning subset of channels. This approach allows for a
single network to dynamically scale over a computation range, as opposed to
training and deploying multiple networks to support different levels of
computation scaling. Additionally, we extend the notion to multiple profiles,
each optimized for some specific range of computation scaling. We present
experiments on the computation and accuracy trade-offs of IDP for popular image
classification models and datasets. We demonstrate that, for MNIST and
CIFAR-10, IDP reduces computation significantly, e.g., by 75%, without
significantly compromising accuracy. We argue that IDP provides a convenient
and effective means for devices to lower computation costs dynamically to
reflect the current computation budget of the system. For example, VGG-16 with
50% IDP (using only the first 50% of channels) achieves 70% in accuracy on the
CIFAR-10 dataset compared to the standard network which achieves only 35%
accuracy when using the reduced channel set.
| 1 | 0 | 0 | 1 | 0 | 0 |
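An illustrative aside on row 17,650: the mechanism the abstract describes reduces to weighting input channels with a monotonically non-increasing profile and truncating the dot product to a beginning subset of channels at inference time. A minimal sketch of that idea (our own illustration, not the authors' code; the harmonic-decay profile and all numbers are invented):

```python
# Sketch of the incomplete dot product (IDP) idea: a monotonically
# non-increasing "profile" weights the channels during training, and at
# inference only the first k channels are used to trade accuracy for compute.

def idp_profile(n):
    """Monotonically non-increasing channel coefficients (harmonic decay)."""
    return [1.0 / (i + 1) for i in range(n)]

def incomplete_dot(x, w, profile, k):
    """Dot product over only the first k channels, scaled by the profile."""
    k = min(k, len(x))
    return sum(profile[i] * x[i] * w[i] for i in range(k))

x = [2.0, 1.0, 0.5, 0.25]        # per-channel activations
w = [1.0, 1.0, 1.0, 1.0]         # per-channel weights
p = idp_profile(len(x))
full = incomplete_dot(x, w, p, 4)  # all channels
half = incomplete_dot(x, w, p, 2)  # "50% IDP": first half of the channels
```

Because the profile orders channel contributions by importance, truncation degrades the result smoothly, which is what lets one network scale over a computation range instead of deploying several networks.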
17,651 | Graded components of Local cohomology modules II | Let $A$ be a commutative Noetherian ring containing a field $K$ of
characteristic zero and let $R= A[X_1, \ldots, X_m]$. Consider $R$ as standard
graded with $\deg A=0$ and $\deg X_i=1$ for all $i$. We present a few results
about the behavior of the graded components of local cohomology modules
$H_I^i(R)$ where $I$ is an arbitrary homogeneous ideal in $R$. We mostly
restrict our attention to the Vanishing, Tameness and Rigidity problems.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,652 | Four-Dimensional Painlevé-Type Equations Associated with Ramified Linear Equations III: Garnier Systems and Fuji-Suzuki Systems | This is the last part of a series of three papers entitled "Four-dimensional
Painlevé-type equations associated with ramified linear equations". In this
series of papers we aim to construct the complete degeneration scheme of
four-dimensional Painlevé-type equations. In the present paper, we consider
the degeneration of the Garnier system in two variables and the Fuji-Suzuki
system.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,653 | You Must Have Clicked on this Ad by Mistake! Data-Driven Identification of Accidental Clicks on Mobile Ads with Applications to Advertiser Cost Discounting and Click-Through Rate Prediction | In the cost per click (CPC) pricing model, an advertiser pays an ad network
only when a user clicks on an ad; in turn, the ad network gives a share of that
revenue to the publisher where the ad was impressed. Still, advertisers may be
unsatisfied with ad networks charging them for "valueless" clicks, or so-called
accidental clicks. [...] Charging advertisers for such clicks is detrimental in
the long term as the advertiser may decide to run their campaigns on other ad
networks. In addition, machine-learned click models trained to predict which ad
will bring the highest revenue may overestimate an ad's click-through rate
and, as a consequence, negatively impact revenue for both the ad network and
the
publisher. In this work, we propose a data-driven method to detect accidental
clicks from the perspective of the ad network. We collect observations of time
spent by users on a large set of ad landing pages - i.e., dwell time. We notice
that the majority of per-ad distributions of dwell time fit to a mixture of
distributions, where each component may correspond to a particular type of
clicks, the first one being accidental. We then estimate dwell time thresholds
of accidental clicks from that component. Using our method to identify
accidental clicks, we then propose a technique that smoothly discounts the
advertiser's cost of accidental clicks at billing time. Experiments conducted
on a large dataset of ads served on Yahoo mobile apps confirm that our
thresholds are stable over time, and revenue loss in the short term is
marginal. We also compare the performance of an existing machine-learned click
model trained on all ad clicks with that of the same model trained only on
non-accidental clicks. There, we observe an increase in both ad click-through
rate (+3.9%) and revenue (+0.2%) on ads served by the Yahoo Gemini network when
using the latter. [...]
| 0 | 0 | 0 | 1 | 0 | 0 |
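An illustrative aside on row 17,653: the method fits per-ad dwell-time distributions with a mixture and derives an accidental-click threshold from the first component. The sketch below assumes a two-component 1-D Gaussian mixture fitted by plain EM and a mean-plus-two-standard-deviations cutoff; the Gaussian form, the cutoff rule, and the synthetic data are our assumptions, not details given in the abstract.

```python
# Illustrative sketch only: fit a two-component 1-D Gaussian mixture to dwell
# times with EM, then threshold on the low-mean ("accidental") component.
import math
import random

def em_two_gaussians(xs, iters=200):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    xs = sorted(xs)
    m = len(xs) // 2                      # crude init: split at the median
    mu = [sum(xs[:m]) / m, sum(xs[m:]) / (len(xs) - m)]
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 0 for each point
        r = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            r.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate weights, means and variances
        n0 = sum(r)
        n1 = len(xs) - n0
        mu = [sum(ri * x for ri, x in zip(r, xs)) / n0,
              sum((1 - ri) * x for ri, x in zip(r, xs)) / n1]
        var = [max(sum(ri * (x - mu[0]) ** 2 for ri, x in zip(r, xs)) / n0, 1e-6),
               max(sum((1 - ri) * (x - mu[1]) ** 2 for ri, x in zip(r, xs)) / n1, 1e-6)]
        w = [n0 / len(xs), n1 / len(xs)]
    return mu, var, w

random.seed(0)
dwell = ([abs(random.gauss(1.0, 0.3)) for _ in range(200)]      # accidental
         + [abs(random.gauss(10.0, 1.0)) for _ in range(200)])  # intentional
mu, var, w = em_two_gaussians(dwell)
lo = min(range(2), key=lambda k: mu[k])       # index of the accidental component
threshold = mu[lo] + 2 * math.sqrt(var[lo])   # dwell-time cutoff
```

Clicks with dwell time below `threshold` would then be flagged as accidental and discounted at billing time, per the scheme in the abstract.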
17,654 | Building Models for Biopathway Dynamics Using Intrinsic Dimensionality Analysis | An important task for many, if not all, scientific domains is efficient
knowledge integration, testing and codification. It is often addressed through
model construction in a controllable computational environment. Even so, the
volume of in-silico simulation-based observations becomes similarly
intractable for thorough analysis. This is especially the case in molecular
biology, which served as the subject of this study. In this project, we aimed to
test some approaches developed to deal with the curse of dimensionality. Among
these we found dimension reduction techniques especially appealing. They can be
used to identify irrelevant variability and help to understand critical
processes underlying high-dimensional datasets. Additionally, we subjected our
data sets to nonlinear time series analysis, as those are well established
methods for results comparison. To investigate the usefulness of dimension
reduction methods, we decided to base our study on a concrete sample set. The
example was taken from the domain of systems biology concerning dynamic
evolution of sub-cellular signaling. Particularly, the dataset relates to the
yeast pheromone pathway and is studied in-silico with a stochastic model. The
model reconstructs signal propagation stimulated by a mating pheromone. In the
paper, we elaborate on the origin of the multidimensional analysis problem in
the context of molecular signaling, and next, we introduce the model of choice,
simulation details and obtained time series dynamics. A description of used
methods followed by a discussion of results and their biological interpretation
finalize the paper.
| 0 | 0 | 0 | 1 | 1 | 0 |
17,655 | Equilibrium points and basins of convergence in the linear restricted four-body problem with angular velocity | The planar linear restricted four-body problem is used in order to determine
the Newton-Raphson basins of convergence associated with the equilibrium
points. The parametric variation of the position as well as of the stability of
the libration points is monitored when the values of the mass parameter $b$ as
well as of the angular velocity $\omega$ vary in predefined intervals. The
regions on the configuration $(x,y)$ plane occupied by the basins of attraction
are revealed using the multivariate version of the Newton-Raphson iterative
scheme. The correlations between the attracting domains of the equilibrium
points and the corresponding number of iterations needed for obtaining the
desired accuracy are also illustrated. We perform a thorough and systematic
numerical investigation by demonstrating how the parameters $b$ and $\omega$
influence the shape, the geometry and of course the fractality of the
converging regions. Our numerical outcomes strongly indicate that these two
parameters are indeed two of the most influential factors in this dynamical
system.
| 0 | 1 | 0 | 0 | 0 | 0 |
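An illustrative aside on row 17,655: the paper applies the multivariate Newton-Raphson iterative scheme to the four-body equilibrium equations, which are not reproduced here. The same basin-of-convergence bookkeeping (iterate from a grid of initial conditions, record which attractor each point reaches and in how many iterations) can be shown on a standard toy system, the complex cubic z**3 = 1, whose three basins already have fractal boundaries:

```python
# Generic illustration (not the paper's four-body equations): classifying
# Newton-Raphson basins of convergence for the complex cubic z**3 = 1.

def newton_basin(z, max_iter=50, tol=1e-10):
    """Return (index of the attracting root, iterations used);
    (None, max_iter) if the scheme fails to converge."""
    roots = [complex(1, 0),
             complex(-0.5, 3 ** 0.5 / 2),
             complex(-0.5, -3 ** 0.5 / 2)]
    for n in range(max_iter):
        f, df = z ** 3 - 1, 3 * z ** 2
        if df == 0:                 # derivative vanishes: scheme breaks down
            break
        z = z - f / df              # Newton-Raphson step
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i, n + 1
    return None, max_iter

# sample a small grid of initial conditions and record which attractor wins
grid = [[newton_basin(complex(x, y))[0] for x in (-1.0, -0.5, 0.5, 1.0)]
        for y in (-1.0, 1.0)]
```

Coloring each grid point by its attractor index (and shading by iteration count) produces exactly the kind of basin diagrams the paper computes for its equilibrium points.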
17,656 | Nonmonotonic dependence of polymer glass mechanical response on chain bending stiffness | We investigate the mechanical properties of amorphous polymers by means of
coarse-grained simulations and nonaffine lattice dynamics theory. A small
increase of polymer chain bending stiffness leads first to softening of the
material, while hardening happens only upon further strengthening of the
backbones. This nonmonotonic variation of the storage modulus $G'$ with bending
stiffness is caused by a competition between additional resistance to
deformation offered by stiffer backbones and decreased density of the material
due to a necessary decrease in monomer-monomer coordination. This
counter-intuitive finding suggests that the strength of polymer glasses may in
some circumstances be enhanced by softening the bending of constituent chains.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,657 | Convolutional Neural Network Committees for Melanoma Classification with Classical And Expert Knowledge Based Image Transforms Data Augmentation | Skin cancer is a major public health problem, as is the most common type of
cancer and represents more than half of cancer diagnoses worldwide. Early
detection influences the outcome of the disease and motivates our work. We
investigate the composition of CNN committees and data augmentation for the
ISBI 2017 Melanoma Classification Challenge (named Skin Lesion Analysis towards
Melanoma Detection) facing the peculiarities of dealing with such a small,
unbalanced, biological database. For that, we explore committees of
Convolutional Neural Networks trained over the ISBI challenge training dataset
artificially augmented by both classical image processing transforms and image
warping guided by specialist knowledge about the lesion axis, and we improve the
final classifier's invariance to common melanoma variations.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,658 | Arbitrage and Geometry | This article introduces the notion of arbitrage for a situation involving a
collection of investments and a payoff matrix describing the return to an
investor of each investment under each of a set of possible scenarios. We
explain the Arbitrage Theorem, discuss its geometric meaning, and show its
equivalence to Farkas' Lemma. We then ask a seemingly innocent question: given
a random payoff matrix, what is the probability of an arbitrage opportunity?
This question leads to some interesting geometry involving hyperplane
arrangements and related topics.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,659 | ADE surfaces and their moduli | We define a class of surfaces and surface pairs corresponding to the ADE root
lattices and construct compactifications of their moduli spaces, generalizing
Losev-Manin spaces of curves.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,660 | An Efficient Descriptor Model for Designing Materials for Solar Cells | An efficient descriptor model for fast screening of potential materials for
solar cell applications is presented. It works for both excitonic and
non-excitonic solar cells materials, and in addition to the energy gap it
includes the absorption spectrum ($\alpha(E)$) of the material. The charge
transport properties of the explored materials are modeled using the
characteristic diffusion length ($L_{d}$) determined for the respective family
of compounds. The presented model surpasses the widely used Scharber model
developed for bulk-heterojunction solar cells [Scharber \textit{et al.,
Advanced Materials}, 2006, Vol. 18, 789]. Using published experimental data, we
show that the presented model is more accurate in predicting the achievable
efficiencies. Although the focus of this work is on organic photovoltaics
(OPV), for which the original Scharber model was developed, the model presented
here is applicable also to other solar cell technologies. To model both
excitonic and non-excitonic systems, two different sets of parameters are used
to account for the different modes of operation. The analysis of the presented
descriptor model clearly shows the benefit of including $\alpha(E)$ and $L_{d}$
in view of improved screening results.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,661 | Multiplication of a Schubert polynomial by a Stanley symmetric polynomial | We prove, combinatorially, that the product of a Schubert polynomial by a
Stanley symmetric polynomial is a truncated Schubert polynomial. Using Monk's
rule, we derive a nonnegative combinatorial formula for the Schubert polynomial
expansion of a truncated Schubert polynomial. Combining these results, we give
a nonnegative combinatorial rule for the product of a Schubert and a Schur
polynomial in the Schubert basis.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,662 | Topological nodal line states and a potential catalyst of hydrogen evolution in the TiSi family | Topological Dirac nodal line (DNL) semimetals, formed by a closed loop of the
inverted bands in the bulk, result in the nearly flat drumhead-like surface
states with a high electronic density near the Fermi level. The high catalytic
active sites associated with the high electronic densities, the good carrier
mobility, and the proper thermodynamic stabilities with $\Delta G_{H^*}\approx 0$ are currently the prerequisites to seek the alternative
candidates to precious platinum for catalyzing electrochemical hydrogen (HER)
production from water. Within this context, it is natural to consider whether
or not the DNLs are a good candidate for the HER because its non-trivial
surface states provide a robust platform to activate possibly chemical
reactions. Here, through first-principles calculations we report a new DNL
TiSi-type family with a closed Dirac nodal line consisting of the linear band
crossings in the $k_y$ = 0 plane. The hydrogen adsorption on the (010) and
(110) surfaces yields a $\Delta G_{H^*}$ that is almost zero. The topological
charge carriers have been revealed to participate in this HER. These results
highlight that TiSi is not only a promising catalyst for the HER but also
paves a new route to designing topological quantum catalysts, utilizing the
topological DNL-induced surface bands as active sites rather than edge-,
vacancy-, dopant-, strain-, or heterostructure-created active sites.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,663 | Boosting Variational Inference: an Optimization Perspective | Variational inference is a popular technique to approximate a possibly
intractable Bayesian posterior with a more tractable one. Recently, boosting
variational inference has been proposed as a new paradigm to approximate the
posterior by a mixture of densities by greedily adding components to the
mixture. However, as is the case with many other variational inference
algorithms, its theoretical properties have not been studied. In the present
work, we study the convergence properties of this approach from a modern
optimization viewpoint by establishing connections to the classic Frank-Wolfe
algorithm. Our analysis yields novel theoretical insights regarding the
sufficient conditions for convergence, explicit rates, and algorithmic
simplifications. Since a lot of focus in previous works for variational
inference has been on tractability, our work is especially important as a much
needed attempt to bridge the gap between probabilistic models and their
corresponding theoretical properties.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,664 | Nature of carrier injection in metal/2D semiconductor interface and its implications to the limits of contact resistance | Monolayers of transition metal dichalcogenides (TMDCs) exhibit excellent
electronic and optical properties. However, the performance of these
two-dimensional (2D) devices is often limited by the large resistance offered
by the metal contact interface. To date, the carrier injection mechanism from
metal to 2D TMDC layers remains unclear, with widely varying reports of
Schottky barrier height (SBH) and contact resistance (Rc), particularly in the
monolayer limit. In this work, we use a combination of theory and experiments
in Au and Ni contacted monolayer MoS2 device to conclude the following points:
(i) the carriers are injected at the source contact through a cascade of two
potential barriers - the barrier heights being determined by the degree of
interaction between the metal and the TMDC layer; (ii) the conventional
Richardson equation becomes invalid due to the multi-dimensional nature of the
injection barriers, and using Bardeen-Tersoff theory, we derive the appropriate
form of the Richardson equation that describes such composite barrier; (iii) we
propose a novel transfer length method (TLM) based SBH extraction methodology,
to reliably extract SBH by eliminating any confounding effect of temperature
dependent channel resistance variation; (iv) we derive the Landauer limit of
the contact resistance achievable in such devices. A comparison of the limits
with the experimentally achieved contact resistance reveals plenty of room for
technological improvements.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,665 | Characteristics of stratified flows of Newtonian/non-Newtonian shear-thinning fluids | Exact solutions for laminar stratified flows of Newtonian/non-Newtonian
shear-thinning fluids in horizontal and inclined channels are presented. An
iterative algorithm is proposed to compute the laminar solution for the general
case of a Carreau non-Newtonian fluid. The exact solution is used to study the
effect of the rheology of the shear-thinning liquid on two-phase flow
characteristics considering both gas/liquid and liquid/liquid systems.
Concurrent and counter-current inclined systems are investigated, including the
mapping of multiple solution boundaries. Aspects relevant to practical
applications are discussed, such as the in-situ hold-up, or lubrication effects
achieved by adding a less viscous phase. A characteristic of this family of
systems is that, even if the liquid has a complex rheology (Carreau fluid), the
two-phase stratified flow can behave like the liquid is Newtonian for a wide
range of operational conditions. The capability of the two-fluid model to yield
satisfactory predictions in the presence of shear-thinning liquids is tested,
and an algorithm is proposed to a priori predict if the Newtonian (zero shear
rate viscosity) behaviour arises for given operational conditions in order to
avoid large errors in the predictions of flow characteristics when the
power-law is considered for modelling the shear-thinning behaviour. Two-fluid
model closures implied by the exact solution and the effect of a turbulent gas
layer are also addressed.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,666 | Stochastic Chebyshev Gradient Descent for Spectral Optimization | A large class of machine learning techniques requires the solution of
optimization problems involving spectral functions of parametric matrices, e.g.
log-determinant and nuclear norm. Unfortunately, computing the gradient of a
spectral function is generally of cubic complexity, as such gradient descent
methods are rather expensive for optimizing objectives involving the spectral
function. Thus, one naturally turns to stochastic gradient methods in hope that
they will provide a way to reduce or altogether avoid the computation of full
gradients. However, here a new challenge appears: there is no straightforward
way to compute unbiased stochastic gradients for spectral functions. In this
paper, we develop unbiased stochastic gradients for spectral-sums, an important
subclass of spectral functions. Our unbiased stochastic gradients are based on
combining randomized trace estimators with stochastic truncation of the
Chebyshev expansions. A careful design of the truncation distribution allows us
to offer distributions that are variance-optimal, which is crucial for fast and
stable convergence of stochastic gradient methods. We further leverage our
proposed stochastic gradients to devise stochastic methods for objective
functions involving spectral-sums, and rigorously analyze their convergence
rate. The utility of our methods is demonstrated in numerical experiments.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,667 | Poisson Bracket and Symplectic Structure of Covariant Canonical Formalism of Fields | The covariant canonical formalism is a covariant extension of the traditional
canonical formalism of fields. In contrast to the traditional canonical theory,
it has a remarkable feature that canonical equations of gauge theories or
gravity are not only manifestly Lorentz covariant but also gauge covariant or
diffeomorphism covariant. A mathematical peculiarity of the covariant canonical
formalism is that its canonical coordinates are differential forms on a
manifold. In the present paper, we find a natural Poisson bracket of this new
canonical theory, and study symplectic structure behind it. The phase space of
the theory is identified with a ringed space with the structure sheaf of the
graded algebra of "differentiable" differential forms on the manifold. The
Poisson and the symplectic structure we found can be even or odd, depending on
the dimension of the manifold. Our Poisson structure is an example of a physical
application of a Poisson structure defined on the graded algebra of differential
forms.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,668 | Cascading Failures in Interdependent Systems: Impact of Degree Variability and Dependence | We study cascading failures in a system comprising interdependent
networks/systems, in which nodes rely on other nodes both in the same system
and in other systems to perform their function. The (inter-)dependence among
nodes is modeled using a dependence graph, where the degree vector of a node
determines the number of other nodes it can potentially cause to fail in each
system through aforementioned dependency. In particular, we examine the impact
of the variability and dependence properties of node degrees on the probability
of cascading failures. We show that larger variability in node degrees hampers
widespread failures in the system, starting with random failures. Similarly,
positive correlations in node degrees make it harder to set off an epidemic of
failures, thereby rendering the system more robust against random failures.
| 1 | 1 | 0 | 1 | 0 | 0 |
17,669 | Critical exponent for geodesic currents | For any geodesic current we associate a quasi-metric space. For a subclass
of geodesic currents, called filling, it defines a metric and we study the
critical exponent associated to this space. We show that it is equal to the
exponential growth rate of the intersection function for closed curves.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,670 | Human and Machine Speaker Recognition Based on Short Trivial Events | Trivial events are ubiquitous in human to human conversations, e.g., cough,
laugh and sniff. Compared to regular speech, these trivial events are usually
short and unclear, thus generally regarded as not speaker discriminative and so
are largely ignored by present speaker recognition research. However, these
trivial events are highly valuable in some particular circumstances such as
forensic examination, as they are less subject to intentional change, so can
be used to discover the genuine speaker from disguised speech. In this paper,
we collect a trivial event speech database that involves 75 speakers and 6
types of events, and report preliminary speaker recognition results on this
database, by both human listeners and machines. Particularly, the deep feature
learning technique recently proposed by our group is utilized to analyze and
recognize the trivial events, which leads to acceptable equal error rates
(EERs) despite the extremely short durations (0.2-0.5 seconds) of these events.
Comparing different types of events, 'hmm' seems more speaker discriminative.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,671 | AdiosStMan: Parallelizing Casacore Table Data System Using Adaptive IO System | In this paper, we investigate the Casacore Table Data System (CTDS) used in
the casacore and CASA libraries, and methods to parallelize it. CTDS provides a
storage manager plugin mechanism for third-party developers to design and
implement their own CTDS storage managers. Having this in mind, we looked
into various storage backend techniques that can possibly enable parallel I/O
for CTDS by implementing new storage managers. After carrying out benchmarks
showing the excellent parallel I/O throughput of the Adaptive IO System
(ADIOS), we implemented an ADIOS based parallel CTDS storage manager. We then
applied the CASA MSTransform frequency split task to verify the ADIOS Storage
Manager. We also ran a series of performance tests to examine the I/O
throughput in a massively parallel scenario.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,672 | Direct Experimental Observation of the Gas Filamentation Effect using a Two-bunch X-ray FEL Beam | We report the experimental observation of the filamentation effect in gas
devices designed for X-ray Free-electron Lasers. The measurements were carried
out at the Linac Coherent Light Source on the X-ray Correlation Spectroscopy
(XCS) instrument using a Two-bunch FEL beam at 6.5 keV with 122.5 ns separation
passing through an Argon gas cell. The relative intensities of the two pulses
of the Two-bunch beam were measured, after and before the gas cell, from the
X-ray scattering off thin targets by using fast diodes with sufficient temporal
resolution. It was found that the after-to-before ratio of the intensities of
the second pulse was consistently higher than that of the first pulse,
revealing lower effective attenuation of the gas cell due to the heating and
subsequent gas density reduction in the beam path by the first pulse. This
measurement is important in guiding the design and/or mitigating the adverse
effect in gas devices for high repetition-rate FELs such as the LCLS-II and the
European XFEL, or other future high repetition-rate upgrades to existing FEL
facilities.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,673 | Coherence measurements of scattered incoherent light for lensless identification of an object's location and size | In absence of a lens to form an image, incoherent or partially coherent light
scattering off an obstructive or reflective object forms a broad intensity
distribution in the far field with only feeble spatial features. We show here
that measuring the complex spatial coherence function can help in the
identification of the size and location of a one-dimensional object placed in
the path of a partially coherent light source. The complex coherence function
is measured in the far field through wavefront sampling, which is performed via
dynamically reconfigurable slits implemented on a digital micromirror device
(DMD). The impact of an object -- parameterized by size and location -- that
either intercepts or reflects incoherent light is studied. The experimental
results show that measuring the spatial coherence function as a function of the
separation between two slits located symmetrically around the optical axis can
identify the object transverse location and angle subtended from the detection
plane (the ratio of the object width to the axial distance from the detector).
The measurements are in good agreement with numerical simulations of a forward
model based on Fresnel propagators. The rapid refresh rate of DMDs may enable
real-time operation of such a lensless coherency imaging scheme.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,674 | MVP2P: Layer-Dependency-Aware Live MVC Video Streaming over Peer-to-Peer Networks | Multiview video supports observing a scene from different viewpoints. The
Joint Video Team (JVT) developed H.264/MVC to enhance the compression
efficiency for multiview video, however, MVC encoded multiview video (MVC
video) still requires high bitrates for transmission. This paper investigates
live MVC video streaming over Peer-to-Peer (P2P) networks. The goal is to
minimize the server bandwidth costs whilst ensuring high streaming quality to
peers. MVC employs intra-view and inter-view prediction structures, which leads
to a complicated layer dependency relationship. As the peers' outbound
bandwidth is shared while supplying all the MVC video layers, the bandwidth
allocation to one MVC layer affects the available outbound bandwidth of the
other layers. To optimise the utilisation of the peers' outbound bandwidth for
providing video layers, a maximum flow based model is proposed which considers
the MVC video layer dependency and the layer supplying relationship between
peers. Based on the model, a layer dependency aware live MVC video streaming
method over a BitTorrent-like P2P network is proposed, named MVP2P. The key
components of MVP2P include a chunk scheduling strategy and a peer selection
strategy for receiving peers, and a bandwidth scheduling algorithm for
supplying peers. To evaluate the efficiency of the proposed solution, MVP2P is
compared with existing methods considering the constraints of peer bandwidth,
peer numbers, view switching rates, and peer churns. The test results show that
MVP2P significantly outperforms the existing methods.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,675 | High order finite element simulations for fluid dynamics validated by experimental data from the fda benchmark nozzle model | The objective of the present work is to construct a sound mathematical,
numerical and computational framework relevant to blood flow simulations and to
assess it through a careful validation against experimental data. We perform
simulations of a benchmark proposed by the FDA for fluid flow in an idealized
medical device, under different flow regimes. The results are evaluated using
metrics proposed in the literature and the findings are in very good agreement
with the validation experiment.
| 0 | 1 | 1 | 0 | 0 | 0 |
17,676 | Twin Networks: Matching the Future for Sequence Generation | We propose a simple technique for encouraging generative RNNs to plan ahead.
We train a "backward" recurrent network to generate a given sequence in reverse
order, and we encourage states of the forward model to predict cotemporal
states of the backward model. The backward network is used only during
training, and plays no role during sampling or inference. We hypothesize that
our approach eases modeling of long-term dependencies by implicitly forcing the
forward states to hold information about the longer-term future (as contained
in the backward states). We show empirically that our approach achieves 9%
relative improvement for a speech recognition task, and achieves significant
improvement on a COCO caption generation task.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,677 | Structure and Randomness of Continuous-Time Discrete-Event Processes | Loosely speaking, the Shannon entropy rate is used to gauge a stochastic
process' intrinsic randomness; the statistical complexity gives the cost of
predicting the process. We calculate, for the first time, the entropy rate and
statistical complexity of stochastic processes generated by finite unifilar
hidden semi-Markov models---memoryful, state-dependent versions of renewal
processes. Calculating these quantities requires introducing novel mathematical
objects ({\epsilon}-machines of hidden semi-Markov processes) and new
information-theoretic methods to stochastic processes.
| 0 | 1 | 1 | 1 | 0 | 0 |
17,678 | In situ high resolution real-time quantum efficiency imaging for photocathodes | Aspects of the preparation process and performance degradation are two major
problems of photocathodes. The lack of a means for dynamic quantum efficiency
measurements results in the inability to observe the inhomogeneity of the
cathode surface by fine structural analysis and in real time. Here we present a
simple and scalable technique for in situ real-time quantum efficiency
diagnosis. An incoherent light source provides uniform illumination on the
cathode surface, and solenoid magnets are used as lens for focusing and imaging
the emitted electron beam on a downstream scintillator screen, which converts
the quantum efficiency information into fluorescence intensity distribution.
The microscopic discontinuity and the dynamic changes of the quantum efficiency
of a gallium arsenide photocathode are observed at a resolution of a few
microns. An unexpected uneven decrease of the quantum efficiency is also
recorded. The work demonstrates a new observation method for photoemission
materials research.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,679 | Competitive division of a mixed manna | A mixed manna contains goods (that everyone likes), bads (that everyone
dislikes), as well as items that are goods to some agents, but bads or satiated
to others.
If all items are goods and utility functions are homothetic, concave (and
monotone), the Competitive Equilibrium with Equal Incomes maximizes the Nash
product of utilities: hence it is welfarist (determined utility-wise by the
feasible set of profiles), single-valued and easy to compute.
We generalize the Gale-Eisenberg Theorem to a mixed manna. The Competitive
division is still welfarist and related to the product of utilities or
disutilities. If the zero utility profile (before any manna) is Pareto
dominated, the competitive profile is unique and still maximizes the product of
utilities. If the zero profile is unfeasible, the competitive profiles are the
critical points of the product of disutilities on the efficiency frontier, and
multiplicity is pervasive. In particular the task of dividing a mixed manna is
either good news for everyone, or bad news for everyone.
We refine our results in the practically important case of linear
preferences, where the axiomatic comparison between the division of goods and
that of bads is especially sharp. When we divide goods and the manna improves,
everyone weakly benefits under the competitive rule; but no reasonable rule to
divide bads can be similarly Resource Monotonic. Also, the much larger set of
Non Envious and Efficient divisions of bads can be disconnected so that it will
admit no continuous selection.
| 1 | 0 | 1 | 0 | 0 | 0 |
17,680 | Understanding Black-box Predictions via Influence Functions | How can we explain the predictions of a black-box model? In this paper, we
use influence functions -- a classic technique from robust statistics -- to
trace a model's prediction through the learning algorithm and back to its
training data, thereby identifying training points most responsible for a given
prediction. To scale up influence functions to modern machine learning
settings, we develop a simple, efficient implementation that requires only
oracle access to gradients and Hessian-vector products. We show that even on
non-convex and non-differentiable models where the theory breaks down,
approximations to influence functions can still provide valuable information.
On linear models and convolutional neural networks, we demonstrate that
influence functions are useful for multiple purposes: understanding model
behavior, debugging models, detecting dataset errors, and even creating
visually-indistinguishable training-set attacks.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,681 | Linear and nonlinear photonic Jackiw-Rebbi states in waveguide arrays | We study analytically and numerically the optical analogue of the
Jackiw-Rebbi states in quantum field theory. These solutions exist at the
interface of two binary waveguide arrays which are described by two Dirac
equations with opposite sign masses. We show that these special states are
topologically robust not only in the linear regime, but also in nonlinear
regimes (with both focusing and de-focusing nonlinearity). We also reveal that
one can generate the Jackiw-Rebbi states starting from Dirac solitons.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,682 | A Century of Science: Globalization of Scientific Collaborations, Citations, and Innovations | Progress in science has advanced the development of human society across
history, with dramatic revolutions shaped by information theory, genetic
cloning, and artificial intelligence, among the many scientific achievements
produced in the 20th century. However, the way that science advances itself is
much less well-understood. In this work, we study the evolution of scientific
development over the past century by presenting an anatomy of 89 million
digitalized papers published between 1900 and 2015. We find that science has
benefited from the shift from individual work to collaborative effort, with
over 90% of the world-leading innovations generated by collaborations in this
century, nearly four times higher than they were in the 1900s. We discover that
rather than the frequent myopic- and self-referencing that was common in the
early 20th century, modern scientists instead tend to look for literature
further back and farther around. Finally, we also observe the globalization of
scientific development from 1900 to 2015, including 25-fold and 7-fold
increases in international collaborations and citations, respectively, as well
as a dramatic decline in the dominant accumulation of citations by the US, the
UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are
meant to serve as a starter for exploring the visionary ways in which science
has developed throughout the past century, generating insight into and an
impact upon the current scientific innovations and funding policies.
| 1 | 1 | 0 | 0 | 0 | 0 |
17,683 | Interval-type theorems concerning means | Each family $\mathcal{M}$ of means has a natural, partial order (point-wise
order), that is $M \le N$ iff $M(x) \le N(x)$ for all admissible $x$.
In this setting we can introduce the notion of interval-type set (a subset
$\mathcal{I} \subset \mathcal{M}$ such that whenever $M \le P \le N$ for some
$M,\,N \in \mathcal{I}$ and $P \in \mathcal{M}$ then $P \in \mathcal{I}$). For
example, in the case of power means there exists a natural isomorphism between
interval-type sets and intervals contained in the real numbers. Nevertheless, a
number of interesting objects appear for families which cannot be linearly
ordered.
In the present paper we consider this property for Gini means and Hardy
means. Moreover, some results concerning the $L^\infty$ metric among (abstract)
means will be obtained.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,684 | Diff-DAC: Distributed Actor-Critic for Average Multitask Deep Reinforcement Learning | We propose a fully distributed actor-critic algorithm approximated by deep
neural networks, named \textit{Diff-DAC}, with application to single-task and
to average multitask reinforcement learning (MRL). Each agent has access to
data from its local task only, but it aims to learn a policy that performs well
on average for the whole set of tasks. During the learning process, agents
communicate their value-policy parameters to their neighbors, diffusing the
information across the network, so that they converge to a common policy, with
no need for a central node. The method is scalable, since the computational and
communication costs per agent grow with its number of neighbors. We derive
Diff-DAC's from duality theory and provide novel insights into the standard
actor-critic framework, showing that it is actually an instance of the dual
ascent method that approximates the solution of a linear program. Experiments
suggest that Diff-DAC can outperform the single previous distributed MRL
approach (i.e., Dist-MTLPS) and even the centralized architecture.
| 1 | 0 | 0 | 1 | 0 | 0 |
17,685 | On locally compact semitopological $0$-bisimple inverse $ω$-semigroups | We describe the structure of Hausdorff locally compact semitopological
$0$-bisimple inverse $\omega$-semigroups with compact maximal subgroups. In
particular, we show that a Hausdorff locally compact semitopological
$0$-bisimple inverse $\omega$-semigroup with a compact maximal subgroup is
either compact or topologically isomorphic to the topological sum of its
$\mathscr{H}$-classes. We describe the structure of Hausdorff locally compact
semitopological $0$-bisimple inverse $\omega$-semigroups with a monothetic
maximal subgroup. In particular, we prove the dichotomy for $T_1$ locally
compact semitopological Reilly semigroup
$\left(\textbf{B}(\mathbb{Z}_{+},\theta)^0,\tau\right)$ with adjoined zero and
with a non-annihilating homomorphism $\theta\colon \mathbb{Z}_{+}\to
\mathbb{Z}_{+}$: $\left(\textbf{B}(\mathbb{Z}_{+},\theta)^0,\tau\right)$ is
either compact or discrete. At the end we discuss the remainder under the
closure of the discrete Reilly semigroup $\textbf{B}(\mathbb{Z}_{+},\theta)^0$
in a semitopological semigroup.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,686 | Theory of Disorder-Induced Half-Integer Thermal Hall Conductance | Electrons that are confined to a single Landau level in a two dimensional
electron gas realize the effects of strong electron-electron repulsion in its
purest form. The kinetic energy of individual electrons is completely quenched
and all physical properties are dictated solely by many-body effects. A
remarkable consequence is the emergence of new quasiparticles with fractional
charge and exotic quantum statistics of which the most exciting ones are
non-Abelian quasiparticles. A non-integer quantized thermal Hall conductance
$\kappa_{xy}$ (in units of temperature times the universal constant $\pi^2
k_B^2 /3 h$; $h$ is the Planck constant and $k_B$ the Boltzmann constant)
necessitates the existence of such quasiparticles. It has been predicted, and
verified numerically, that such states are realized in the clean half-filled
first Landau level of electrons with Coulomb repulsion, with $\kappa_{xy}$
being either $3/2$ or $7/2$. Excitingly, a recent experiment has indeed
observed a half-integer value, which was measured, however, to be
$\kappa_{xy}=5/2$. We resolve this contradiction within a picture where smooth
disorder results in the formation of mesoscopic puddles with locally
$\kappa_{xy}=3/2$ or $7/2$. Interactions between these puddles generate a
coherent macroscopic state, which is reflected in an extended plateau with
quantized $\kappa_{xy}=5/2$. The topological properties of quasiparticles at
large distances are determined by the macroscopic phase, and not by the
microscopic puddle where they reside. In principle, the same mechanism might
also allow non-Abelian quasiparticles to emerge from a system comprised of
microscopic Abelian puddles.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,687 | On Security Research Towards Future Mobile Network Generations | Over the last decades, numerous security and privacy issues in all three
active mobile network generations have been revealed that threaten users as
well as network providers. In view of the newest generation (5G) currently
under development, we now have the unique opportunity to identify research
directions for the next generation based on existing security and privacy
issues as well as already proposed defenses. This paper aims to unify security
knowledge on mobile phone networks into a comprehensive overview and to derive
pressing open research questions. To achieve this systematically, we develop a
methodology that categorizes known attacks by their aim, proposed defenses,
underlying causes, and root causes. Further, we assess the impact and the
efficacy of each attack and defense. We then apply this methodology to existing
literature on attacks and defenses in all three network generations. By doing
so, we identify ten causes and four root causes of attacks. Mapping the attacks
to proposed defenses and suggestions for the 5G specification enables us to
uncover open research questions and challenges for the development of
next-generation mobile networks. The problems of unsecured pre-authentication
traffic and jamming attacks exist across all three mobile generations. They
should be addressed in the future, in particular, to wipe out the class of
downgrade attacks and, thereby, strengthen the users' privacy. Further advances
are needed in the areas of inter-operator protocols as well as secure baseband
implementations. Additionally, mitigations against denial-of-service attacks by
smart protocol design represent an open research question.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,688 | Channel Simulation in Quantum Metrology | In this review we discuss how channel simulation can be used to simplify the
most general protocols of quantum parameter estimation, where unlimited
entanglement and adaptive joint operations may be employed. Whenever the
unknown parameter encoded in a quantum channel is completely transferred to an
environmental program state simulating the channel, the optimal adaptive
estimation cannot beat the standard quantum limit. In this setting, we
elucidate the crucial role of quantum teleportation as a primitive operation
which allows one to completely reduce adaptive protocols over suitable
teleportation-covariant channels and derive matching upper and lower bounds for
parameter estimation. For these channels, we may express the quantum Cramér
Rao bound directly in terms of their Choi matrices. Our review considers both
discrete- and continuous-variable systems, also presenting some new results for
bosonic Gaussian channels using an alternative sub-optimal simulation. It is an
open problem to design simulations for quantum channels that achieve the
Heisenberg limit.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,689 | Ranking Causal Influence of Financial Markets via Directed Information Graphs | A non-parametric method for ranking stock indices according to their mutual
causal influences is presented. Under the assumption that indices reflect the
underlying economy of a country, such a ranking indicates which countries exert
the most economic influence in an examined subset of the global economy. The
proposed method represents the indices as nodes in a directed graph, where the
edges' weights are estimates of the pair-wise causal influences, quantified
using the directed information functional. This method facilitates using a
relatively small number of samples from each index. The indices are then ranked
according to their net-flow in the estimated graph (sum of the incoming weights
subtracted from the sum of outgoing weights). Daily and minute-by-minute data
from nine indices (three from Asia, three from Europe and three from the US)
were analyzed. The analysis of daily data indicates that the US indices are the
most influential, which is consistent with intuition that the indices
representing larger economies usually exert more influence. Yet, it is also
shown that an index representing a small economy can strongly influence an
index representing a large economy if the smaller economy is indicative of a
larger phenomenon. Finally, it is shown that while inter-region interactions
can be captured using daily data, intra-region interactions require more
frequent samples.
| 0 | 0 | 0 | 0 | 0 | 1 |
17,690 | Shear banding in metallic glasses described by alignments of Eshelby quadrupoles | Plastic deformation of metallic glasses performed well below the glass
transition temperature leads to the formation of shear bands as a result of
shear localization. It is believed that shear banding originates from
individual stress concentrators having quadrupolar symmetry. To elucidate the
underlying mechanisms of shear band formation, microstructural investigations
were carried out on sheared zones using transmission electron microscopy. Here
we show evidence of a characteristic signature present in shear bands
manifested in the form of sinusoidal density variations. We present an
analytical solution for the observed post-deformation state derived from
continuum mechanics using an alignment of quadrupolar stress field
perturbations for the plastic events. Since we observe qualitatively similar
features for three different types of metallic glasses that span the entire
range of characteristic properties of metallic glasses, we conclude that the
reported deformation behavior is generic for all metallic glasses, and thus has
far-reaching consequences for the deformation behavior of amorphous solids in
general.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,691 | Detection of low dimensionality and data denoising via set estimation techniques | This work is closely related to the theories of set estimation and manifold
estimation.
Our object of interest is a (possibly lower-dimensional) compact set $S
\subset {\mathbb R}^d$.
The general aim is to identify (via stochastic procedures) some qualitative
or quantitative features of $S$, of geometric or topological character. The
available information is just a random sample of points drawn on $S$.
The term "to identify" means here to achieve a correct answer almost surely
(a.s.) when the sample size tends to infinity. More specifically, the paper aims
at giving some partial answers to the following questions: is $S$ full
dimensional? Is $S$ "close to a lower dimensional set" $\mathcal{M}$? If so,
can we estimate $\mathcal{M}$ or some functionals of $\mathcal{M}$ (in
particular, the Minkowski content of $\mathcal{M}$)? As an important auxiliary
tool in the answers of these questions, a denoising procedure is proposed in
order to partially remove the noise in the original data. The theoretical
results are complemented with some simulations and graphical illustrations.
| 0 | 0 | 1 | 1 | 0 | 0 |
17,692 | Deep Networks with Shape Priors for Nucleus Detection | Detection of cell nuclei in microscopic images is a challenging research
topic, because of limitations in cellular image quality and diversity of
nuclear morphology, i.e. varying nuclei shapes, sizes, and overlaps between
multiple cell nuclei. This has been a topic of enduring interest with promising
recent success shown by deep learning methods. These methods train, for example,
convolutional neural networks (CNNs) with a training set of input images and
known, labeled nuclei locations. Many of these methods are supplemented by
spatial or morphological processing. We develop a new approach that we call
Shape Priors with Convolutional Neural Networks (SP-CNN) to perform
significantly enhanced nuclei detection. A set of canonical shapes is prepared
with the help of a domain expert. Subsequently, we present a new network
structure that can incorporate `expected behavior' of nucleus shapes via two
components: {\em learnable} layers that perform the nucleus detection and a
{\em fixed} processing part that guides the learning with prior information.
Analytically, we formulate a new regularization term that is targeted at
penalizing false positives while simultaneously encouraging detection inside
the cell nucleus boundary. Experimental results on a challenging dataset reveal
that SP-CNN is competitive with or outperforms several state-of-the-art
methods.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,693 | A sheaf-theoretic model for SL(2,C) Floer homology | Given a Heegaard splitting of a three-manifold Y, we consider the SL(2,C)
character variety of the Heegaard surface, and two complex Lagrangians
associated to the handlebodies. We focus on the smooth open subset
corresponding to irreducible representations. On that subset, the intersection
of the Lagrangians is an oriented d-critical locus in the sense of Joyce. Bussi
associates to such an intersection a perverse sheaf of vanishing cycles. We
prove that in our setting, the perverse sheaf is an invariant of Y, i.e., it is
independent of the Heegaard splitting. The hypercohomology of this sheaf can be
viewed as a model for (the dual of) SL(2,C) instanton Floer homology. We also
present a framed version of this construction, which takes into account
reducible representations. We give explicit computations for lens spaces and
Brieskorn spheres, and discuss the connection to the Kapustin-Witten equations
and Khovanov homology.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,694 | Discriminative models for multi-instance problems with tree-structure | Modeling network traffic is gaining importance in order to counter modern
threats of ever-increasing sophistication. It is, however, surprisingly difficult
and costly to construct reliable classifiers on top of telemetry data, due to
the variety and complexity of signals that no human can manage to interpret in
full. Obtaining training data with a sufficiently large and variable body of
labels can thus be seen as a prohibitive problem. The goal of this work is to
detect infected computers by observing their HTTP(S) traffic collected from
network sensors, which are typically proxy servers or network firewalls, while
relying on only minimal human input in model training phase. We propose a
discriminative model that makes decisions based on all of a computer's traffic
observed during a predefined time window (5 minutes in our case). The model is
trained on traffic samples collected over equally sized time windows from a
large number of computers, where the only labels needed are human verdicts about
each computer as a whole (presumed infected vs. presumed clean). As part of training,
the model itself recognizes discriminative patterns in traffic targeted to
individual servers and constructs the final high-level classifier on top of
them. We show the classifier to perform with very high precision, while the
learned traffic patterns can be interpreted as Indicators of Compromise. In the
following we implement the discriminative model as a neural network with
special structure reflecting two stacked multi-instance problems. The main
advantages of the proposed configuration include not only improved accuracy and
ability to learn from gross labels, but also automatic learning of server types
(together with their detectors) which are typically visited by infected
computers.
| 1 | 0 | 0 | 0 | 0 | 0 |
17,695 | Functional advantages offered by many-body coherences in biochemical systems | Quantum coherence phenomena driven by electronic-vibrational (vibronic)
interactions are being reported in many pulse-driven (e.g. laser-driven) chemical and
biophysical systems. But what systems-level advantage(s) do such many-body
coherences offer to future technologies? We address this question for pulsed
systems of general size N, akin to the LHCII aggregates found in green plants.
We show that external pulses generate vibronic states containing particular
multipartite entanglements, and that such collective vibronic states increase
the excitonic transfer efficiency. The strength of these many-body coherences
and their robustness to decoherence, increase with aggregate size N and do not
require strong electronic-vibrational coupling. The implications for energy and
information transport are discussed.
| 0 | 1 | 0 | 0 | 0 | 0 |
17,696 | Privacy Mining from IoT-based Smart Homes | Recently, a wide range of smart devices are deployed in a variety of
environments to improve the quality of human life. One of the important
IoT-based applications is smart homes for healthcare, especially for elders.
IoT-based smart homes enable elders' health to be properly monitored and taken
care of. However, elders' privacy might be disclosed from smart homes due to
non-fully protected network communication or other reasons. To demonstrate how
serious this issue is, we introduce in this paper a Privacy Mining Approach
(PMA) to mine privacy from smart homes by conducting a series of deductions and
analyses on sensor datasets generated by smart homes. The experimental results
demonstrate that PMA is able to deduce a global sensor topology for a smart
home and disclose elders' privacy in terms of their house layouts.
| 0 | 0 | 0 | 1 | 0 | 0 |
17,697 | Idempotent ordered semigroup | An element e of an ordered semigroup $(S,\cdot,\leq)$ is called an ordered
idempotent if $e\leq e^2$. We call an ordered semigroup $S$ an idempotent
ordered semigroup if every element of $S$ is an ordered idempotent. Every
idempotent semigroup is a complete semilattice of rectangular idempotent
semigroups, and in this way we arrive at many other important classes of
idempotent ordered
semigroups.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,698 | Hybrid bounds for twists of $GL(3)$ $L$-functions | Let $\pi$ be a Hecke-Maass cusp form for $SL(3,\mathbb{Z})$ and
$\chi=\chi_1\chi_2$ a Dirichlet character with $\chi_i$ primitive modulo $M_i$.
Suppose that $M_1$, $M_2$ are primes such that
$\max\{(M|t|)^{1/3+2\delta/3},M^{2/5}|t|^{-9/20},
M^{1/2+2\delta}|t|^{-3/4+2\delta}\}(M|t|)^{\varepsilon}<M_1< \min\{
(M|t|)^{2/5},(M|t|)^{1/2-8\delta}\}(M|t|)^{-\varepsilon}$ for any
$\varepsilon>0$, where $M=M_1M_2$, $|t|\geq 1$ and $0<\delta< 1/52$. Then we
have $$ L\left(\frac{1}{2}+it,\pi\otimes \chi\right)\ll_{\pi,\varepsilon}
(M|t|)^{3/4-\delta+\varepsilon}. $$
| 0 | 0 | 1 | 0 | 0 | 0 |
17,699 | Modeling stochastic skew of FX options using SLV models with stochastic spot/vol correlation and correlated jumps | It is known that the implied volatility skew of FX options demonstrates a
stochastic behavior which is called stochastic skew. In this paper we create
stochastic skew by assuming the spot/instantaneous variance correlation to be
stochastic. Accordingly, we consider a class of SLV models with stochastic
correlation, where all drivers (the spot, the instantaneous variance, and their
correlation) are modeled by Levy processes. We assume all diffusion components
to be fully correlated as well as all jump components. A new fully implicit
splitting finite-difference scheme is proposed for solving forward PIDE which
is used when calibrating the model to market prices of the FX options with
different strikes and maturities. The scheme is unconditionally stable, of
second order of approximation in time and space, and achieves a linear
complexity in each spatial direction. The results of simulation obtained by
using this model demonstrate capacity of the presented approach in modeling
stochastic skew.
| 0 | 0 | 1 | 0 | 0 | 0 |
17,700 | Computational Study of Amplitude-to-Phase Conversion in a Modified Uni-Traveling Carrier (MUTC) Photodetector | We calculate the amplitude-to-phase (AM-to-PM) noise conversion in a modified
uni-traveling carrier (MUTC) photodetector. We obtain the two nulls measured in
the experiments and explain their origin. The nulls appear because the transit
time varies with the average photocurrent, which in turn is due to the change
in electron velocity as the average photocurrent varies. We also show that the
AM-to-PM conversion coefficient
depends only on the pulse energy and is independent of the pulse duration when
the duration is less than 500 fs. When the pulse duration is larger than 500
fs, the nulls of the AM-to-PM conversion coefficient shift to larger average
photocurrents. This shift occurs because the increase in that pulse duration
leads to a decrease in the peak photocurrent. The AM-to-PM noise conversion
coefficient changes as the repetition rate varies. However, the repetition rate
does not change the AM-to-PM conversion coefficient as a function of input
optical pulse energy. The repetition rate changes the average photocurrent. We
propose a design that would in theory improve the performance of the device.
| 0 | 1 | 0 | 0 | 0 | 0 |