Dataset schema (each record: title, abstract, and six 0/1 subject labels):
- title: string, lengths 7 to 239
- abstract: string, lengths 7 to 2.76k
- cs: int64, 0 or 1
- phy: int64, 0 or 1
- math: int64, 0 or 1
- stat: int64, 0 or 1
- quantitative biology: int64, 0 or 1
- quantitative finance: int64, 0 or 1
Accurate Real Time Localization Tracking in A Clinical Environment using Bluetooth Low Energy and Deep Learning
Deep learning has started to revolutionize several different industries, and the applications of these methods in medicine are now becoming more commonplace. This study focuses on investigating the feasibility of tracking patients and clinical staff wearing Bluetooth Low Energy (BLE) tags in a radiation oncology clinic using artificial neural networks (ANNs) and convolutional neural networks (CNNs). The performance of these networks was compared to relative received signal strength indicator (RSSI) thresholding and triangulation. By utilizing temporal information, a combined CNN+ANN network was capable of correctly identifying the location of the BLE tag with an accuracy of 99.9%. It outperformed a CNN model (accuracy = 94%), a thresholding model employing majority voting (accuracy = 95%), and a triangulation classifier utilizing majority voting (accuracy = 95%). Future studies will seek to deploy this affordable real time location system in hospitals to improve clinical workflow, efficiency, and patient safety.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Metropolis Sampling
Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation and optimization problems. The Markov Chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods which generate a Markov chain with the desired invariant distribution. In this document, we focus on the Metropolis-Hastings (MH) sampler, which can be considered the atom of the MCMC techniques, introducing the basic notions and different properties. We describe in detail all the elements involved in the MH algorithm and the most relevant variants. Several improvements and recent extensions proposed in the literature are also briefly discussed, providing a quick but exhaustive overview of the current world of Metropolis-based sampling.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
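The abstract above surveys the Metropolis-Hastings sampler; as a minimal illustration of the algorithm it describes, here is a random-walk MH sketch in Python. The Gaussian-mixture target, proposal scale, and sample counts are illustrative assumptions, not details from the paper.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_samples, proposal_scale=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, scale^2),
    accept with probability min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    samples = np.empty(n_samples)
    x = x0
    lp = log_target(x)
    for i in range(n_samples):
        x_prop = x + proposal_scale * rng.standard_normal()
        lp_prop = log_target(x_prop)
        # Symmetric proposal, so the MH ratio reduces to the target ratio.
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = x_prop, lp_prop
        samples[i] = x
    return samples

# Illustrative target: a two-component Gaussian mixture (an assumption).
log_target = lambda x: np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)
draws = metropolis_hastings(log_target, x0=0.0, n_samples=5000)
print(draws.mean(), draws.std())
```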
Approximate Structure Construction Using Large Statistical Swarms
In this paper we describe a novel local algorithm for large statistical swarms using "harmonic attractor dynamics", by means of which a swarm can construct harmonics of the environment. This in turn allows the swarm to approximately reconstruct desired structures in the environment. The robots navigate in a discrete environment, completely free of localization; each robot can communicate only with other robots in its own discrete cell, and can sense or take reliable action within a disk of radius $r$ around itself. We present the mathematics that underlie such dynamics and report initial results demonstrating the proposed algorithm.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Search for Food of Birds, Fish and Insects
This book chapter introduces the problem of the extent to which search strategies of foraging biological organisms can be identified by statistical data analysis and mathematical modeling. A famous paradigm in this field is the Lévy Flight Hypothesis: it states that, under certain mathematical conditions, Lévy flights, which are a key concept in the theory of anomalous stochastic processes, provide an optimal search strategy. This hypothesis may be understood biologically as the claim that Lévy flights represent an evolutionarily adaptive optimal search strategy for foraging organisms. Another interpretation, however, is that Lévy flights emerge from the interaction between a forager and a given (scale-free) distribution of food sources. These hypotheses are discussed controversially in the current literature. We give examples and counterexamples of experimental data and their analyses supporting and challenging them.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
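As a minimal sketch of the Lévy flights discussed in the abstract above: step lengths are drawn from a power-law distribution P(l) ~ l^(-mu) by inverse-transform sampling, with uniformly random directions. The exponent mu, minimum step length, and step count are illustrative assumptions, not values from the chapter.

```python
import numpy as np

def levy_flight(n_steps, mu=2.0, l_min=1.0, seed=0):
    """2D Levy flight: power-law step lengths P(l) ~ l^{-mu} (l >= l_min),
    uniformly random directions. mu in (1, 3] gives superdiffusive motion."""
    rng = np.random.default_rng(seed)
    u = rng.random(n_steps)
    lengths = l_min * u ** (-1.0 / (mu - 1.0))   # inverse-transform sampling
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    steps = np.column_stack((lengths * np.cos(angles), lengths * np.sin(angles)))
    return np.cumsum(steps, axis=0)              # trajectory of positions

path = levy_flight(1000)
print(path[-1])  # final position after 1000 steps
```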
Poverty Prediction with Public Landsat 7 Satellite Imagery and Machine Learning
Obtaining detailed and reliable data about local economic livelihoods in developing countries is expensive, and data are consequently scarce. Previous work has shown that it is possible to measure local-level economic livelihoods using high-resolution satellite imagery. However, such imagery is relatively expensive to acquire, often not updated frequently, and is mainly available for recent years. We train CNN models on free and publicly available multispectral daytime satellite images of the African continent from the Landsat 7 satellite, which has collected imagery with global coverage for almost two decades. We show that despite these images' lower resolution, we can achieve accuracies that exceed previous benchmarks.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Implicit Regularization in Matrix Factorization
We study implicit regularization when optimizing an underdetermined quadratic objective over a matrix $X$ with gradient descent on a factorization of $X$. We conjecture and provide empirical and theoretical evidence that with small enough step sizes and initialization close enough to the origin, gradient descent on a full dimensional factorization converges to the minimum nuclear norm solution.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
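A small numerical sketch of the conjecture in the abstract above: gradient descent on a full-dimensional factorization X = U U^T of an underdetermined least-squares objective (here, observed entries of a low-rank symmetric matrix, an assumed instance), initialized close to the origin, tends toward a small-nuclear-norm solution. The problem size, step size, and iteration count are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
G = rng.standard_normal((n, 2))
M = G @ G.T                                   # rank-2 PSD ground truth (assumed)
obs = rng.random((n, n)) < 0.5
mask = np.triu(obs) | np.triu(obs).T          # symmetric set of observed entries

# Gradient descent on the full-dimensional factorization X = U U^T,
# initialized near the origin, for the underdetermined objective
# 0.5 * || mask * (U U^T - M) ||_F^2.
U = 1e-4 * rng.standard_normal((n, n))
lr = 0.01
for _ in range(30000):
    grad_X = mask * (U @ U.T - M)             # gradient with respect to X
    U -= lr * 2 * grad_X @ U                  # chain rule through X = U U^T

nuc = lambda Z: np.linalg.svd(Z, compute_uv=False).sum()
print("nuclear norm, GD solution :", round(nuc(U @ U.T), 3))
print("nuclear norm, ground truth:", round(nuc(M), 3))
```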
A general theory of singular values with applications to signal denoising
We study the Pareto frontier for two competing norms $\|\cdot\|_X$ and $\|\cdot\|_Y$ on a vector space. For a given vector $c$, the Pareto frontier describes the possible values of $(\|a\|_X,\|b\|_Y)$ for a decomposition $c=a+b$. The singular value decomposition of a matrix is closely related to the Pareto frontier for the spectral and nuclear norm. We will develop a general theory that extends the notion of singular values of a matrix to arbitrary finite dimensional Euclidean vector spaces equipped with dual norms. This also generalizes the diagonal singular value decompositions for tensors introduced by the author in previous work. We can apply the results to denoising, where $c$ is a noisy signal, $a$ is a sparse signal and $b$ is noise. Applications include 1D total variation denoising, 2D total variation Rudin-Osher-Fatemi image denoising, LASSO, basis pursuit denoising and tensor decompositions.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
A Hybrid Deep Learning Architecture for Privacy-Preserving Mobile Analytics
Deep Neural Networks are increasingly being used in a variety of machine learning applications applied to user data on the cloud. However, this approach introduces a number of privacy and efficiency challenges, as the cloud operator can perform secondary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient, and private, data processing at the source for simple tasks and lighter models, though they remain a challenge for larger and more complicated models. In this paper, we present a hybrid approach for breaking down large, complex deep models for cooperative, privacy-preserving analytics. We do this by breaking down popular deep architectures and fine-tuning them in a suitable way. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also assess the local inference cost of different layers on a modern handset for mobile applications. Our evaluations show that, by using certain kinds of fine-tuning and embedding techniques and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data features on the cloud, and hence achieve the desired trade-off between privacy and performance.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Exploiting Apache Spark platform for CMS computing analytics
CERN IT provides a set of Hadoop clusters featuring more than 5 PBytes of raw storage, with different open-source, user-level tools available for analytical purposes. The CMS experiment has been collecting a large set of computing meta-data, e.g., dataset and file access logs, since 2015. These records represent a valuable, yet scarcely investigated, set of information that needs to be cleaned, categorized and analyzed. CMS can use this information to discover useful patterns and enhance the overall efficiency of the distributed data, improve CPU and site utilization as well as task completion time. Here we present an evaluation of the Apache Spark platform for CMS needs. We discuss two main use cases, CMS analytics and ML studies, where efficiently processing billions of records stored on HDFS plays an important role. We demonstrate that both Scala and Python (PySpark) APIs can be successfully used to execute extremely I/O intensive queries and provide valuable data insight from the collected meta-data.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
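Since the abstract above concerns PySpark queries over CMS meta-data on HDFS, here is a hedged sketch of the kind of aggregation involved. The HDFS path and the column name (`dataset`) are hypothetical placeholders, not the experiment's actual schema.

```python
from pyspark.sql import SparkSession

# Hypothetical location and schema of access-log records (assumptions).
spark = SparkSession.builder.appName("cms-analytics-sketch").getOrCreate()
logs = spark.read.json("hdfs:///cms/access_logs/*.json")

# Example of an I/O-heavy aggregation: most frequently accessed datasets.
(logs.groupBy("dataset")
     .count()
     .orderBy("count", ascending=False)
     .show(10))
```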
Scalable Co-Optimization of Morphology and Control in Embodied Machines
Evolution sculpts both the body plans and nervous systems of agents together over time. In contrast, in AI and robotics, a robot's body plan is usually designed by hand, and control policies are then optimized for that fixed design. The task of simultaneously co-optimizing the morphology and controller of an embodied robot has remained a challenge. In psychology, the theory of embodied cognition posits that behavior arises from a close coupling between body plan and sensorimotor control, which suggests why co-optimizing these two subsystems is so difficult: most evolutionary changes to morphology tend to adversely impact sensorimotor control, leading to an overall decrease in behavioral performance. Here, we further examine this hypothesis and demonstrate a technique for "morphological innovation protection", which temporarily reduces selection pressure on recently morphologically-changed individuals, thus giving evolution some time to "readapt" to the new morphology with subsequent control policy mutations. We show the potential for this method to avoid local optima and converge to similar highly fit morphologies across widely varying initial conditions, while sustaining fitness improvements further into optimization. While this technique is admittedly only the first of many steps that must be taken to achieve scalable optimization of embodied machines, we hope that theoretical insight into the cause of evolutionary stagnation in current methods will help to enable the automation of robot design and behavioral training -- while simultaneously providing a testbed to investigate the theory of embodied cognition.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Normalizing the Taylor expansion of non-deterministic λ-terms, via parallel reduction of resource vectors
It has been known since Ehrhard and Regnier's seminal work on the Taylor expansion of {\lambda}-terms that this operation commutes with normalization: the expansion of a {\lambda}-term is always normalizable and its normal form is the expansion of the Böhm tree of the term. We generalize this result to the non-uniform setting of the algebraic {\lambda}-calculus, i.e. {\lambda}-calculus extended with linear combinations of terms. This requires us to tackle two difficulties: foremost is the fact that Ehrhard and Regnier's techniques rely heavily on the uniform, deterministic nature of the ordinary {\lambda}-calculus, and thus cannot be adapted; second is the absence of any satisfactory generic extension of the notion of Böhm tree in presence of quantitative non-determinism, which is reflected by the fact that the Taylor expansion of an algebraic {\lambda}-term is not always normalizable. Our solution is to provide a fine grained study of the dynamics of {\beta}-reduction under Taylor expansion, by introducing a notion of reduction on resource vectors, i.e. infinite linear combinations of resource {\lambda}-terms. The latter form the multilinear fragment of the differential {\lambda}-calculus, and resource vectors are the target of the Taylor expansion of {\lambda}-terms. We show the reduction of resource vectors contains the image of any {\beta}-reduction step, from which we deduce that Taylor expansion and normalization commute on the nose. We moreover identify a class of algebraic {\lambda}-terms, encompassing both normalizable algebraic {\lambda}-terms and arbitrary ordinary {\lambda}-terms: the expansion of these is always normalizable, which guides the definition of a generalization of Böhm trees to this setting.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Quantifying telescope phase discontinuities external to AO-systems by use of Phase Diversity and Focal Plane Sharpening
We propose and apply two methods to estimate pupil plane phase discontinuities for two realistic scenarios on VLT and Keck. The methods use both Phase Diversity and a form of image sharpening. For the case of VLT, we simulate the `low wind effect' (LWE), which is responsible for focal plane errors in the SPHERE system in low wind and good seeing conditions. We successfully estimate the simulated LWE using both methods, and show that they are complementary to one another. We also demonstrate that single-image Phase Diversity (also known as Phase Retrieval with diversity) is capable of estimating the simulated LWE when using the natural defocus on the SPHERE/DTTS imager. We demonstrate that Phase Diversity can estimate the LWE to within 30 nm RMS WFE, which is within the allowable tolerances to achieve a target SPHERE contrast of 10$^{-6}$. Finally, we simulate 153 nm RMS of piston errors on the mirror segments of Keck and produce NIRC2 images subject to these effects. We show that a single, diverse image with 1.5 waves (PV) of focus can be used to estimate this error to within 29 nm RMS WFE, and that a perfect correction of our estimate would increase the Strehl ratio of a NIRC2 image by 12\%.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Adaptive Multilevel Monte Carlo Approximation of Distribution Functions
We analyse a multilevel Monte Carlo method for the approximation of distribution functions of univariate random variables. Since, by assumption, the target distribution is not known explicitly, approximations have to be used. We provide an asymptotic analysis of the error and the cost of the algorithm. Furthermore we construct an adaptive version of the algorithm that does not require any a priori knowledge on weak or strong convergence rates. We apply the adaptive algorithm to smooth path-independent and path-dependent functionals and to stopped exit times of SDEs.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
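A generic multilevel Monte Carlo sketch to illustrate the telescoping estimator the abstract above builds on, applied to an Euler-discretized geometric Brownian motion (an assumed example). The adaptive, distribution-function-specific machinery of the paper is not reproduced; note also that the plain indicator functional used here has the poor level-variance decay that motivates the paper's smoothed approximations.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_pair(n_paths, level, T=1.0, x0=1.0, mu=0.05, sigma=0.2, M=2):
    """Simulate GBM with Euler steps on a fine grid (M^level steps) and the
    coupled coarse grid (M^(level-1) steps) driven by the same Brownian path."""
    n_fine = M ** level
    dt = T / n_fine
    dW = rng.standard_normal((n_paths, n_fine)) * np.sqrt(dt)
    xf = np.full(n_paths, x0)
    for i in range(n_fine):
        xf = xf + mu * xf * dt + sigma * xf * dW[:, i]
    if level == 0:
        return xf, None
    xc = np.full(n_paths, x0)
    dWc = dW.reshape(n_paths, n_fine // M, M).sum(axis=2)  # aggregated increments
    for i in range(n_fine // M):
        xc = xc + mu * xc * (M * dt) + sigma * xc * dWc[:, i]
    return xf, xc

def mlmc_mean(f, max_level=5, n_samples=(100000, 50000, 25000, 12000, 6000, 3000)):
    """Telescoping sum E[f(X_L)] = E[f(X_0)] + sum_l E[f(X_l) - f(X_{l-1})]."""
    est = 0.0
    for level in range(max_level + 1):
        xf, xc = euler_pair(n_samples[level], level)
        est += f(xf).mean() if level == 0 else (f(xf) - f(xc)).mean()
    return est

# Illustrative functional: P(X_T <= 1), one point of the distribution function.
print(mlmc_mean(lambda x: (x <= 1.0).astype(float)))
```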
Constraints on the pre-impact orbits of Solar System giant impactors
We provide a fast method for computing constraints on impactor pre-impact orbits, applying this to the late giant impacts in the Solar System. These constraints can be used to make quick, broad comparisons of different collision scenarios, identifying some immediately as low-probability events, and narrowing the parameter space in which to target follow-up studies with expensive N-body simulations. We benchmark our parameter space predictions, finding good agreement with existing N-body studies for the Moon. We suggest that high-velocity impact scenarios in the inner Solar System, including all currently proposed single impact scenarios for the formation of Mercury, should be disfavoured. This leaves a multiple hit-and-run scenario as the most probable currently proposed for the formation of Mercury.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Dark matter in the Reticulum II dSph: a radio search
We present a deep radio search in the Reticulum II dwarf spheroidal (dSph) galaxy performed with the Australia Telescope Compact Array. Observations were conducted at 16 cm wavelength, with an rms sensitivity of 0.01 mJy/beam, and with the goal of searching for synchrotron emission induced by annihilation or decay of weakly interacting massive particles (WIMPs). Data were complemented with observations on large angular scales taken with the KAT-7 telescope. We find no evidence for a diffuse emission from the dSph and we derive competitive bounds on the WIMP properties. In addition, we detect more than 200 new background radio sources. Among them, we show there are two compelling candidates for being the radio counterpart of the possible gamma-ray emission reported by other groups using Fermi-LAT data.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Relevant change points in high dimensional time series
This paper investigates the problem of detecting relevant change points in the mean vector, say $\mu_t =(\mu_{1,t},\ldots ,\mu_{d,t})^T$ of a high dimensional time series $(Z_t)_{t\in \mathbb{Z}}$. While the recent literature on testing for change points in this context considers hypotheses for the equality of the means $\mu_h^{(1)}$ and $\mu_h^{(2)}$ before and after the change points in the different components, we are interested in a null hypothesis of the form $$ H_0: |\mu^{(1)}_{h} - \mu^{(2)}_{h} | \leq \Delta_h ~~~\mbox{ for all } ~~h=1,\ldots ,d $$ where $\Delta_1, \ldots , \Delta_d$ are given thresholds for which a smaller difference of the means in the $h$-th component is considered to be non-relevant. We propose a new test for this problem based on the maximum of squared and integrated CUSUM statistics and investigate its properties as the sample size $n$ and the dimension $d$ both converge to infinity. In particular, using Gaussian approximations for the maximum of a large number of dependent random variables, we show that on certain points of the boundary of the null hypothesis a standardised version of the maximum converges weakly to a Gumbel distribution.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
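A small sketch of the building block named in the abstract above: the per-component CUSUM process and the maximum of its squared, integrated version across components. The long-run variance standardization and the Gaussian-approximation calibration from the paper are omitted, and the data generation (an assumed mean shift in one component) is for illustration only.

```python
import numpy as np

def cusum_process(z):
    """CUSUM process U(s) = n^{-1/2} * (sum_{t<=sn} z_t - s * sum_t z_t),
    evaluated at s = k/n for k = 1..n."""
    n = len(z)
    partial = np.cumsum(z)
    s = np.arange(1, n + 1) / n
    return (partial - s * partial[-1]) / np.sqrt(n)

def max_integrated_squared_cusum(Z):
    """Maximum over components h of the integrated squared CUSUM statistic
    (standardization omitted)."""
    stats = [np.mean(cusum_process(Z[:, h]) ** 2) for h in range(Z.shape[1])]
    return max(stats), int(np.argmax(stats))

# Illustrative data: d-dimensional series with a mean shift of size 1 in
# component 0 halfway through (an assumption for demonstration).
rng = np.random.default_rng(0)
n, d = 400, 20
Z = rng.standard_normal((n, d))
Z[n // 2:, 0] += 1.0
stat, comp = max_integrated_squared_cusum(Z)
print(f"max integrated squared CUSUM = {stat:.3f}, attained in component {comp}")
```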
Disentangling in Variational Autoencoders with Natural Clustering
Learning representations that disentangle the underlying factors of variability in data is an intuitive precursor to AI with human-like reasoning. Consequently, it has been the object of many efforts of the machine learning community. This work takes a step further in this direction by addressing the scenario where generative factors present a multimodal distribution due to the existence of class distinction in the data. We formulate a lower bound on the joint distribution of inputs and class labels and present N-VAE, a model which is capable of separating factors of variation which are exclusive to certain classes from factors that are shared among classes. This model implements the natural clustering prior through the use of a class-conditioned latent space and a shared latent space. We show its usefulness for detecting and disentangling class-dependent generative factors as well as for generating rich artificial samples.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Calibrated Boosting-Forest
Excellent ranking power along with well calibrated probability estimates are needed in many classification tasks. In this paper, we introduce a technique, Calibrated Boosting-Forest, that captures both. This novel technique is an ensemble of gradient boosting machines that can support both continuous and binary labels. While offering superior ranking power over any individual regression or classification model, Calibrated Boosting-Forest is able to preserve well calibrated posterior probabilities. Along with these benefits, we provide an alternative to the tedious step of tuning gradient boosting machines. We demonstrate that tuning Calibrated Boosting-Forest can be reduced to a simple hyper-parameter selection. We further establish that increasing this hyper-parameter improves the ranking performance with diminishing returns. We examine the effectiveness of Calibrated Boosting-Forest on ligand-based virtual screening, where both continuous and binary labels are available, and compare the performance of Calibrated Boosting-Forest with logistic regression, gradient boosting machine and deep learning. Calibrated Boosting-Forest achieved an approximately 48% improvement compared to a state-of-the-art deep learning model. Moreover, it achieved around 95% improvement on probability quality measurement compared to the best individual gradient boosting machine. Calibrated Boosting-Forest offers a benchmark demonstration that, in the field of ligand-based virtual screening, deep learning is not the universally dominant machine learning model and good calibrated probabilities can better facilitate the virtual screening process.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Phase Diagram of $α$-RuCl$_3$ in an in-plane Magnetic Field
The low-temperature magnetic phases in the layered honeycomb lattice material $\alpha$-RuCl$_3$ have been studied as a function of in-plane magnetic field. In zero field this material orders magnetically below 7 K with so-called zigzag order within the honeycomb planes. Neutron diffraction data show that a relatively small applied field of 2 T is sufficient to suppress the population of the magnetic domain in which the zigzag chains run along the field direction. We found that the intensity of the magnetic peaks due to zigzag order is continuously suppressed with increasing field until their disappearance at $\mu_0 H_c = 8$ T. At still higher fields (above 8 T) the zigzag order is destroyed, while bulk magnetization and heat capacity measurements suggest that the material enters a state with gapped magnetic excitations. We discuss the magnetic phase diagram obtained in our study in the context of a quantum phase transition.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Post hoc inference via joint family-wise error rate control
We introduce a general methodology for post hoc inference in a large-scale multiple testing framework. The approach is called "user-agnostic" in the sense that the statistical guarantee on the number of correct rejections holds for any set of candidate items selected by the user (after having seen the data). This task is investigated by defining a suitable criterion, named the joint-family-wise-error rate (JER for short). We propose several procedures for controlling the JER, with a special focus on incorporating dependencies while adapting to the unknown quantity of signal (via a step-down approach). We show that our proposed setting incorporates as particular cases a version of the higher criticism as well as the closed testing based approach of Goeman and Solari (2011). Our theoretical statements are supported by numerical experiments.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Jamming Resistant Receivers for Massive MIMO
We design jamming-resistant receivers to enhance the robustness of a massive MIMO uplink channel against jamming. In the pilot phase, we estimate not only the desired channel, but also the jamming channel by exploiting purposely unused pilot sequences. The jamming channel estimate is used to construct the linear receive filter to reduce the impact that jamming has on the achievable rates. The performance of the proposed scheme is analytically and numerically evaluated. These results show that the proposed scheme greatly improves the rates, as compared to conventional receivers. Moreover, the proposed scheme still works well even with stronger jamming power.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Thickening and sickening the SYK model
We discuss higher dimensional generalizations of the 0+1-dimensional Sachdev-Ye-Kitaev (SYK) model that has recently become the focus of intensive interdisciplinary studies by both the condensed matter and field-theoretical communities. Unlike the previous constructions where multiple SYK copies would be coupled to each other and/or hybridized with itinerant fermions via spatially short-ranged random hopping processes, we study algebraically varying long-range (spatially and/or temporally) correlated random couplings in the general d+1 dimensions. Such pertinent topics as translationally-invariant strong-coupling solutions, emergent reparametrization symmetry, effective action for fluctuations, chaotic behavior, and diffusive transport (or a lack thereof) are all addressed. We find that the most appealing properties of the original SYK model that suggest the existence of its 1+1-dimensional holographic gravity dual do not survive the aforementioned generalizations, thus lending no additional support to the hypothetical broad (including 'non-AdS/non-CFT') holographic correspondence.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Hidden chiral symmetries in BDI multichannel Kitaev chains
Realistic implementations of the Kitaev chain require, in general, the introduction of extra internal degrees of freedom. In the present work, we discuss the presence of hidden BDI symmetries for free Hamiltonians describing systems with an arbitrary number of internal degrees of freedom. We generalize results of a spinful Kitaev chain to construct a Hamiltonian with $n$ internal degrees of freedom and obtain the corresponding hidden chiral symmetry. As an explicit application of this generalized result, we examine, by analytical and numerical calculations, the case of a spinful 2-band Kitaev chain, which can host up to 4 Majorana bound states. We also observe the appearance of minigap states when chiral symmetry is broken.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Charge Berezinskii-Kosterlitz-Thouless transition in superconducting NbTiN films
A half-century after the discovery of the superconductor-insulator transition (SIT), one of the fundamental predictions of the theory, the charge Berezinskii-Kosterlitz-Thouless (BKT) transition that is expected to occur at the insulating side of the SIT, has remained unobserved. The charge BKT transition is a phenomenon dual to the vortex BKT transition, which is at the heart of the very existence of two-dimensional superconductivity as a zero-resistance state appearing at finite temperatures. The dual picture points to the possibility of the existence of a superinsulating state endowed with zero conductance at finite temperature. Here, we report the observation of the charge BKT transition on the insulating side of the SIT, identified by the critical behavior of the resistance. We find that the critical temperature of the charge BKT transition depends on the magnetic field, first growing rapidly and then passing through a maximum at fields much less than the upper critical field. Finally, we ascertain the effects of the finite electrostatic screening length and its divergence at the magnetic field-tuned approach to the superconductor-insulator transition.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Seed-Driven Geo-Social Data Extraction - Full Version
Geo-social data has been an attractive source for a variety of problems such as mining mobility patterns, link prediction, location recommendation, and influence maximization. However, new geo-social data is increasingly unavailable and suffers from several limitations. In this paper, we aim to remedy the problem of effective data extraction from geo-social data sources. We first identify and categorize the limitations of extracting geo-social data. In order to overcome the limitations, we propose a novel seed-driven approach that uses the points of one source as the seed to feed as queries for the others. We additionally handle differences between, and dynamics within, the sources by proposing three variants for optimizing search radius. Furthermore, we provide an optimization based on recursive clustering to minimize the number of requests and an adaptive procedure to learn the specific data distribution of each source. Our comprehensive experiments with six popular sources show that our seed-driven approach yields 14.3 times more data overall, while our request-optimized algorithm retrieves up to 95% of the data with less than 16% of the requests. Thus, our proposed seed-driven approach sets new standards for effective and efficient extraction of geo-social data.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bayesian random-effects meta-analysis using the bayesmeta R package
The random-effects or normal-normal hierarchical model is commonly utilized in a wide range of meta-analysis applications. A Bayesian approach to inference is very attractive in this context, especially when a meta-analysis is based on only a few studies. The bayesmeta R package provides readily accessible tools to perform Bayesian meta-analyses and generate plots and summaries, without having to worry about computational details. It allows for flexible prior specification and instant access to the resulting posterior distributions, including prediction and shrinkage estimation, and facilitates, for example, quick sensitivity checks. The present paper introduces the underlying theory and showcases its usage.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
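bayesmeta is an R package; as a language-neutral sketch of the normal-normal hierarchical model it implements, here is a Python grid approximation of the joint posterior of the overall effect mu and the heterogeneity tau. The study data, flat priors, and grid bounds are assumptions; the package itself supports richer prior choices and exact summaries.

```python
import numpy as np
from scipy import stats

# Illustrative data: estimates and standard errors from a few studies (assumed).
y = np.array([0.10, 0.35, 0.25, -0.05, 0.45])
sigma = np.array([0.15, 0.20, 0.10, 0.25, 0.30])

# Normal-normal hierarchical model: y_i ~ N(mu, sigma_i^2 + tau^2).
# Grid posterior under flat priors on mu and tau (an assumption).
mu_grid = np.linspace(-0.5, 1.0, 301)
tau_grid = np.linspace(0.0, 1.0, 201)
MU, TAU = np.meshgrid(mu_grid, tau_grid, indexing="ij")
loglik = np.zeros_like(MU)
for yi, si in zip(y, sigma):
    loglik += stats.norm.logpdf(yi, loc=MU, scale=np.sqrt(si**2 + TAU**2))
post = np.exp(loglik - loglik.max())
post /= post.sum()

print("posterior mean of mu :", float((post.sum(axis=1) * mu_grid).sum()))
print("posterior mean of tau:", float((post.sum(axis=0) * tau_grid).sum()))
```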
Dirichlet Mixture Model based VQ Performance Prediction for Line Spectral Frequency
In this paper, we continue our previous work on the Dirichlet mixture model (DMM)-based VQ to derive the performance bound of the LSF VQ. The LSF parameters are transformed into the $\Delta$LSF domain and the underlying distribution of the $\Delta$LSF parameters is modelled by a DMM with a finite number of mixture components. The quantization distortion, in terms of the mean squared error (MSE), is calculated with high rate theory. The mapping relation between the perceptually motivated log spectral distortion (LSD) and the MSE is empirically approximated by a polynomial. With this mapping function, the minimum required bit rate for transparent coding of the LSF is estimated.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Query K-means Clustering and the Double Dixie Cup Problem
We consider the problem of approximate $K$-means clustering with outliers and side information provided by same-cluster queries and possibly noisy answers. Our solution shows that, under some mild assumptions on the smallest cluster size, one can obtain a $(1+\epsilon)$-approximation for the optimal potential with probability at least $1-\delta$, where $\epsilon>0$ and $\delta\in(0,1)$, using an expected number of $O(\frac{K^3}{\epsilon \delta})$ noiseless same-cluster queries and comparison-based clustering of complexity $O(ndK + \frac{K^3}{\epsilon \delta})$, where $n$ denotes the number of points and $d$ the dimension of the space. Compared to a handful of other known approaches that perform importance sampling to account for small cluster sizes, the proposed query technique reduces the number of queries by a factor of roughly $O(\frac{K^6}{\epsilon^3})$, at the cost of possibly missing very small clusters. We extend this setting to the case where some queries to the oracle produce erroneous information, and where certain points, termed outliers, do not belong to any clusters. Our proof techniques differ from previous methods used for $K$-means clustering analysis, as they rely on estimating the sizes of the clusters and the number of points needed for accurate centroid estimation, and on subsequent nontrivial generalizations of the double Dixie cup problem. We illustrate the performance of the proposed algorithm on both synthetic and real datasets, including MNIST and CIFAR-10.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
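A toy sketch of the same-cluster-query primitive underlying the abstract above: a noiseless oracle groups a sample of points into clusters, after which centroids are estimated. This illustrates only the oracle interface, not the paper's query-efficient algorithm or its noise and outlier handling; the Gaussian-blob data are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative data: three Gaussian blobs (an assumption).
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 6.0]])
labels_true = rng.integers(0, 3, size=300)
X = centers[labels_true] + rng.standard_normal((300, 2))

def same_cluster(i, j):
    """Noiseless same-cluster oracle (the paper also treats noisy answers)."""
    return labels_true[i] == labels_true[j]

# Group a random sample into clusters via oracle queries: each new point is
# compared against one representative per discovered cluster.
sample = rng.choice(len(X), size=60, replace=False)
reps, groups, n_queries = [], [], 0
for i in sample:
    for g, r in enumerate(reps):
        n_queries += 1
        if same_cluster(i, r):
            groups[g].append(i)
            break
    else:
        reps.append(i)
        groups.append([i])

centroids = np.array([X[g].mean(axis=0) for g in groups])
print(f"{len(groups)} clusters recovered with {n_queries} queries")
print(np.round(centroids, 2))
```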
On the spectrum of directed uniform and non-uniform hypergraphs
Here, we suggest a method to represent general directed uniform and non-uniform hypergraphs by different connectivity tensors. We show that many results on spectral properties of undirected hypergraphs also hold for general directed uniform hypergraphs. Our representation of a connectivity tensor will be very useful for further developments in the spectral theory of directed hypergraphs. Finally, we also introduce the concept of a weak* irreducible hypermatrix to better explain the connectivity of a directed hypergraph.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Inference for partial correlation when data are missing not at random
We introduce uncertainty regions to perform inference on partial correlations when data are missing not at random. These uncertainty regions are shown to have a desired asymptotic coverage. Their finite sample performance is illustrated via simulations and a real data example.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Interstitial Content Detection
Interstitial content is online content which grays out, or otherwise obscures, the main page content. In this technical report, we discuss exploratory research into detecting the presence of interstitial content in web pages. We discuss the use of computer vision techniques to detect interstitials, and the potential use of these techniques to provide a labelled dataset for machine learning.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Perfect spike detection via time reversal
Spiking neuronal networks are usually simulated with three main simulation schemes: the classical time-driven and event-driven schemes, and the more recent hybrid scheme. All three schemes evolve the state of a neuron through a series of checkpoints: equally spaced in the first scheme and determined neuron-wise by spike events in the latter two. The time-driven and the hybrid scheme determine whether the membrane potential of a neuron crosses a threshold at the end of the time interval between consecutive checkpoints. Threshold crossing can, however, occur within the interval even if this test is negative. Spikes can therefore be missed. The present work derives, implements, and benchmarks a method for perfect retrospective spike detection. This method can be applied to neuron models with affine or linear subthreshold dynamics. The idea behind the method is to propagate the threshold with a time-inverted dynamics, testing whether the threshold crosses the neuron state to be evolved, rather than vice versa. Algebraically this translates into a set of inequalities necessary and sufficient for threshold crossing. This test is slower than the imperfect one, but faster than alternative perfect tests based on bisection or root-finding methods. Comparison confirms earlier results that the imperfect test rarely misses spikes (less than a fraction $1/10^8$ of missed spikes) in biologically relevant settings. This study offers an alternative geometric point of view on neuronal dynamics.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
A response to: "NIST experts urge caution in use of courtroom evidence presentation method"
A press release from the National Institute of Standards and Technology (NIST) could potentially impede progress toward improving the analysis of forensic evidence and the presentation of forensic analysis results in courts in the United States and around the world. "NIST experts urge caution in use of courtroom evidence presentation method" was released on October 12, 2017, and was picked up by the phys.org news service. It argues that, except in exceptional cases, the results of forensic analyses should not be reported as "likelihood ratios". The press release, and the journal article by NIST researchers Steven P. Lund & Harri Iyer on which it is based, identifies some legitimate points of concern, but makes a strawman argument and reaches an unjustified conclusion that throws the baby out with the bathwater.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Safe Model-based Reinforcement Learning with Stability Guarantees
Reinforcement learning is a powerful paradigm for learning optimal policies from experimental data. However, to find optimal policies, most reinforcement learning algorithms explore all possible actions, which may be harmful for real-world systems. As a consequence, learning algorithms are rarely applied on safety-critical systems in the real world. In this paper, we present a learning algorithm that explicitly considers safety, defined in terms of stability guarantees. Specifically, we extend control-theoretic results on Lyapunov stability verification and show how to use statistical models of the dynamics to obtain high-performance control policies with provable stability certificates. Moreover, under additional regularity assumptions in terms of a Gaussian process prior, we prove that one can effectively and safely collect data in order to learn about the dynamics and thus both improve control performance and expand the safe region of the state space. In our experiments, we show how the resulting algorithm can safely optimize a neural network policy on a simulated inverted pendulum, without the pendulum ever falling down.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Refining Trace Abstraction using Abstract Interpretation
The CEGAR loop in software model checking notoriously diverges when the abstraction refinement procedure does not derive a loop invariant. An abstraction refinement procedure based on an SMT solver is applied to a trace, i.e., a restricted form of a program (without loops). In this paper, we present a new abstraction refinement procedure that aims at circumventing this restriction whenever possible. We apply abstract interpretation to a program that we derive from the given trace. If the program contains a loop, we are guaranteed to obtain a loop invariant. We call an SMT solver only in the case where the abstract interpretation returns an indefinite answer. That is, the idea is to use abstract interpretation and an SMT solver in tandem. An experimental evaluation in the setting of trace abstraction indicates the practical potential of this idea.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A partial inverse problem for the Sturm-Liouville operator on the graph with a loop
The Sturm-Liouville operator with singular potentials on the lasso graph is considered. We suppose that the potential is known a priori on the boundary edge, and recover the potential on the loop from a part of the spectrum and some additional data. We prove the uniqueness theorem and provide a constructive algorithm for the solution of this partial inverse problem.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
BPjs --- a framework for modeling reactive systems using a scripting language and BP
We describe some progress towards a new common framework for model driven engineering, based on behavioral programming. The tool we have developed unifies almost all of the work done in behavioral programming so far, under a common set of interfaces. Its architecture supports pluggable event selection strategies, which can make models more intuitive and compact. Program state space can be traversed using various algorithms, such as DFS and A*. Furthermore, program state is represented in a way that enables scanning a state space using parallel and distributed algorithms. Executable models created with this tool can be directly embedded in Java applications, enabling a model-first approach to system engineering, where initially a model is created and verified, and then a working application is gradually built around the model. The model itself consists of a collection of small scripts written in JavaScript (hence "BPjs"). Using a variety of case-studies, this paper shows how the combination of a lenient programming language with formal model analysis tools creates an efficient way of developing robust complex systems. Additionally, as we learned from an experimental course we ran, the usage of JavaScript makes practitioners more amenable to using this system and, thus, to model checking and model driven engineering. In addition to providing infrastructure for development and case-studies in behavioral programming, the tool is designed to serve as a common platform for research and innovation in behavioral programming and in model driven engineering in general.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Design of Quantum Circuits for Galois Field Squaring and Exponentiation
This work presents an algorithm to generate depth-, quantum gate- and qubit-optimized circuits for $GF(2^m)$ squaring in the polynomial basis. Further, to the best of our knowledge, the proposed quantum squaring circuit algorithm is the only work that considers depth as a metric to be optimized. We compared circuits generated by our proposed algorithm against the state of the art and determined that they require $50\%$ fewer qubits and offer gate savings that range from $37\%$ to $68\%$. Further, existing quantum exponentiation circuits are based on either modular or integer arithmetic. However, Galois arithmetic is a useful tool for designing resource-efficient quantum exponentiation circuits applicable in quantum cryptanalysis. Therefore, we present the quantum circuit implementation of Galois field exponentiation based on the proposed quantum Galois field squaring circuit. We calculated qubit savings ranging between $44\%$ and $50\%$ and quantum gate savings ranging between $37\%$ and $68\%$ compared to an identical quantum exponentiation circuit based on existing squaring circuits.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
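The quantum circuits themselves are not reproduced here; as a classical reference for the arithmetic they implement, this Python sketch squares an element of GF(2^m) in the polynomial basis (squaring over GF(2) interleaves zero bits, then reduces modulo the field polynomial). The choice of GF(2^8) with the AES polynomial is an assumption for the example.

```python
def gf2m_square(x, m, poly):
    """Square x in GF(2^m), polynomial basis. Squaring over GF(2) is linear:
    bit i of x moves to bit 2i (cross terms vanish in characteristic 2),
    then the result is reduced modulo the field polynomial."""
    sq = 0
    for i in range(m):
        if (x >> i) & 1:
            sq |= 1 << (2 * i)
    # Reduce the degree-(2m-2) result modulo poly, top bit down.
    for i in range(2 * m - 2, m - 1, -1):
        if (sq >> i) & 1:
            sq ^= poly << (i - m)
    return sq

def gf2m_mul(a, b, m, poly):
    """Reference carry-less multiply with interleaved reduction."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= poly
    return r

# Example in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1 (an assumption).
m, poly = 8, 0b100011011
x = 0b01010111
assert gf2m_square(x, m, poly) == gf2m_mul(x, x, m, poly)
print(bin(gf2m_square(x, m, poly)))
```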
Nearest-Neighbor Sample Compression: Efficiency, Consistency, Infinite Dimensions
We examine the Bayes-consistency of a recently proposed 1-nearest-neighbor-based multiclass learning algorithm. This algorithm is derived from sample compression bounds and enjoys the statistical advantages of tight, fully empirical generalization bounds, as well as the algorithmic advantages of a faster runtime and memory savings. We prove that this algorithm is strongly Bayes-consistent in metric spaces with finite doubling dimension --- the first consistency result for an efficient nearest-neighbor sample compression scheme. Rather surprisingly, we discover that this algorithm continues to be Bayes-consistent even in a certain infinite-dimensional setting, in which the basic measure-theoretic conditions on which classic consistency proofs hinge are violated. This is all the more surprising, since it is known that $k$-NN is not Bayes-consistent in this setting. We pose several challenging open problems for future research.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
AdaGrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization
Adaptive gradient methods such as AdaGrad and its variants update the stepsize in stochastic gradient descent on the fly according to the gradients received along the way; such methods have gained widespread use in large-scale optimization for their ability to converge robustly, without the need to fine-tune parameters such as the stepsize schedule. Yet, the theoretical guarantees to date for AdaGrad are for online and convex optimization, which is quite different from the offline and nonconvex setting where adaptive gradient methods shine in practice. We bridge this gap by providing strong theoretical guarantees in the batch and stochastic settings for the convergence of AdaGrad over smooth, nonconvex landscapes, from any initialization of the stepsize, without knowledge of the Lipschitz constant of the gradient. We show in the stochastic setting that AdaGrad converges to a stationary point at the optimal $O(1/\sqrt{N})$ rate (up to a $\log(N)$ factor), and in the batch setting, at the optimal $O(1/N)$ rate. Moreover, in both settings, the constant in the rate matches the constant obtained as if the variance of the gradient noise and the Lipschitz constant of the gradient were known in advance and used to tune the stepsize, up to a logarithmic factor of the mismatch between the optimal stepsize and the stepsize used to initialize AdaGrad. In particular, our results imply that AdaGrad is robust to both the unknown Lipschitz constant and the level of stochastic noise on the gradient, in a near-optimal sense. When there is noise, AdaGrad converges at the rate of $O(1/\sqrt{N})$ with a well-tuned stepsize, and when there is no noise, the same algorithm converges at the rate of $O(1/N)$ like well-tuned batch gradient descent.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
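A minimal sketch of the scalar ("norm") variant of AdaGrad discussed in this line of work, run on an assumed quadratic objective. The base stepsize eta and initialization b0 are arbitrary, echoing the abstract's point that convergence does not require tuning them to the Lipschitz constant.

```python
import numpy as np

def adagrad_norm(grad, x0, eta=1.0, b0=0.1, n_iters=500):
    """AdaGrad-Norm: a single adaptive stepsize eta / b_k, where b_k^2
    accumulates squared gradient norms received along the way."""
    x = np.asarray(x0, dtype=float)
    b2 = b0 ** 2
    for _ in range(n_iters):
        g = grad(x)
        b2 += np.dot(g, g)
        x = x - (eta / np.sqrt(b2)) * g
    return x

# Illustrative smooth objective: f(x) = 0.5 * x^T A x (an assumption).
rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
A = Q.T @ Q + np.eye(5)
grad = lambda x: A @ x
x = adagrad_norm(grad, x0=np.ones(5))
print("final gradient norm:", np.linalg.norm(grad(x)))
```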
On polynomially integrable convex bodies
An infinitely smooth convex body in $\mathbb R^n$ is called polynomially integrable of degree $N$ if its parallel section functions are polynomials of degree $N$. We prove that the only smooth convex bodies with this property in odd dimensions are ellipsoids, if $N\ge n-1$. This is in contrast with the case of even dimensions and the case of odd dimensions with $N<n-1$, where such bodies do not exist, as it was recently shown by Agranovsky.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Percentile Policies for Tracking of Markovian Random Processes with Asymmetric Cost and Observation
Motivated by wide-ranging applications such as video delivery over networks using Multiple Description Codes, congestion control, and inventory management, we study the state-tracking of a Markovian random process with a known transition matrix and a finite ordered state set. The decision-maker must select a state as an action at each time step to minimize the total expected cost. The decision-maker is faced with asymmetries both in cost and observation: in case the selected state is less than the actual state of the Markovian process, an under-utilization cost occurs and only partial observation about the actual state is revealed; otherwise, the decision incurs an over-utilization cost and reveals full information about the actual state. We can formulate this problem as a Partially Observable Markov Decision Process which can be expressed as a dynamic program based on the last full observed state and the time of full observation. This formulation determines the sequence of actions to be taken between any two consecutive full observations of the actual state. However, this DP grows exponentially in the number of states, with little hope for a computationally feasible solution. We present an interesting class of computationally tractable policies with a percentile structure. A generalization of binary search, this class of policies attempts, at any given time, to reduce the uncertainty by a given percentage. Among all percentile policies, we search for the one with the minimum expected cost. The result of this search is a heuristic policy which we evaluate through numerical simulations. We show that it outperforms the myopic policies and under some conditions performs close to the optimal policies. Furthermore, we derive a lower bound on the cost of the optimal policy which can be computed with low complexity and give a measure for how close our heuristic policy is to the optimal policy.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Information Transfer Based Characterization of Power System Stability
In this paper, we present a novel approach to identify the generators and states responsible for the small-signal stability of power networks. To this end, the newly developed notion of information transfer between the states of a dynamical system is used. In particular, using the concept of information transfer, which characterizes influence between the various states and a linear combination of states of a dynamical system, we identify the generators and states which are responsible for causing instability of the power network. While characterizing influence from state to state, information transfer can also describe influence from state to modes, thereby generalizing the well-known notion of participation factor while at the same time overcoming some of the limitations of the participation factor. The developed framework is applied to study a three-bus system, identifying various causes of instability in the system. The simulation study is then extended to the IEEE 39-bus system.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Using Session Types for Reasoning About Boundedness in the Pi-Calculus
The classes of depth-bounded and name-bounded processes are fragments of the pi-calculus for which some of the decision problems that are undecidable for the full calculus become decidable. P is depth-bounded at level k if every reduction sequence for P contains successor processes with at most k active nested restrictions. P is name-bounded at level k if every reduction sequence for P contains successor processes with at most k active bound names. Membership of these classes of processes is undecidable. In this paper we use binary session types to devise two type systems that give a sound characterization of the properties: if a process is well-typed in our first system, it is depth-bounded; if a process is well-typed in our second, more restrictive type system, it will also be name-bounded.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Combinatorial Secretary Problems with Ordinal Information
The secretary problem is a classic model for online decision making. Recently, combinatorial extensions such as matroid or matching secretary problems have become an important tool to study algorithmic problems in dynamic markets. Here the decision maker must know the numerical value of each arriving element, which can be a demanding informational assumption. In this paper, we initiate the study of combinatorial secretary problems with ordinal information, in which the decision maker only needs to be aware of a preference order consistent with the values of arrived elements. The goal is to design online algorithms with small competitive ratios. For a variety of combinatorial problems, such as bipartite matching, general packing LPs, and independent set with bounded local independence number, we design new algorithms that obtain constant competitive ratios. For the matroid secretary problem, we observe that many existing algorithms for special matroid structures maintain their competitive ratios even in the ordinal model. In these cases, the restriction to ordinal information does not represent any additional obstacle. Moreover, we show that ordinal variants of the submodular matroid secretary problems can be solved using algorithms for the linear versions by extending [Feldman and Zenklusen, 2015]. In contrast, we provide a lower bound of $\Omega(\sqrt{n}/(\log n))$ for algorithms that are oblivious to the matroid structure, where $n$ is the total number of elements. This contrasts with an upper bound of $O(\log n)$ in the cardinal model, and it shows that the technique of thresholding is not sufficient for good algorithms in the ordinal model.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
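For context on the ordinal theme of the abstract above, here is the classical single-choice secretary rule, which already uses only ordinal information (whether the current element beats everything seen so far). The simulation size and value range are assumptions; the paper's combinatorial variants generalize well beyond this single-selection setting.

```python
import math
import random

def secretary(values, observe_fraction=1 / math.e):
    """Classical rule: observe the first ~n/e elements without accepting,
    then accept the first element better than everything seen so far.
    Only ordinal comparisons are used."""
    n = len(values)
    cutoff = max(1, int(n * observe_fraction))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # forced to take the last element

random.seed(0)
n, trials, wins = 100, 10000, 0
for _ in range(trials):
    vals = random.sample(range(10 * n), n)  # distinct values, random order
    if secretary(vals) == max(vals):
        wins += 1
print("P(select the best) ~", wins / trials)  # close to 1/e ~ 0.368
```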
Modeling the Formation of Social Conventions in Multi-Agent Populations
In order to understand the formation of social conventions we need to know the specific role of control and learning in multi-agent systems. To advance in this direction, we propose, within the framework of the Distributed Adaptive Control (DAC) theory, a novel Control-based Reinforcement Learning architecture (CRL) that can account for the acquisition of social conventions in multi-agent populations that are solving a benchmark social decision-making problem. Our new CRL architecture, as a concrete realization of DAC multi-agent theory, implements a low-level sensorimotor control loop handling the agent's reactive behaviors (pre-wired reflexes), along with a layer based on model-free reinforcement learning that maximizes long-term reward. We apply CRL in a multi-agent game-theoretic task in which coordination must be achieved in order to find an optimal solution. We show that our CRL architecture is able to both find optimal solutions in discrete and continuous time and reproduce human experimental data on standard game-theoretic metrics such as efficiency in acquiring rewards, fairness in reward distribution and stability of convention formation.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
A class of C*-algebraic locally compact quantum groupoids Part I. Motivation and definition
In this series of papers, we develop the theory of a class of locally compact quantum groupoids, which is motivated by the purely algebraic notion of weak multiplier Hopf algebras. In this Part I, we provide motivation and formulate the definition in the C*-algebra framework. Existence of a certain canonical idempotent element is required and it plays a fundamental role, including the establishment of the coassociativity of the comultiplication. This class contains locally compact quantum groups as a subclass.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Symmetry-enforced quantum spin Hall insulators in $π$-flux models
We prove a Lieb-Schultz-Mattis theorem for the quantum spin Hall effect (QSHE) in two-dimensional $\pi$-flux models. In the presence of time reversal, $U(1)$ charge conservation and magnetic translation (with $\pi$-flux per unit cell) symmetries, if a generic interacting Hamiltonian has a unique gapped symmetric ground state at half filling (i.e. an odd number of electrons per unit cell), it can only be a QSH insulator. In other words, a trivial Mott insulator is forbidden by symmetries at half filling. We further show that such a symmetry-enforced QSHE can be realized in cold atoms, by shaking an optical lattice and applying a time-dependent Zeeman field.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Stabilized microwave-frequency transfer using optical phase sensing and actuation
We present a stabilized microwave-frequency transfer technique that is based on optical phase-sensing and optical phase-actuation. This technique shares several attributes with optical-frequency transfer and therefore exhibits several advantages over other microwave-frequency transfer techniques. We demonstrated stabilized transfer of an 8,000 MHz microwave-frequency signal over a 166 km metropolitan optical fiber network, achieving a fractional frequency stability of 6.8x10^-14 Hz/Hz at 1 s integration, and 5.0x10^-16 Hz/Hz at 1.6x10^4 s. This technique is being considered for use on the Square Kilometre Array SKA1-mid radio telescope.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Spectral sets for numerical range
We define and study a numerical-range analogue of the notion of spectral set. Among the results obtained are a positivity criterion and a dilation theorem, analogous to those already known for spectral sets. An important difference from the classical definition is the role played in the new definition by the base point. We present some examples to illustrate this aspect.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Learning of Gaussian Processes in Distributed and Communication Limited Systems
It is of fundamental importance to find algorithms that obtain optimal performance for learning statistical models in distributed and communication-limited systems. Aiming at characterizing the optimal strategies, we consider learning of Gaussian Processes (GPs) in distributed systems as a pivotal example. We first address a very basic problem: how many bits are required to estimate the inner-products of Gaussian vectors across distributed machines? Using information theoretic bounds, we obtain an optimal solution for the problem which is based on vector quantization. Two suboptimal and more practical schemes are also presented as substitutes for the vector quantization scheme. In particular, it is shown that the performance of one of the practical schemes, called per-symbol quantization, is very close to the optimal one. Schemes provided for the inner-product calculations are incorporated into our proposed distributed learning methods for GPs. Experimental results show that, by spending a few bits per symbol in our communication scheme, our proposed methods outperform previous zero-rate distributed GP learning schemes such as the Bayesian Committee Model (BCM) and Product of Experts (PoE).
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
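A sketch of the per-symbol quantization idea highlighted in the abstract above: each coordinate of a vector held on one machine is quantized to a few bits before the inner product is formed. The uniform quantizer, bit budget, and clipping range are assumptions, not the paper's information-theoretically derived scheme.

```python
import numpy as np

def quantize_per_symbol(v, bits=4, clip=3.0):
    """Uniform per-symbol quantizer: clip each coordinate to [-clip, clip]
    and round it to one of 2^bits levels."""
    levels = 2 ** bits
    step = 2 * clip / (levels - 1)
    return np.round(np.clip(v, -clip, clip) / step) * step

rng = np.random.default_rng(0)
d = 1000
x, y = rng.standard_normal(d), rng.standard_normal(d)  # Gaussian vectors

exact = x @ y
# In a two-machine setting, only the transmitted side needs quantizing.
approx = quantize_per_symbol(x) @ y
print(f"exact={exact:.2f}  per-symbol 4-bit={approx:.2f}  "
      f"rel.err={abs(approx - exact) / abs(exact):.3%}")
```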
Deep Learning: A Critical Appraisal
Although deep learning has historical roots going back decades, neither the term "deep learning" nor the approach was popular just over five years ago, when the field was reignited by papers such as Krizhevsky, Sutskever and Hinton's now classic (2012) deep network model of Imagenet. What has the field discovered in the five subsequent years? Against a background of considerable progress in areas such as speech recognition, image recognition, and game playing, and considerable enthusiasm in the popular press, I present ten concerns for deep learning, and suggest that deep learning must be supplemented by other techniques if we are to reach artificial general intelligence.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Semi-Supervised Recurrent Neural Network for Adverse Drug Reaction Mention Extraction
Social media is a useful platform to share health-related information due to its vast reach. This makes it a good candidate for public-health monitoring tasks, specifically for pharmacovigilance. We study the problem of extracting Adverse-Drug-Reaction (ADR) mentions from social media, particularly from Twitter. Medical information extraction from social media is challenging, mainly due to the short and highly informal nature of the text, as compared to more technical and formal medical reports. Current methods for ADR mention extraction rely on supervised learning, which suffers from the labeled-data scarcity problem. The state-of-the-art method uses deep neural networks, specifically Long Short-Term Memory networks (LSTMs) \cite{hochreiter1997long}, a class of Recurrent Neural Network (RNN). Deep neural networks, due to their large number of free parameters, rely heavily on large annotated corpora for learning the end task. But in the real world, it is hard to obtain large labeled data, mainly due to the heavy cost associated with manual annotation. Towards this end, we propose a novel semi-supervised learning based RNN model, which can leverage unlabeled data, also present in abundance on social media. Through experiments we demonstrate the effectiveness of our method, achieving state-of-the-art performance in ADR mention extraction.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Deep Neural Architecture for Sentence-level Sentiment Classification in Twitter Social Networking
This paper introduces a novel deep learning framework including a lexicon-based approach for sentence-level prediction of sentiment label distribution. We propose to first apply semantic rules and then use a Deep Convolutional Neural Network (DeepCNN) for character-level embeddings in order to increase information for word-level embedding. After that, a Bidirectional Long Short-Term Memory Network (Bi-LSTM) produces a sentence-wide feature representation from the word-level embedding. We evaluate our approach on three Twitter sentiment classification datasets. Experimental results show that our model can improve the classification accuracy of sentence-level sentiment analysis in Twitter social networking.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Tree-based Approach for Detecting Redundant Business Rules in very Large Financial Datasets
Net Asset Value (NAV) calculation and validation is the principal task of a fund administrator. If the NAV of a fund is calculated incorrectly then there is a huge impact on the fund administrator, such as monetary compensation, reputational loss, or loss of business. In general, these companies use the same methodology to calculate the NAV of a fund; however, the type of fund in question dictates the set of business rules used to validate it. Today, most fund administrators depend heavily on human resources due to the lack of automated standardized solutions; however, due to the economic climate and the need for efficiency and cost reduction, many banks are now looking for an automated solution with minimal human interaction, i.e., straight-through processing (STP). Within the scope of a collaboration project that focuses on building an optimal solution for NAV validation, in this paper we present a new approach for detecting correlated business rules. We also show how we evaluate this approach using real-world financial data.
1
0
0
0
0
0
Dynamics of the nonlinear Klein-Gordon equation in the nonrelativistic limit, I
The nonlinear Klein-Gordon (NLKG) equation on a manifold $M$ in the nonrelativistic limit, namely as the speed of light $c$ tends to infinity, is considered. In particular, a higher-order normalized approximation of NLKG (which corresponds to the NLS at order $r=1$) is constructed, and when $M$ is a smooth compact manifold or $\mathbb{R}^d$ it is proved that the solution of the approximating equation approximates the solution of the NLKG locally uniformly in time. When $M=\mathbb{R}^d$, $d \geq 3$, it is proved that solutions of the linearized order $r$ normalized equation approximate solutions of linear Klein-Gordon equation up to times of order $\mathcal{O}(c^{2(r-1)})$ for any $r>1$.
0
0
1
0
0
0
Surface plasmons in superintense laser-solid interactions
We review studies of superintense laser interaction with solid targets where the generation of propagating surface plasmons (or surface waves) plays a key role. These studies include the onset of plasma instabilities at the irradiated surface, the enhancement of secondary emissions (protons, electrons, and photons as high harmonics in the XUV range) in femtosecond interactions with grating targets, and the generation of unipolar current pulses with picosecond duration. The experimental results give evidence of the existence of surface plasmons in the nonlinear regime of relativistic electron dynamics. These findings open up a route to the improvement of ultrashort laser-driven sources of energetic radiation and, more generally, to the extension of plasmonics into the high-field regime.
0
1
0
0
0
0
Jacquard: A Large Scale Dataset for Robotic Grasp Detection
Grasping skill is a major ability that a wide range of real-life applications require for robotisation. State-of-the-art robotic grasping methods predict object grasp locations with deep neural networks. However, such networks require a huge amount of labeled data for training, which often makes this approach impracticable in robotics. In this paper, we propose a method to generate a large-scale synthetic dataset with ground truth, which we refer to as the Jacquard grasping dataset. Jacquard is built on a subset of ShapeNet, a large CAD model dataset, and contains both RGB-D images and annotations of successful grasping positions based on grasp attempts performed in a simulated environment. We carried out experiments using an off-the-shelf CNN, with three different evaluation metrics, including real grasping robot trials. The results show that Jacquard enables much better generalization skills than a human-labeled dataset, thanks to its diversity of objects and grasping positions. For the purpose of reproducible research in robotics, we are releasing, along with the Jacquard dataset, a web interface for researchers to evaluate the success of their grasping position detections using our dataset.
1
0
0
0
0
0
Hochschild cohomology of some quantum complete intersections
We compute the Hochschild cohomology ring of the algebras $A= k\langle X, Y\rangle/ (X^a, XY-qYX, Y^a)$ over a field $k$ where $a\geq 2$ and where $q\in k$ is a primitive $a$-th root of unity. We find the dimension of $\mathrm{HH}^n(A)$ and show that it is independent of $a$. We compute explicitly the ring structure of the even part of the Hochschild cohomology modulo homogeneous nilpotent elements.
0
0
1
0
0
0
Stock Market Visualization
We provide complete source code for a front-end GUI and its back-end counterpart for a stock market visualization tool. It is built based on the "functional visualization" concept we discuss, whereby functionality is not sacrificed for fancy graphics. The GUI, among other things, displays a color-coded signal (computed by the back-end code) based on how "out-of-whack" each stock is trading compared with its peers ("mean-reversion"), and the most sizable changes in the signal ("momentum"). The GUI also allows users to efficiently filter/tier stocks by various parameters (e.g., sector, exchange, signal, liquidity, market cap) and functionally display them. The tool can be run as a web-based or local application.
0
0
0
0
0
1
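The precise signal is defined by the released source code rather than the abstract; the sketch below gives one generic reading of the two ingredients named there: a cross-sectional z-score against sector peers (the "mean-reversion" signal) and its recent change (the "momentum" signal). The lookback, the peer grouping and all tickers are illustrative.

```python
import numpy as np
import pandas as pd

def signals(returns: pd.DataFrame, sectors: pd.Series, lookback: int = 5):
    """returns: dates x tickers daily returns; sectors: ticker -> sector.

    Mean-reversion: z-score of each stock's return against its sector
    peers, flagging stocks trading "out-of-whack" with their peer group.
    Momentum: the most recent change of that signal over `lookback` days."""
    peer_mean = returns.T.groupby(sectors).transform("mean").T
    peer_std = returns.T.groupby(sectors).transform("std").T
    zscore = (returns - peer_mean) / peer_std
    momentum = zscore.diff(lookback)
    return zscore.iloc[-1], momentum.iloc[-1]

rng = np.random.default_rng(0)
rets = pd.DataFrame(rng.normal(0, 0.02, (60, 4)),
                    columns=["AAA", "BBB", "CCC", "DDD"])
secs = pd.Series({"AAA": "tech", "BBB": "tech",
                  "CCC": "energy", "DDD": "energy"})
z, mom = signals(rets, secs)
print(z.round(2))
print(mom.round(2))
```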
Sharp estimates for solutions of mean field equation with collapsing singularity
The pioneering work of Brezis-Merle [7], Li-Shafrir [27], Li [26] and Bartolucci-Tarantello [4] showed that any sequence of blow up solutions for (singular) mean field equations of Liouville type must exhibit a "mass concentration" property. A typical situation of blow-up occurs when we let the singular (vortex) points involved in the equation (see (1.1) below) collapse together. However in this case Lin-Tarantello in [30] pointed out that the phenomenon: "bubbling implies mass concentration" might not occur and new scenarios open for investigation. In this paper, we present two explicit examples which illustrate (with mathematical rigor) how a "non-concentration" situation does happen and its new features. Among other facts, we show that in certain situations, the collapsing rate of the singularities can be used as blow up parameter to describe the bubbling properties of the solution-sequence. In this way we are able to establish accurate estimates around the blow-up points which we hope to use towards a degree counting formula for the shadow system (1.34) below.
0
0
1
0
0
0
Probabilistic Forwarding of Coded Packets on Networks
We consider a scenario of broadcasting information over a network of nodes connected by noiseless communication links. A source node in the network has $k$ data packets to broadcast, and it suffices that a large fraction of the network nodes receives the broadcast. The source encodes the $k$ data packets into $n \ge k$ coded packets using a maximum distance separable (MDS) code, and transmits them to its one-hop neighbours. Every other node in the network follows a probabilistic forwarding protocol, in which it forwards a previously unreceived packet to all its neighbours with a certain probability $p$. A "near-broadcast" occurs when the expected fraction of nodes that receive at least $k$ of the $n$ coded packets is close to $1$. The forwarding probability $p$ is chosen so as to minimize the expected total number of transmissions needed for a near-broadcast. In this paper, we analyze the probabilistic forwarding of coded packets on two specific network topologies: binary trees and square grids. For trees, our analysis shows that for fixed $k$, the expected total number of transmissions increases with $n$. On the other hand, on grids, we use ideas from percolation theory to show that a judicious choice of $n$ will significantly reduce the expected total number of transmissions needed for a near-broadcast.
1
0
0
0
0
0
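A small simulation sketch of the protocol on the square grid (the paper's grid analysis is via percolation theory, not simulation; grid size, parameters and the flooding loop below are our illustrative choices):

```python
import random
from collections import deque

def simulate(m=30, n_pkts=12, k=10, p=0.7, seed=1):
    """Probabilistic forwarding of n MDS-coded packets on an m x m grid.

    Each packet floods independently: the source always transmits to its
    neighbours; any other node, on first receiving the packet, relays it
    to its 4 neighbours with probability p. Returns the fraction of nodes
    receiving at least k of the n packets (the near-broadcast criterion)."""
    rng = random.Random(seed)
    src = (m // 2, m // 2)
    received = {src: n_pkts}            # node -> number of distinct packets
    for _ in range(n_pkts):
        seen = {src}
        frontier = deque([src])         # nodes that relay this packet
        while frontier:
            x, y = frontier.popleft()
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nb[0] < m and 0 <= nb[1] < m and nb not in seen:
                    seen.add(nb)
                    received[nb] = received.get(nb, 0) + 1
                    if rng.random() < p:          # forward with probability p
                        frontier.append(nb)
    return sum(c >= k for c in received.values()) / (m * m)

print(simulate())    # fraction of nodes able to decode all k data packets
```

Varying p in such a simulation illustrates the trade-off the paper analyzes: on grids, a judicious choice of n lets p, and hence the number of transmissions, drop sharply before the near-broadcast property is lost.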
Quantifiers on languages and codensity monads
This paper contributes to the techniques of topo-algebraic recognition for languages beyond the regular setting as they relate to logic on words. In particular, we provide a general construction on recognisers corresponding to adding one layer of various kinds of quantifiers and prove a related Reutenauer-type theorem. Our main tools are codensity monads and duality theory. Our construction hinges, in particular, on a measure-theoretic characterisation of the profinite monad of the free S-semimodule monad for finite and commutative semirings S, which generalises our earlier insight that the Vietoris monad on Boolean spaces is the codensity monad of the finite powerset functor.
1
0
1
0
0
0
Conservative Exploration using Interleaving
In many practical problems, a learning agent may want to learn the best action in hindsight without ever taking a bad action, which is significantly worse than the default production action. In general, this is impossible because the agent has to explore unknown actions, some of which can be bad, to learn better actions. However, when the actions are combinatorial, this may be possible if the unknown action can be evaluated by interleaving it with the production action. We formalize this concept as learning in stochastic combinatorial semi-bandits with exchangeable actions. We design efficient learning algorithms for this problem, bound their n-step regret, and evaluate them on both synthetic and real-world problems. Our real-world experiments show that our algorithms can learn to recommend K most attractive movies without ever violating a strict production constraint, both overall and subject to a diversity constraint.
0
0
0
1
0
0
Power Allocation for Full-Duplex Relay Selection in Underlay Cognitive Radio Networks: Coherent versus Non-Coherent Scenarios
This paper investigates power control and relay selection in Full Duplex Cognitive Relay Networks (FDCRNs), where the secondary-user (SU) relays can simultaneously receive data from the SU source and forward them to the SU destination. We study both non-coherent and coherent scenarios. In the non-coherent case, the SU relay forwards the signal from the SU source without regulating the phase; while in the coherent scenario, the SU relay regulates the phase when forwarding the signal to minimize the interference at the primary-user (PU) receiver. We consider the problem of maximizing the transmission rate from the SU source to the SU destination subject to the interference constraint at the PU receiver and power constraints at both the SU source and SU relay. We then develop a mathematical model to analyze the data rate performance of the FDCRN considering the self-interference effects at the FD relay. We develop low-complexity and high-performance joint power control and relay selection algorithms. Extensive numerical results are presented to illustrate the impacts of power level parameters and the self-interference cancellation quality on the rate performance. Moreover, we demonstrate the significant gain of phase regulation at the SU relay.
1
0
1
1
0
0
Carina: Interactive Million-Node Graph Visualization using Web Browser Technologies
We are working on a scalable, interactive visualization system, called Carina, for people to explore million-node graphs. By using the latest web browser technologies, Carina offers fast graph rendering via WebGL, and works across desktop (via Electron) and mobile platforms. Unlike most existing graph visualization tools, Carina does not store the full graph in RAM, enabling it to work with graphs of up to 69M edges. We are working to improve and open-source Carina, to offer researchers and practitioners a new, scalable way to explore and visualize large graph datasets.
1
0
0
0
0
0
Domination between different products and finiteness of associated semi-norms
In this note we determine all possible dominations between different products of manifolds, when none of the factors of the codomain is dominated by products. As a consequence, we determine the finiteness of every product-associated functorial semi-norm on the fundamental classes of the aforementioned products. These results give partial answers to questions of M. Gromov.
0
0
1
0
0
0
Discriminant of the ordinary transversal singularity type. The local aspects
Consider a space X whose singular locus Z = Sing(X) has positive dimension. Suppose both Z and X are locally complete intersections. The transversal type of X along Z is generically constant, but at some points of Z it degenerates. We introduce (under certain conditions) the discriminant of the transversal type, a subscheme of Z that reflects these degenerations whenever the generic transversal type is `ordinary'. The scheme structure of this discriminant is imposed by various compatibility properties and is often non-reduced. We establish the basic properties of this discriminant: it is a Cartier divisor in Z, functorial under base change, flat under some deformations of (X,Z), compatible with pullback under some morphisms, etc. Furthermore, we study the local geometry of this discriminant: e.g., we compute its multiplicity at a point, obtain the resolution of its structure sheaf (as a module on Z), and study its local defining equation.
0
0
1
0
0
0
Possible spin excitation structure in monolayer FeSe grown on SrTiO$_{3}$
Based on recent high-resolution angle-resolved photoemission spectroscopy measurement in monolayer FeSe grown on SrTiO$_{3}$, we constructed a tight-binding model and proposed a superconducting (SC) pairing function which can well fit the observed band structure and SC gap anisotropy. Then we investigated the spin excitation spectra in order to determine the possible sign structure of the SC order parameter. We found that a resonance-like spin excitation may occur if the SC order parameter changes sign along the Fermi surfaces. However, this resonance is located at different locations in momentum space compared to other FeSe-based superconductors, suggesting that the Fermi surface shape and pairing symmetry in monolayer FeSe grown on SrTiO$_{3}$ may be different from other FeSe-based superconductors.
0
1
0
0
0
0
Transient behavior of the solutions to the second order difference equations by the renormalization method based on Newton-Maclaurin expansion
The renormalization method based on the Newton-Maclaurin expansion is applied to study the transient behavior of solutions to difference equations as they tend to their steady states. The key, and also natural, step is to make the renormalization equations continuous, so that elementary functions can be used to describe the transient behavior of solutions to difference equations. As concrete examples, we deal with important second-order nonlinear difference equations with a small parameter. The results show that this method is more natural than the multi-scale method.
0
1
1
0
0
0
Bias in Bios: A Case Study of Semantic Representation Bias in a High-Stakes Setting
We present a large-scale study of gender bias in occupation classification, a task where the use of machine learning may lead to negative outcomes on peoples' lives. We analyze the potential allocation harms that can result from semantic representation bias. To do so, we study the impact on occupation classification of including explicit gender indicators---such as first names and pronouns---in different semantic representations of online biographies. Additionally, we quantify the bias that remains when these indicators are "scrubbed," and describe proxy behavior that occurs in the absence of explicit gender indicators. As we demonstrate, differences in true positive rates between genders are correlated with existing gender imbalances in occupations, which may compound these imbalances.
1
0
0
1
0
0
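A minimal sketch of the kind of statistic the abstract above studies: the per-occupation gap in true positive rates between genders for some classifier's predictions. The toy data, the biased predictor and all names below are made up for illustration; they are not the paper's data or model.

```python
import numpy as np

def tpr_gender_gaps(y_true, y_pred, gender, occupations):
    """Per-occupation TPR gap, TPR_female - TPR_male.

    A gap far from 0 means the classifier recognizes true members of the
    occupation at different rates for the two genders, the kind of
    allocation harm the paper quantifies."""
    gaps = {}
    for occ in occupations:
        tpr = []
        for g in ("F", "M"):
            mask = (y_true == occ) & (gender == g)
            tpr.append((y_pred[mask] == occ).mean() if mask.any() else np.nan)
        gaps[occ] = tpr[0] - tpr[1]
    return gaps

rng = np.random.default_rng(0)
occs = np.array(["nurse", "surgeon"])
y = rng.choice(occs, 1000)
g = rng.choice(["F", "M"], 1000)
# Toy biased predictor: more accurate when occupation matches a stereotype.
acc = np.where((y == "nurse") == (g == "F"), 0.9, 0.7)
flip = occs[(y == occs[0]).astype(int)]            # the wrong label per row
yhat = np.where(rng.random(1000) < acc, y, flip)
print(tpr_gender_gaps(y, yhat, g, occs))           # e.g. nurse gap > 0
```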
SphereFace: Deep Hypersphere Embedding for Face Recognition
This paper addresses the deep face recognition (FR) problem under the open-set protocol, where ideal face features are expected to have smaller maximal intra-class distance than minimal inter-class distance under a suitably chosen metric space. However, few existing algorithms can effectively achieve this criterion. To this end, we propose the angular softmax (A-Softmax) loss, which enables convolutional neural networks (CNNs) to learn angularly discriminative features. Geometrically, A-Softmax loss can be viewed as imposing discriminative constraints on a hypersphere manifold, which intrinsically matches the prior that faces also lie on a manifold. Moreover, the size of the angular margin can be quantitatively adjusted by a parameter $m$. We further derive specific $m$ to approximate the ideal feature criterion. Extensive analysis and experiments on Labeled Faces in the Wild (LFW), YouTube Faces (YTF) and the MegaFace Challenge show the superiority of A-Softmax loss in FR tasks. The code has also been made publicly available.
1
0
0
0
0
0
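For orientation, the A-Softmax loss named in the abstract above has the following published form (reproduced from the SphereFace paper rather than derivable from the abstract alone; $\theta_{j,i}$ denotes the angle between feature $x_i$ and the normalized weight vector of class $j$):

\[
L = \frac{1}{N}\sum_{i} -\log \frac{e^{\|x_i\| \psi(\theta_{y_i,i})}}{e^{\|x_i\| \psi(\theta_{y_i,i})} + \sum_{j \neq y_i} e^{\|x_i\| \cos(\theta_{j,i})}},
\qquad
\psi(\theta) = (-1)^k \cos(m\theta) - 2k, \quad \theta \in \left[\tfrac{k\pi}{m}, \tfrac{(k+1)\pi}{m}\right], \; k \in \{0, \ldots, m-1\}.
\]

The margin parameter $m$ multiplies the angle to the true class, so the decision boundary on the hypersphere becomes more stringent as $m$ grows.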
Speed-of-light pulses in the massless nonlinear Dirac equation with a potential
We consider the massless nonlinear Dirac (NLD) equation in $1+1$ dimension with scalar-scalar self-interaction $\frac{g^2}{2} (\bar{\Psi} \Psi)^2$ in the presence of three external electromagnetic potentials $V(x)$, a potential barrier, a constant potential, and a potential well. By solving numerically the NLD equation, we find that, for all three cases, after a short transit time, the initial pulse breaks into two pulses which are solutions of the massless linear Dirac equation traveling in opposite directions with the speed of light. During this splitting the charge and the energy are conserved, whereas the momentum is conserved when the solutions possess specific symmetries. For the case of the constant potential, we derive exact analytical solutions of the massless NLD equation that are also solutions of the massless linearized Dirac equation.
0
1
0
0
0
0
A Hybrid Feasibility Constraints-Guided Search to the Two-Dimensional Bin Packing Problem with Due Dates
The two-dimensional non-oriented bin packing problem with due dates packs a set of rectangular items, which may be rotated by 90 degrees, into identical rectangular bins. The bins have equal processing times. An item's lateness is the difference between the completion time of its bin and its due date. The problem packs all items without overlap so as to minimize the maximum lateness Lmax. The paper proposes a tight lower bound that enhances an existing bound on Lmax for 24.07% of the benchmark instances and matches it in 30.87% of cases. In addition, it models the problem using mixed integer programming (MIP), and solves small-sized instances exactly using CPLEX. It approximately solves larger-sized instances using a two-stage heuristic. The first stage constructs an initial solution via a first-fit heuristic that applies an iterative constraint programming (CP)-based neighborhood search. The second stage, which is also iterative, approximately solves a series of low-level assignment MIPs that are guided by feasibility constraints. It then enhances the solution via a high-level random local search. The approximate approach improves existing upper bounds by 27.45% on average, and obtains the optimum for 33.93% of the instances. Overall, the exact and approximate approaches identify the optimum for 39.07% of cases. The proposed approach is applicable to complex problems. It applies CP and MIP sequentially, exploiting the advantages of each, and hybridizes heuristic search with MIP. It embeds a new lookahead strategy that guards against infeasible search directions and constrains the search to improving directions only; it thus differs from traditional lookahead beam searches.
1
0
0
0
0
0
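The paper's two-stage CP/MIP heuristic cannot be reconstructed from the abstract; purely to make the objective concrete, here is a bare-bones shelf-based first-fit packer with due dates. The shelf packing, the earliest-due-date ordering and the rotation rule are our assumptions for the sketch, not the authors' algorithm; bins are processed sequentially for equal time T, so bin b completes at (b+1)T and an item's lateness is that completion time minus its due date.

```python
def first_fit(items, W, H, T):
    """items: list of (w, h, due); bins are W x H and each takes time T.
    Items are assumed to fit a bin in at least one orientation.
    Returns Lmax, the maximum lateness over all items."""
    bins = []                 # per bin: list of shelves [y, x_used, shelf_h]
    lateness = []
    for w, h, due in sorted(items, key=lambda it: it[2]):   # EDD order
        if h > H:                        # crude use of the 90-degree rotation
            w, h = h, w
        placed_in = None
        for b, shelves in enumerate(bins):
            for shelf in shelves:                           # existing shelf?
                if shelf[1] + w <= W and h <= shelf[2]:
                    shelf[1] += w
                    placed_in = b
                    break
            if placed_in is None:                           # new shelf?
                y_top = shelves[-1][0] + shelves[-1][2]
                if y_top + h <= H and w <= W:
                    shelves.append([y_top, w, h])
                    placed_in = b
            if placed_in is not None:
                break
        if placed_in is None:                               # open a new bin
            bins.append([[0, w, h]])
            placed_in = len(bins) - 1
        lateness.append((placed_in + 1) * T - due)
    return max(lateness)

items = [(4, 3, 10), (5, 5, 10), (6, 2, 20), (3, 3, 20), (7, 4, 30)]
print(first_fit(items, W=10, H=10, T=10))    # Lmax for the toy instance
```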
The earliest phases of high-mass star formation, as seen in NGC 6334 by \emph{Herschel}
To constrain models of high-mass star formation, the Herschel/HOBYS KP aims at discovering massive dense cores (MDCs) able to host the high-mass analogs of low-mass prestellar cores, which have been searched for over the past decade. We here focus on NGC6334, one of the best-studied HOBYS molecular cloud complexes. We used Herschel PACS and SPIRE 70-500 µm images of the NGC6334 complex complemented with (sub)millimeter and mid-infrared data. We built a complete procedure to extract ~0.1 pc dense cores with the getsources software, which simultaneously measures their far-infrared to millimeter fluxes. We carefully estimated the temperatures and masses of these dense cores from their SEDs. A cross-correlation with high-mass star formation signposts suggests a mass threshold of 75 Msun for MDCs in NGC6334. MDCs have temperatures of 9.5-40 K, masses of 75-1000 Msun, and densities of 10^5-10^8 cm-3. Their mid-IR emission is used to separate 6 IR-bright and 10 IR-quiet protostellar MDCs, while their 70 µm emission strength, with respect to fitted SEDs, helps identify 16 starless MDC candidates. The ability of the latter to host high-mass prestellar cores is investigated here and remains questionable. An increase in mass and density from the starless to the IR-quiet and IR-bright phases suggests that the protostars and MDCs simultaneously grow in mass. The statistical lifetimes of the high-mass prestellar and protostellar core phases, estimated to be 1-7x10^4 yr and at most 3x10^5 yr respectively, suggest a dynamical scenario of high-mass star formation. The present study provides good mass estimates for a statistically significant sample, covering the earliest phases of high-mass star formation. High-mass prestellar cores may not exist in NGC6334, favoring a scenario presented here, which simultaneously forms clouds and high-mass protostars.
0
1
0
0
0
0
Robust consistent a posteriori error majorants for approximate solutions of diffusion-reaction equations
Efficiency of the error control of numerical solutions of partial differential equations depends entirely on two factors: the accuracy of an a posteriori error majorant, and the computational cost of its evaluation for some test function/vector-function, plus the cost of obtaining the latter. In this paper, consistency of an a posteriori bound means that it is of the same order as the respective unimprovable a priori bound; it is therefore the basic characteristic related to the first factor. The paper is dedicated to elliptic diffusion-reaction equations. We present a guaranteed, robust a posteriori error majorant effective at any nonnegative constant reaction coefficient (r.c.). For a wide range of finite element solutions on quasiuniform meshes the majorant is consistent. For large values of the r.c. the majorant coincides with the majorant of Aubin (1972), which, as is known, is inconsistent for moderate r.c. ($<ch^{-2}$) and loses its meaning as the r.c. approaches zero. Our majorant also improves some other majorants derived for the Poisson and reaction-diffusion equations.
0
0
1
0
0
0
Extended opportunity cost model to find near equilibrium electricity prices under non-convexities
This paper finds near equilibrium prices for electricity markets with nonconvexities due to binary variables, in order to reduce the market participants' opportunity costs, such as generators' unrecovered costs. The opportunity cost is defined as the difference between the profit when the instructions of the market operator are followed and when the market participants can freely make their own decisions based on the market prices. We use the minimum complementarity approximation to the minimum total opportunity cost (MTOC) model from previous research, with tests on a much more realistic unit commitment (UC) model than previously used, including features such as reserve requirements, ramping constraints, and minimum up and down times. The developed model incorporates flexible price-responsive demand, as in previous research, but since not all demand is price responsive, we consider the more realistic case in which total demand is a mixture of fixed and flexible demand. Another improvement over previous MTOC research is computational: whereas the previous research had nonconvex terms among the objective function's continuous variables, we convert the objective to an equivalent form that contains only linear and convex quadratic terms in the continuous variables. We compare the unit commitment model with the standard social welfare optimization version of UC in a series of sensitivity analyses, varying flexible demand to represent varying degrees of future penetration of electric vehicles and smart appliances, different ratios of generation availability, and different values of transmission line capacities to consider possible congestion. The minimum total opportunity cost and social welfare solutions are mostly very close across scenarios, except in some extreme cases.
0
0
0
0
0
1
Kernel Two-Sample Hypothesis Testing Using Kernel Set Classification
The two-sample hypothesis testing problem is studied for the challenging scenario of high dimensional data sets with small sample sizes. We show that the two-sample hypothesis testing problem can be posed as a one-class set classification problem. In the set classification problem the goal is to classify a set of data points that are assumed to have a common class. We prove that the average probability of error given a set is less than or equal to the Bayes error and decreases as a power of $n$, the number of sample data points in the set. We use the positive definite Set Kernel for directly mapping sets of data to an associated Reproducing Kernel Hilbert Space, without the need to learn a probability distribution. We specifically solve the two-sample hypothesis testing problem using a one-class SVM in conjunction with the proposed Set Kernel. We compare the proposed method with the Maximum Mean Discrepancy, F-Test and T-Test methods on a number of challenging simulated high dimensional and small sample size data sets. We also perform two-sample hypothesis testing experiments on six cancer gene expression data sets and achieve zero type-I and type-II error results on all data sets.
0
0
0
1
0
0
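As a hedged sketch of the kernel machinery involved (our simplified reading, not the paper's exact construction): a natural set-level kernel is the mean-map kernel, the average pairwise kernel value between two sets, which compares their empirical mean embeddings in the RKHS without any density estimation; the classical MMD baseline the paper compares against can be written directly in terms of it.

```python
import numpy as np

def rbf_gram(X, Y, gamma=0.01):
    """RBF kernel Gram matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def set_kernel(A, B, gamma=0.01):
    """Mean-map kernel between two sets: inner product of their empirical
    mean embeddings, i.e. the average pairwise kernel value."""
    return rbf_gram(A, B, gamma).mean()

def mmd2(X, Y, gamma=0.01):
    """Biased empirical MMD^2, expressed via the set kernel above."""
    return set_kernel(X, X, gamma) + set_kernel(Y, Y, gamma) \
        - 2.0 * set_kernel(X, Y, gamma)

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (25, 200))     # high dimension, small sample
X2 = rng.normal(0.0, 1.0, (25, 200))    # same distribution (null)
Y = rng.normal(0.5, 1.0, (25, 200))     # mean-shifted alternative
print(mmd2(X, X2), mmd2(X, Y))          # small under null, larger under H1
```

In the paper's one-class formulation, a Gram matrix built from such a set kernel over many sets would be fed to a one-class SVM instead of thresholding the MMD statistic directly.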
An Efficient Approach for Removing Look-ahead Bias in the Least Square Monte Carlo Algorithm: Leave-One-Out
The least square Monte Carlo (LSM) algorithm proposed by Longstaff and Schwartz [2001] is the most widely used method for pricing options with early exercise features. The LSM estimator contains look-ahead bias, and the conventional technique of removing it necessitates an independent set of simulations. This study proposes a new approach for efficiently eliminating look-ahead bias by using the leave-one-out method, a well-known cross-validation technique for machine learning applications. The leave-one-out LSM (LOOLSM) method is illustrated with examples, including multi-asset options whose LSM price is biased high. The asymptotic behavior of look-ahead bias is also discussed with the LOOLSM approach.
0
0
0
0
0
1
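A compact sketch of the idea (ours, not the authors' code): in ordinary least squares the leave-one-out prediction is available in closed form from the leverages, $\hat y_{-i} = (\hat y_i - h_i y_i)/(1 - h_i)$, so the exercise decision in LSM can use out-of-sample predictions without the independent second simulation set that the conventional bias-removal technique requires. Option, basis and parameters below are illustrative.

```python
import numpy as np

def lsm_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, steps=50,
            n_paths=20000, loo=True, seed=0):
    """LSM price of a Bermudan put under Black-Scholes dynamics.

    loo=False: plain Longstaff-Schwartz (in-sample regression fit, hence
    look-ahead bias); loo=True: exercise decisions use closed-form
    leave-one-out predictions, the LOOLSM idea."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.standard_normal((n_paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)          # exercise at maturity
    for t in range(steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                   # discount one step back
        itm = K - S[:, t] > 0.0                   # regress on ITM paths only
        if itm.sum() < 10:
            continue
        x, y = S[itm, t], cash[itm]
        X = np.column_stack([np.ones_like(x), x, x * x])   # simple basis
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        fit = X @ beta
        if loo:                                   # leave-one-out predictions
            G = np.linalg.pinv(X.T @ X)
            h = np.einsum('ij,jk,ik->i', X, G, X)          # leverages
            fit = (fit - h * y) / (1.0 - h)
        exercise = np.maximum(K - x, 0.0) > fit
        idx = np.where(itm)[0][exercise]
        cash[idx] = K - S[idx, t]
    return np.exp(-r * dt) * cash.mean()

print(lsm_put(loo=False), lsm_put(loo=True))   # in-sample fit vs LOO
```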
Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities
Humans can ground natural language commands to tasks at both abstract and fine-grained levels of specificity. For instance, a human forklift operator can be instructed to perform a high-level action, like "grab a pallet" or a low-level action like "tilt back a little bit." While robots are also capable of grounding language commands to tasks, previous methods implicitly assume that all commands and tasks reside at a single, fixed level of abstraction. Additionally, methods that do not use multiple levels of abstraction encounter inefficient planning and execution times as they solve tasks at a single level of abstraction with large, intractable state-action spaces closely resembling real world complexity. In this work, by grounding commands to all the tasks or subtasks available in a hierarchical planning framework, we arrive at a model capable of interpreting language at multiple levels of specificity ranging from coarse to more granular. We show that the accuracy of the grounding procedure is improved when simultaneously inferring the degree of abstraction in language used to communicate the task. Leveraging hierarchy also improves efficiency: our proposed approach enables a robot to respond to a command within one second on 90% of our tasks, while baselines take over twenty seconds on half the tasks. Finally, we demonstrate that a real, physical robot can ground commands at multiple levels of abstraction allowing it to efficiently plan different subtasks within the same planning hierarchy.
1
0
0
0
0
0
Nonlocal Venttsel' diffusion in fractal-type domains: regularity results and numerical approximation
We study a nonlocal Venttsel' problem in a non-convex bounded domain with a Koch-type boundary. Regularity results of the strict solution are proved in weighted Sobolev spaces. The numerical approximation of the problem is carried out and optimal a priori error estimates are obtained.
0
0
1
0
0
0
Magnon Spin-Momentum Locking: Various Spin Vortices and Dirac Magnons in Noncollinear Antiferromagnets
We generalize the concept of the spin-momentum locking to magnonic systems and derive the formula to calculate the spin expectation value for one-magnon states of general two-body spin Hamiltonians. We give no-go conditions for magnon spin to be independent of momentum. As examples of the magnon spin-momentum locking, we analyze a one-dimensional antiferromagnet with the Néel order and two-dimensional kagome lattice antiferromagnets with the 120$^\circ$ structure. We find that the magnon spin depends on its momentum even when the Hamiltonian has the $z$-axis spin rotational symmetry, which can be explained in the context of a singular band point or a $U(1)$ symmetry breaking. A spin vortex in momentum space generated in a kagome lattice antiferromagnet has the winding number $Q=-2$, while the typical one observed in topological insulator surface states is characterized by $Q=+1$. A magnonic analogue of the surface states, the Dirac magnon with $Q=+1$, is found in another kagome lattice antiferromagnet. We also derive the sum rule for $Q$ by using the Poincaré-Hopf index theorem.
0
1
0
0
0
0
Analytic evaluation of some three- and four-electron atomic integrals involving s STO's and exponential correlation with unlinked $r_{ij}$'s
The method of evaluation outlined in a previous work has been utilized here to evaluate certain other three-electron and four-electron atomic integrals involving s Slater-type orbitals and exponential correlation with unlinked $r_{ij}$'s. Limiting expressions for various such integrals have been derived, which had not been done earlier. Closed-form expressions for $<r_{12} r_{13} / r_{14}>$, $<r_{12}r_{34}/r_{23}>$, $<r_{12}r_{23}/r_{34}>$, $<r_{12}r_{13}/r_{34}>$ and $<r_{12}r_{34}/r_{13}>$ have been obtained.
0
1
0
0
0
0
GIANT: Globally Improved Approximate Newton Method for Distributed Optimization
For distributed computing environments, we consider the empirical risk minimization problem and propose a distributed and communication-efficient Newton-type optimization method. At every iteration, each worker locally finds an Approximate NewTon (ANT) direction, which is sent to the main driver. The main driver, then, averages all the ANT directions received from workers to form a {\it Globally Improved ANT} (GIANT) direction. GIANT is highly communication efficient and naturally exploits the trade-offs between local computations and global communications in that more local computations result in fewer overall rounds of communications. Theoretically, we show that GIANT enjoys an improved convergence rate as compared with first-order methods and existing distributed Newton-type methods. Further, and in sharp contrast with many existing distributed Newton-type methods, as well as popular first-order methods, a highly advantageous practical feature of GIANT is that it only involves one tuning parameter. We conduct large-scale experiments on a computer cluster and, empirically, demonstrate the superior performance of GIANT.
1
0
0
1
0
0
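A numpy sketch of a single GIANT iteration for $\ell_2$-regularized logistic regression, under simplifications: exact local Hessian solves instead of conjugate gradient, no line search, and an illustrative shard layout. This follows the averaging scheme the abstract describes but is not the authors' implementation.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def giant_step(shards, w, lam=1e-3):
    """One simplified GIANT iteration.

    Round 1: workers contribute to the exact global gradient.
    Round 2: each worker returns its local Approximate NewTon (ANT)
    direction H_local^{-1} g; the driver averages them into the
    Globally Improved ANT (GIANT) direction."""
    n_total = sum(X.shape[0] for X, _ in shards)
    g = lam * w
    for X, y in shards:                              # round 1: gradient
        g = g + X.T @ (sigmoid(X @ w) - y) / n_total
    directions = []
    for X, y in shards:                              # round 2: ANT directions
        p = sigmoid(X @ w)
        H = (X.T * (p * (1 - p))) @ X / X.shape[0] + lam * np.eye(len(w))
        directions.append(np.linalg.solve(H, g))
    return w - np.mean(directions, axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 10))
w_true = rng.normal(size=10)
y = (rng.random(4000) < sigmoid(X @ w_true)).astype(float)
shards = [(X[i::4], y[i::4]) for i in range(4)]      # 4 "workers"
w = np.zeros(10)
for _ in range(10):
    w = giant_step(shards, w)
print(np.round(w, 2))              # approximately recovers w_true
```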
Stability and performance analysis of linear positive systems with delays using input-output methods
It is known that input-output approaches based on scaled small-gain theorems with constant $D$-scalings and integral linear constraints are non-conservative for the analysis of some classes of linear positive systems interconnected with uncertain linear operators. This dramatically contrasts with the case of general linear systems with delays where input-output approaches provide, in general, sufficient conditions only. Using these results we provide simple alternative proofs for many of the existing results on the stability of linear positive systems with discrete/distributed/neutral time-invariant/-varying delays and linear difference equations. In particular, we give a simple proof for the characterization of diagonal Riccati stability for systems with discrete-delays and generalize this equation to other types of delay systems. The fact that all those results can be reproved in a very simple way demonstrates the importance and the efficiency of the input-output framework for the analysis of linear positive systems. The approach is also used to derive performance results evaluated in terms of the $L_1$-, $L_2$- and $L_\infty$-gains. It is also flexible enough to be used for design purposes.
1
0
1
0
0
0
Joint Inference of User Community and Interest Patterns in Social Interaction Networks
Online social media have become an integral part of our social lives. Analyzing conversations on social media platforms can lead to complex probabilistic models for understanding social interaction networks. In this paper, we present a modeling approach for characterizing social interaction networks by jointly inferring user communities and interests based on social media interactions. We present several pattern inference models: i) the Interest pattern model (IPM), which captures population-level interaction topics; ii) the User interest pattern model (UIPM), which captures user-specific interaction topics; and iii) the Community interest pattern model (CIPM), which captures both community structures and user interests. We test our methods on Twitter data collected from the Purdue University community. From our model results, we observe the interaction topics and communities related to two big events within the Purdue University community, namely Purdue Day of Giving and Senator Bernie Sanders' visit to Purdue University as part of the Indiana Primary Election 2016. Constructing social interaction networks based on user interactions accounts for the similarity of users' interactions on various topics of interest and indicates their community membership beyond mere connectivity. We observed that the degree distributions of such networks follow a power law, indicating the existence of a few nodes with high levels of interaction and many nodes with few interactions. We also discuss the application of such networks as a useful tool to effectively disseminate specific information to a target audience when planning large-scale events, and demonstrate how to single out specific nodes in a given community by running network algorithms.
1
1
0
0
0
0
A mode theory for the electroweak interaction and its application to neutrino masses
A theory is proposed, in which the basic elements of reality are assumed to be something called modes. Particles are interpreted as composites of modes, corresponding to eigenstates of the interaction Hamiltonian of modes. At the fundamental level of the proposed theory, there are only two basic modes, whose spinor spaces are the two smallest nontrivial representation spaces of the SL(2,C) group, one being the complex conjugate of the other. All other modes are constructed from the two basic modes, making use of the operations of direct sum and direct product for the related spinor spaces. Accompanying the construction of direct-product modes, interactions among modes are introduced in a natural way, with the interaction Hamiltonian given by mappings between the corresponding state spaces. The interaction Hamiltonian thus obtained turns out to possess a form similar to a major part of the interaction Hamiltonian in the Glashow-Weinberg-Salam electroweak theory. In the proposed theory, it is possible for the second-order perturbation expansion of the energy to be free from ultraviolet divergence. This expansion is used to derive some approximate relations for neutrino masses; in particular, a rough estimate is obtained for the ratio of mass differences of neutrinos, which gives the correct order of magnitude compared with the experimental result.
0
1
0
0
0
0
Guided Machine Learning for power grid segmentation
The segmentation of large-scale power grids into zones is crucial for control room operators when managing grid complexity in near real time. In this paper we propose a new two-step method which is able to do this segmentation automatically while taking into account the real-time context, in order to help operators handle shifting dynamics. Our method relies on a "guided" machine learning approach. As a first step, we define and compute a task-specific "Influence Graph" in a guided manner: we simulate, on a given grid state, chosen interventions that are representative of our task of interest (here, managing active power flows). For visualization and interpretation, we then build a higher-level representation of the grid relevant to this task by applying the graph community detection algorithm \textit{Infomap} to this Influence Graph. To illustrate our method and demonstrate its practical interest, we apply it to commonly used test systems, the IEEE-14 and IEEE-118. We show promising and original interpretable results, especially on the previously well-studied RTS-96 system for grid segmentation. We finally share an initial investigation and results on a large-scale system, the French power grid, whose segmentation bore a surprising resemblance to RTE's historical partitioning.
0
0
0
1
0
0
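A toy sketch of the pipeline's two steps under stated substitutions: the paper builds the Influence Graph by simulating interventions (e.g. line disconnections) on a grid state and measuring active-power-flow responses, and then runs \textit{Infomap}; here the influence weights are hand-crafted rather than simulated, and, since networkx does not ship Infomap, greedy modularity maximization stands in for it purely for illustration.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hand-crafted "Influence Graph": edge weights would, in the real pipeline,
# measure how strongly an intervention at one node shifts flows at another.
influence = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.7,   # zone 1
             ("d", "e"): 0.9, ("e", "f"): 0.8, ("d", "f"): 0.7,   # zone 2
             ("c", "d"): 0.1}                                     # weak tie
G = nx.Graph()
for (u, v), w in influence.items():
    G.add_edge(u, v, weight=w)

zones = greedy_modularity_communities(G, weight="weight")
print([sorted(z) for z in zones])     # expect two zones split at the c-d tie
```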
121,123Sb NQR as a microscopic probe in Te doped correlated semimetal FeSb2: emergence of electronic Griffiths phase, magnetism and metallic behavior
$^{121,123}Sb$ nuclear quadrupole resonance (NQR) was applied to $Fe(Sb_{1-x}Te_x)_2$ in the low doping regime (\emph{x = 0, 0.01} and \emph{0.05}) as a microscopic zero-field probe to study the evolution of \emph{3d} magnetism and the emergence of metallic behavior. Whereas the NQR spectrum itself reflects the degree of local disorder via the width of the individual NQR lines, the spin-lattice relaxation rate (SLRR) $1/T_1(T)$ probes the fluctuations at the $Sb$ site. These fluctuations originate either from conduction electrons or from magnetic moments. In contrast to the semimetal $FeSb_2$, with a clear signature of charge- and spin-gap formation in $1/T_1(T)T \sim \exp(-\Delta/k_B T)$, the 1\% $Te$-doped system exhibits almost metallic conductivity and an almost filled gap. A weak divergence of the SLRR coefficient $1/T_1(T)T \sim T^{-n} \sim T^{-0.2}$ points towards the presence of electronic correlations at low temperatures, whereas the \textit{5\%} $Te$-doped sample exhibits a much larger divergence of the SLRR coefficient, showing $1/T_1(T)T \sim T^{-0.72}$. According to the specific heat divergence, a power law with $n = 2m = 0.56$ is expected for the SLRR. Furthermore, $Te$-doped $FeSb_2$ as a disordered paramagnetic metal might be a platform for the electronic Griffiths phase scenario. NQR evidences a substantial asymmetric broadening of the $^{121,123}Sb$ NQR spectrum for the \emph{5\%} sample. This has a purely electronic origin, in agreement with the electronic Griffiths phase, and stems probably from an enhanced $Sb$-$Te$ bond polarization and an electronic density shift towards the $Te$ atom inside the $Sb$-$Te$ dumbbell.
0
1
0
0
0
0
The Merging Path Plot: adaptive fusing of k-groups with likelihood-based model selection
There are many statistical tests that verify the null hypothesis that the variable of interest has the same distribution among k groups. But once the null hypothesis is rejected, how should one present the structure of dissimilarity between groups? In this article, we introduce the Merging Path Plot - a methodology, and factorMerger - an R package, for the exploration and visualization of k-group dissimilarities. Comparison of k groups is one of the most important issues in exploratory analyses and it has zillions of applications. The classical solution is to test a null hypothesis that observations from all groups come from the same distribution. If the global null hypothesis is rejected, a more detailed analysis of differences among pairs of groups is performed. The traditional approach is to use pairwise post-hoc tests in order to verify which groups differ significantly. However, with a large number of groups this approach fails in both the interpretation and the visualization layer. The Merging Path Plot methodology solves this problem by using an easy-to-understand description of dissimilarity among groups based on the Likelihood Ratio Test (LRT) statistic.
1
0
0
1
0
0
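factorMerger is an R package; as a language-neutral sketch of the merging-path idea (our reading, not the package's exact procedure), the following greedily fuses the pair of groups whose merge is cheapest under a Gaussian likelihood-ratio test with a common variance. The sequence of merges and LRT statistics is exactly the kind of information a Merging Path Plot displays.

```python
import numpy as np
from itertools import combinations

def merging_path(groups):
    """Greedy likelihood-based fusing of k groups (Gaussian model).

    At each step, merge the pair of (possibly already fused) groups with
    the smallest LRT statistic n*log(RSS_merged / RSS_full) ~ chi2(1)."""
    data = {name: np.asarray(x, float) for name, x in groups.items()}
    n = sum(len(x) for x in data.values())
    rss = lambda d: sum(((x - x.mean()) ** 2).sum() for x in d.values())
    path = []
    while len(data) > 1:
        full = rss(data)
        best = None
        for a, b in combinations(data, 2):
            rest = {k: v for k, v in data.items() if k not in (a, b)}
            rest[a + "+" + b] = np.concatenate([data[a], data[b]])
            stat = n * np.log(rss(rest) / full)
            if best is None or stat < best[0]:
                best = (stat, a, b)
        stat, a, b = best
        path.append((a, b, stat))
        data[a + "+" + b] = np.concatenate([data.pop(a), data.pop(b)])
    return path

rng = np.random.default_rng(1)
g = {"A": rng.normal(0.0, 1, 40), "B": rng.normal(0.1, 1, 40),
     "C": rng.normal(2.0, 1, 40)}
for a, b, s in merging_path(g):
    print(f"merge {a} + {b}: LRT = {s:.2f}")  # A and B fuse first (small LRT)
```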
Constraint on cosmological parameters by Hubble parameter from gravitational wave standard sirens of neutron star binary system
In this paper, we present a new method of measuring the Hubble parameter ($H(z)$), making use of the anisotropy of the luminosity distance ($d_{L}$) and the analysis of gravitational waves (GW) from neutron star (NS) binary systems. The method has never been put into practice before due to the lack of ability to detect GWs. LIGO's success in detecting GWs from a black hole (BH) binary merger demonstrates the possibility of this new method. We apply this method to several GW detection projects, including Advanced LIGO (Adv-LIGO), the Einstein Telescope (ET) and DECIGO, finding that the $H(z)$ from Adv-LIGO and ET is of poor accuracy, while the $H(z)$ from DECIGO shows good accuracy. We use the error information of $H(z)$ from DECIGO to simulate $H(z)$ data at every 0.1 redshift span, and put the mock data into the forecasting of cosmological parameters. Compared with the available 38 observed $H(z)$ data (OHD), the mock data show an obviously tighter constraint on cosmological parameters and a concomitantly higher value of the Figure of Merit (FoM). For a 3-year observation by standard sirens of DECIGO, the FoM value is as high as 834.9. If a 10-year observation is launched, the FoM could reach 2783.1. For comparison, the FoM of the 38 actually observed $H(z)$ data is 9.3. These improvements indicate that the new method has great potential in further cosmological constraints.
0
1
0
0
0
0
Semidefinite tests for latent causal structures
Testing whether a probability distribution is compatible with a given Bayesian network is a fundamental task in the field of causal inference, where Bayesian networks model causal relations. Here we consider the class of causal structures where all correlations between observed quantities are solely due to the influence from latent variables. We show that each model of this type imposes a certain signature on the observable covariance matrix in terms of a particular decomposition into positive semidefinite components. This signature, and thus the underlying hypothetical latent structure, can be tested in a computationally efficient manner via semidefinite programming. This stands in stark contrast with the algebraic geometric tools required if the full observable probability distribution is taken into account. The semidefinite test is compared with tests based on entropic inequalities.
0
0
1
1
0
0
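A sketch of what such a semidefinite feasibility test can look like for a hypothetical structure with two latent variables; the support-constrained PSD decomposition below is our reading of the abstract's informal description, and cvxpy is assumed as the modeling layer.

```python
import cvxpy as cp
import numpy as np

# Hypothetical structure: latent L1 -> (X1, X2, X3), latent L2 -> (X3, X4).
# Test: does the observed covariance decompose as a sum of PSD matrices,
# each supported on one latent's children, plus independent diagonal noise?
supports = [[0, 1, 2], [2, 3]]
n = 4

# A covariance that is compatible with the structure by construction.
a = np.array([1.0, 0.8, 0.5, 0.0])
b = np.array([0.0, 0.0, 0.6, 1.0])
Sigma = np.outer(a, a) + np.outer(b, b) + np.diag([0.3, 0.2, 0.4, 0.1])

parts = [cp.Variable((n, n), PSD=True) for _ in supports]
noise = cp.Variable(n, nonneg=True)
cons = [sum(parts) + cp.diag(noise) == Sigma]
for P, sup in zip(parts, supports):
    for i in range(n):
        for j in range(n):
            if i not in sup or j not in sup:
                cons.append(P[i, j] == 0)      # restrict each part's support
prob = cp.Problem(cp.Minimize(0), cons)
prob.solve()
print(prob.status)   # 'optimal' = feasible: compatible with the structure
```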
From atomistic model to the Peierls-Nabarro model with $γ$-surface for dislocations
The Peierls-Nabarro (PN) model for dislocations is a hybrid model that incorporates the atomistic information of the dislocation core structure into the continuum theory. In this paper, we study the convergence from a full atomistic model to the PN model with $\gamma$-surface for the dislocation in a bilayer system (e.g. bilayer graphene). We prove that the displacement field and the total energy of the dislocation solution of the PN model are asymptotically close to those of the full atomistic model. Our work can be considered as a generalization of the analysis of the convergence from the atomistic model to the Cauchy-Born rule for crystals without defects in the literature.
0
0
1
0
0
0
A family of monogenic $S_4$ quartic fields arising from elliptic curves
We consider partial torsion fields (fields generated by a root of a division polynomial) for elliptic curves. By analysing the reduction properties of elliptic curves, and applying the Montes Algorithm, we obtain information about the ring of integers. In particular, for the partial $3$-torsion fields for a certain one-parameter family of non-CM elliptic curves, we describe a power basis. As a result, we show that the one-parameter family of quartic $S_4$ fields given by $T^4 - 6T^2 - \alpha T - 3$ for $\alpha \in \mathbb{Z}$ such that $\alpha \pm 8$ are squarefree, are monogenic.
0
0
1
0
0
0
Aggregation and Resource Scheduling in Machine-type Communication Networks: A Stochastic Geometry Approach
Data aggregation is a promising approach to enable massive machine-type communication (mMTC). This paper focuses on the aggregation phase where a massive number of machine-type devices (MTDs) transmit to aggregators. By using non-orthogonal multiple access (NOMA) principles, we allow several MTDs to share the same orthogonal channel in our proposed hybrid access scheme. We develop an analytical framework based on stochastic geometry to investigate the system performance in terms of average success probability and average number of simultaneously served MTDs, under imperfect successive interference cancellation (SIC) at the aggregators, for two scheduling schemes: random resource scheduling (RRS) and channel-aware resource scheduling (CRS). We identify the power constraints on the MTDs sharing the same channel to attain a fair coexistence with purely orthogonal multiple access (OMA) setups, then power control coefficients are found so that these MTDs perform with similar reliability. We show that under high access demand, the hybrid scheme with CRS outperforms the OMA setup by simultaneously serving more MTDs with reduced power consumption.
1
0
0
1
0
0
Setting Players' Behaviors in World of Warcraft through Semi-Supervised Learning
Digital games are one of the major and most important fields in the entertainment domain, which also involves cinema and music. Numerous attempts have been made to improve the quality of games, including more realistic artistic production and advances in computer science. Assessing the player's behavior, a task known as player modeling, is currently the need of the hour, as it leads to possible improvements in terms of: (i) a better game interaction experience, (ii) better exploitation of the relationships between players, and (iii) increasing/maintaining the number of players interested in the game. In this paper we model players using the four basic behaviors proposed in \cite{BartleArtigo}, namely: achiever, explorer, socializer and killer. Our analysis is carried out using data obtained from the game "World of Warcraft" over 3 years (2006-2009). We employ a semi-supervised learning technique in order to find characteristics that possibly impact players' behavior.
1
0
0
0
0
0
Boundary Hamiltonian theory for gapped topological orders
In this letter, we report our systematic construction of the lattice Hamiltonian model of topological orders on open surfaces, with explicit boundary terms. We do this mainly for the Levin-Wen string-net model. The full Hamiltonian in our approach yields a topologically protected, gapped energy spectrum, with the corresponding wave functions robust under topology-preserving transformations of the lattice of the system. We explicitly present the wave functions of the ground states and boundary elementary excitations. We construct the creation and hopping operators of boundary quasi-particles. We find that given a bulk topological order, the gapped boundary conditions are classified by Frobenius algebras in its input data. Emergent topological properties of the ground states and boundary excitations are characterized by (bi-)modules over Frobenius algebras.
0
1
1
0
0
0
Euler characteristics of cominuscule quantum K-theory
We prove an identity relating the product of two opposite Schubert varieties in the (equivariant) quantum K-theory ring of a cominuscule flag variety to the minimal degree of a rational curve connecting the Schubert varieties. We deduce that the sum of the structure constants associated to any product of Schubert classes is equal to one. Equivalently, the sheaf Euler characteristic map extends to a ring homomorphism defined on the quantum K-theory ring.
0
0
1
0
0
0
Dialogue Act Sequence Labeling using Hierarchical encoder with CRF
Dialogue Act recognition associates dialogue acts (i.e., semantic labels) with utterances in a conversation. The problem of associating semantic labels with utterances can be treated as a sequence labeling problem. In this work, we build a hierarchical recurrent neural network using bidirectional LSTM as a base unit and a conditional random field (CRF) as the top layer to classify each utterance into its corresponding dialogue act. The hierarchical network learns representations at multiple levels, i.e., word level, utterance level, and conversation level. The conversation-level representations are input to the CRF layer, which takes into account not only all previous utterances but also their dialogue acts, thus modeling the dependency among both labels and utterances, an important consideration in natural dialogue. We validate our approach on two different benchmark data sets, Switchboard and Meeting Recorder Dialogue Act, and show performance improvement over the state-of-the-art methods by $2.2\%$ and $4.1\%$ absolute points, respectively. It is worth noting that the inter-annotator agreement on the Switchboard data set is $84\%$, and our method is able to achieve an accuracy of about $79\%$ despite being trained on noisy data.
1
0
0
0
0
0
Noise Stability is computable and low dimensional
Questions of noise stability play an important role in hardness of approximation in computer science as well as in the theory of voting. In many applications, the goal is to find an optimizer of noise stability among all possible partitions of $\mathbb{R}^n$, $n \geq 1$, into $k$ parts with given Gaussian measures $\mu_1,\ldots,\mu_k$. We call a partition $\epsilon$-optimal if its noise stability is optimal up to an additive $\epsilon$. In this paper, we give an explicit, computable function $n(\epsilon)$ such that an $\epsilon$-optimal partition exists in $\mathbb{R}^{n(\epsilon)}$. This result has implications for the computability of certain problems in non-interactive simulation, which are addressed in a subsequent work.
1
0
1
0
0
0