Dataset schema (recovered from the viewer header): title is a string of 7 to 239 characters; abstract is a string of 7 to 2.76k characters; cs, phy, math, stat, quantitative biology, and quantitative finance are int64 binary topic labels taking values 0 or 1. Each record below lists a title, an abstract, and its labels.
Discrete CMC surfaces in R^3 and discrete minimal surfaces in S^3. A discrete Lawson correspondence
The main result of this paper is a discrete Lawson correspondence between discrete CMC surfaces in R^3 and discrete minimal surfaces in S^3. This is a correspondence between two discrete isothermic surfaces. We show that this correspondence is an isometry in the following sense: it preserves the metric coefficients introduced previously by Bobenko and Suris for isothermic nets. Exactly as in the smooth case, this is a correspondence between nets with the same Lax matrices, and the immersion formulas also coincide with the smooth case.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Network Model Selection Using Task-Focused Minimum Description Length
Networks are fundamental models for data used in practically every application domain. In most instances, several implicit or explicit choices about the network definition impact the translation of underlying data to a network representation, and the subsequent question(s) about the underlying system being represented. Users of downstream network data may not even be aware of these choices or their impacts. We propose a task-focused network model selection methodology which addresses several key challenges. Our approach constructs network models from underlying data and uses minimum description length (MDL) criteria for selection. Our methodology measures efficiency, a general and comparable measure of the network's performance on a local (i.e., node-level) predictive task of interest. Selection on efficiency favors parsimonious (e.g., sparse) models to avoid overfitting and can be applied across arbitrary tasks and representations. We demonstrate stability, sensitivity, and significance testing within our methodology.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Concentration of curvature and Lipschitz invariants of holomorphic functions of two variables
By combining analytic and geometric viewpoints on the concentration of the curvature of the Milnor fibre, we prove that Lipschitz homeomorphisms preserve the zones of multi-scale curvature concentration as well as the gradient canyon structure of holomorphic functions of two variables. This yields the first new Lipschitz invariants after those discovered by Henry and Parusinski in 2003.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Beam tuning and bunch length measurement in the bunch compression operation at the cERL
Realization of a short bunch beam by manipulating the longitudinal phase space distribution with a finite longitudinal dispersion following an off-crest acceleration is a widely used technique. The technique was applied in a compact test accelerator of an energy-recovery linac scheme for compressing the bunch length at the return loop. A diagnostic system utilizing coherent transition radiation was developed for the beam tuning and for estimating the bunch length. By scanning the beam parameters, we experimentally found the best condition for the bunch compression. An RMS bunch length of 250±50 fs was obtained at a bunch charge of 2 pC. This result confirmed the design and the tuning procedure of the bunch compression operation for the future energy-recovery linac (ERL).
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Long-Lived Ultracold Molecules with Electric and Magnetic Dipole Moments
We create fermionic dipolar $^{23}$Na$^6$Li molecules in their triplet ground state from an ultracold mixture of $^{23}$Na and $^6$Li. Using magneto-association across a narrow Feshbach resonance followed by a two-photon STIRAP transfer to the triplet ground state, we produce $3\,{\times}\,10^4$ ground state molecules in a spin-polarized state. We observe a lifetime of $4.6\,\text{s}$ in an isolated molecular sample, approaching the $p$-wave universal rate limit. Electron spin resonance spectroscopy of the triplet state was used to determine the hyperfine structure of this previously unobserved molecular state.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bilinear generalized Radon transforms in the plane
Let $\sigma$ be arc-length measure on $S^1\subset \mathbb R^2$ and $\Theta$ denote rotation by an angle $\theta \in (0, \pi]$. Define a model bilinear generalized Radon transform, $$B_{\theta}(f,g)(x)=\int_{S^1} f(x-y)g(x-\Theta y)\, d\sigma(y),$$ an analogue of the linear generalized Radon transforms of Guillemin and Sternberg \cite{GS} and Phong and Stein (e.g., \cite{PhSt91,St93}). Operators such as $B_\theta$ are motivated by problems in geometric measure theory and combinatorics. For $\theta<\pi$, we show that $B_{\theta}: L^p(\mathbb R^2) \times L^q(\mathbb R^2) \to L^r(\mathbb R^2)$ if $\left(\frac{1}{p},\frac{1}{q},\frac{1}{r}\right)\in Q$, the polyhedron with the vertices $(0,0,0)$, $(\frac{2}{3}, \frac{2}{3}, 1)$, $(0, \frac{2}{3}, \frac{1}{3})$, $(\frac{2}{3},0,\frac{1}{3})$, $(1,0,1)$, $(0,1,1)$ and $(\frac{1}{2},\frac{1}{2},\frac{1}{2})$, except for $\left( \frac{1}{2},\frac{1}{2},\frac{1}{2} \right)$, where we obtain a restricted strong type estimate. For the degenerate case $\theta=\pi$, a more restrictive set of exponents holds. In the scale of normed spaces, $p,q,r \ge 1$, the type set $Q$ is sharp. Estimates for the same exponents are also proved for a class of bilinear generalized Radon transforms in $\mathbb R^2$ of the form $$ B(f,g)(x)=\int \int \delta(\phi_1(x,y)-t_1)\delta(\phi_2(x,z)-t_2) \delta(\phi_3(y,z)-t_3) f(y)g(z) \psi(y,z) \, dy\, dz, $$ where $\delta$ denotes the Dirac distribution, $t_1,t_2,t_3\in\mathbb R$, $\psi$ is a smooth cut-off and the defining functions $\phi_j$ satisfy some natural geometric assumptions.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Diffusion of new products with recovering consumers
We consider the diffusion of new products in the discrete Bass-SIR model, in which consumers who adopt the product can later "recover" and stop influencing their peers to adopt the product. To gain insight into the effect of the social network structure on the diffusion, we focus on two extreme cases. In the "most-connected" configuration where all consumers are inter-connected (complete network), averaging over all consumers leads to an aggregate model, which combines the Bass model for diffusion of new products with the SIR model for epidemics. In the "least-connected" configuration where consumers are arranged on a circle and each consumer can only be influenced by his left neighbor (one-sided 1D network), averaging over all consumers leads to a different aggregate model which is linear, and can be solved explicitly. We conjecture that for any other network, the diffusion is bounded from below and from above by that on a one-sided 1D network and on a complete network, respectively. When consumers are arranged on a circle and each consumer can be influenced by his left and right neighbors (two-sided 1D network), the diffusion is strictly faster than on a one-sided 1D network. This is different from the case of non-recovering adopters, where the diffusion on one-sided and on two-sided 1D networks is identical. We also propose a nonlinear model for recoveries, and show that consumers' heterogeneity has a negligible effect on the aggregate diffusion.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
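A minimal Monte Carlo sketch of the one-sided 1D configuration described in the abstract above, with hypothetical parameter values p, q, r (the abstract does not specify them); it follows a generic Gillespie-style event simulation rather than the authors' aggregate model.

    import numpy as np

    def bass_sir_one_sided(M=2000, p=0.03, q=0.3, r=0.1, T=30.0, seed=0):
        # states: 0 = susceptible, 1 = adopter (influencing), 2 = recovered
        rng = np.random.default_rng(seed)
        state = np.zeros(M, dtype=int)
        t, times, adopting = 0.0, [0.0], [0.0]
        while t < T:
            left = np.roll(state, 1)                      # one-sided: left neighbour only
            rate = np.where(state == 0, p + q * (left == 1), 0.0)
            rate = np.where(state == 1, r, rate)          # adopters recover at rate r
            total = rate.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)
            j = rng.choice(M, p=rate / total)
            state[j] += 1                                 # 0 -> 1 (adopt) or 1 -> 2 (recover)
            times.append(t)
            adopting.append(np.mean(state == 1))
        return np.array(times), np.array(adopting)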
Answering Complex Questions Using Open Information Extraction
While there has been substantial progress in factoid question-answering (QA), answering complex questions remains challenging, typically requiring both a large body of knowledge and inference techniques. Open Information Extraction (Open IE) provides a way to generate semi-structured knowledge for QA, but to date such knowledge has only been used to answer simple questions with retrieval-based methods. We overcome this limitation by presenting a method for reasoning with Open IE knowledge, allowing more complex questions to be handled. Using a recently proposed support graph optimization framework for QA, we develop a new inference model for Open IE, in particular one that can work effectively with multiple short facts, noise, and the relational structure of tuples. Our model significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty, while also removing the reliance on manually curated knowledge.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
On Testing Quantum Programs
A quantum computer (QC) can solve many computational problems more efficiently than a classical one. The field of QCs is growing: companies (such as D-Wave, IBM, Google, and Microsoft) are building QC offerings. We argue that software engineers should look into defining a set of software engineering practices that apply to QC software. To start this process, we give examples of challenges associated with testing such software and sketch potential solutions to some of these challenges.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Degree weighted recurrence networks for the analysis of time series data
Recurrence networks are powerful tools used effectively in the nonlinear analysis of time series data. The analysis in this context is done mostly with unweighted and undirected complex networks constructed with specific criteria from the time series. In this work, we propose a novel method to construct a "weighted recurrence network" (WRN) from a time series and show how it can reveal useful information regarding the structure of a chaotic attractor which the usual unweighted recurrence network cannot provide. In particular, we find that the node strength distribution of the WRN from every chaotic attractor follows a power law (with an exponential tail) whose index is characteristic of the fractal structure of the attractor. This leads to a new class among complex networks, to which networks from all standard chaotic attractors are found to belong. In addition, we present generalized definitions for the clustering coefficient and the characteristic path length, and show that these measures can effectively discriminate chaotic dynamics from white noise and $1/f$ colored noise. Our results indicate that the WRN and the associated measures can become potentially important tools for the analysis of short and noisy time series from real-world systems, as chaotic systems are clearly demarcated from noisy or stochastic ones.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
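A sketch of one plausible WRN construction: since the abstract does not give the edge weighting, inverse distance within the recurrence threshold is assumed here. The node strength distribution is then the quantity whose power-law tail the abstract discusses.

    import numpy as np

    def delay_embed(x, m=3, tau=1):
        n = len(x) - (m - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

    def weighted_recurrence_network(x, m=3, tau=1, eps=0.2):
        Y = delay_embed(np.asarray(x, dtype=float), m, tau)
        D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        # recurrence criterion D < eps, self-loops excluded; assumed weight 1/D
        return np.where((D < eps) & (D > 0), 1.0 / np.maximum(D, 1e-12), 0.0)

    x = np.random.default_rng(1).standard_normal(1000)     # stand-in for a chaotic series
    strength = weighted_recurrence_network(x).sum(axis=1)  # node strength per vertex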
Parcels v0.9: prototyping a Lagrangian Ocean Analysis framework for the petascale age
As Ocean General Circulation Models (OGCMs) move into the petascale age, where the output from global high-resolution model runs can be of the order of hundreds of terabytes in size, tools to analyse the output of these models will need to scale up too. Lagrangian Ocean Analysis, where virtual particles are tracked through hydrodynamic fields, is an increasingly popular way to analyse OGCM output, by mapping pathways and connectivity of biotic and abiotic particulates. However, the current software stack of Lagrangian Ocean Analysis codes is not dynamic enough to cope with the increasing complexity, scale and need for customisation of use-cases. Furthermore, most community codes are developed for stand-alone use, making it a nontrivial task to integrate virtual particles at runtime of the OGCM. Here, we introduce the new Parcels code, which was designed from the ground up to be sufficiently scalable to cope with petascale computing. We highlight its API design that combines flexibility and customisation with the ability to optimise for HPC workflows, following the paradigm of domain-specific languages. Parcels is primarily written in Python, utilising the wide range of tools available in the scientific Python ecosystem, while generating low-level C-code and using Just-In-Time compilation for performance-critical computation. We show a worked-out example of its API, and validate the accuracy of the code against seven idealised test cases. This version~0.9 of Parcels is focussed on laying out the API, with future work concentrating on optimisation, efficiency and at-runtime coupling with OGCMs.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
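As a flavor of the API mentioned in the abstract above, a minimal advection script in the style of the Parcels tutorials; the file names, NetCDF variable names, and dimension mappings are placeholders, and keyword signatures may differ between Parcels versions.

    from datetime import timedelta
    from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4

    # Placeholder OGCM output: adapt filenames/variables/dimensions to real data.
    fieldset = FieldSet.from_netcdf(
        filenames={"U": "ogcm_u.nc", "V": "ogcm_v.nc"},
        variables={"U": "uo", "V": "vo"},
        dimensions={"lon": "lon", "lat": "lat", "time": "time"})

    pset = ParticleSet(fieldset=fieldset, pclass=JITParticle,  # JIT: generated C code
                       lon=[30.0], lat=[-30.0])
    pset.execute(AdvectionRK4,                                 # built-in RK4 kernel
                 runtime=timedelta(days=10), dt=timedelta(minutes=5))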
TFLMS: Large Model Support in TensorFlow by Graph Rewriting
While accelerators such as GPUs have limited memory, deep neural networks are becoming larger and no longer fit within the memory of accelerators for training. We propose an approach to tackle this problem by rewriting the computational graph of a neural network, in which swap-out and swap-in operations are inserted to temporarily store intermediate results in CPU memory. In particular, we first revise the concept of a computational graph by defining a concrete semantics for variables in a graph. We then formally show how to derive swap-out and swap-in operations from an existing graph and present rules to optimize the graph. To realize our approach, we developed a module in TensorFlow, named TFLMS. TFLMS has been published as a pull request to the TensorFlow repository as a contribution to the TensorFlow community. With TFLMS, we were able to train ResNet-50 and 3DUNet with 4.7x and 2x larger batch sizes, respectively. In particular, we were able to train 3DUNet using images of size $192^3$ for image segmentation, which, without TFLMS, had previously been possible only by dividing the images into smaller pieces, which affects accuracy.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
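A toy illustration of the rewriting idea, using pure-Python data structures rather than the TensorFlow implementation: tensors whose consumers are scheduled much later than their producer get routed through hypothetical swap_out/swap_in nodes.

    def insert_swaps(consumers, schedule, max_live_steps=1):
        """consumers: node -> list of nodes consuming its output;
        schedule: topological execution order of the graph."""
        pos = {n: i for i, n in enumerate(schedule)}
        edits = []
        for node, users in consumers.items():
            for u in users:
                if pos[u] - pos[node] > max_live_steps:
                    # reroute node -> u as node -> swap_out -> (CPU) -> swap_in -> u
                    edits.append((node, "swap_out(%s)" % node,
                                  "swap_in(%s)" % node, u))
        return edits

    graph = {"conv1": ["conv2", "grad1"], "conv2": ["grad1"], "grad1": []}
    print(insert_swaps(graph, ["conv1", "conv2", "grad1"]))
    # [('conv1', 'swap_out(conv1)', 'swap_in(conv1)', 'grad1')]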
Machine Learning for Quantum Dynamics: Deep Learning of Excitation Energy Transfer Properties
Understanding the relationship between the structure of light-harvesting systems and their excitation energy transfer properties is of fundamental importance in many applications, including the development of next generation photovoltaics. Natural light harvesting in photosynthesis shows remarkable excitation energy transfer properties, which suggests that pigment-protein complexes could serve as blueprints for the design of nature-inspired devices. Mechanistic insights into energy transport dynamics can be gained by leveraging numerically involved propagation schemes such as the hierarchical equations of motion (HEOM). Solving these equations, however, is computationally costly due to the adverse scaling with the number of pigments. Therefore, virtual high-throughput screening, which has become a powerful tool in material discovery, is less readily applicable for the search for novel excitonic devices. We propose the use of artificial neural networks to bypass the computational limitations of established techniques for exploring the structure-dynamics relation in excitonic systems. Once trained, our neural networks reduce computational costs by several orders of magnitude. Our predicted transfer times and transfer efficiencies exhibit similar or even higher accuracies than frequently used approximate methods such as secular Redfield theory.
Labels: cs=0, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
Multi-Player Bandits: A Trekking Approach
We study stochastic multi-armed bandits with many players. The players do not know the number of players, cannot communicate with each other and if multiple players select a common arm they collide and none of them receive any reward. We consider the static scenario, where the number of players remains fixed, and the dynamic scenario, where the players enter and leave at any time. We provide algorithms based on a novel `trekking approach' that guarantees constant regret for the static case and sub-linear regret for the dynamic case with high probability. The trekking approach eliminates the need to estimate the number of players resulting in fewer collisions and improved regret performance compared to the state-of-the-art algorithms. We also develop an epoch-less algorithm that eliminates any requirement of time synchronization across the players provided each player can detect the presence of other players on an arm. We validate our theoretical guarantees using simulation based and real test-bed based experiments.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
High-resolution investigation of spinal cord and spine
High-resolution non-invasive 3D study of intact spine and spinal cord morphology at the level of complex vascular and neuronal organization is a crucial issue for the development of treatments for injuries and pathologies of the central nervous system (CNS). X-ray phase contrast tomography enables high quality 3D visualization, in an ex-vivo mouse model, of both the vascular and neuronal networks of the soft spinal cord tissue at scales from millimeters to hundreds of nanometers, without any contrast agents or sectioning. Until now, high-resolution 3D visualization of the spinal cord has mostly been limited to imaging of the organ extracted from the vertebral column, because highly absorbing bony tissue drastically reduces the morphological detail of soft tissue in the image. However, the extremely destructive procedure of bone removal leads to sample deterioration and, therefore, to the loss of a considerable part of the information about the object. In this work we present a data analysis procedure that yields high-resolution, high-contrast 3D images of the intact mouse spinal cord surrounded by the vertebrae, preserving the rich micro-detail of the spinal cord within. Our results are a first step on the difficult path toward high-resolution investigation of the central nervous system in in-vivo models.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Bohr--Rogosinski radius for analytic functions
There are a number of articles dealing with Bohr's phenomenon, whereas only a few papers have appeared in the literature on Rogosinski's radii for analytic functions defined on the unit disk $|z|<1$. In this article, we introduce and investigate Bohr-Rogosinski radii for analytic functions defined for $|z|<1$. Also, we prove several different improved versions of the classical Bohr inequality. Finally, we discuss the Bohr-Rogosinski radius for a class of subordinations. All the results are proved to be sharp.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Methods to locate Saddle Points in Complex Landscapes
We present a class of simple algorithms that allows one to find the reaction path in systems with a complex potential energy landscape. The approach does not need any knowledge of the product state and does not require the calculation of any second derivatives. The underlying idea is to use two nearby points in configuration space to locate the path of slowest ascent. By introducing a weak noise term, the algorithm is able to find even low-lying saddle points that are not reachable by means of a slowest ascent path. Since the algorithm makes use only of the value of the potential and its gradient, the computational effort to find saddles is linear in the number of degrees of freedom if the potential is short-ranged. We test the performance of the algorithm on two potential energy landscapes. For the Müller-Brown surface we find that the algorithm always finds the correct saddle point. For the modified Müller-Brown surface, which has a saddle point that is not reachable by means of a slowest ascent path, the algorithm is still able to find this saddle point with high probability.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
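The abstract describes the method only at a high level, so the following is a naive reading of a slowest-ascent search with weak noise, run on the Müller-Brown surface (standard published parameters); the paper's actual two-point update rule may differ.

    import numpy as np

    A  = [-200.0, -100.0, -170.0, 15.0]
    a  = [-1.0, -1.0, -6.5, 0.7]
    b  = [0.0, 0.0, 11.0, 0.6]
    c  = [-10.0, -10.0, -6.5, 0.7]
    X0 = [1.0, 0.0, -0.5, -1.0]
    Y0 = [0.0, 0.5, 1.5, 1.0]

    def V(p):
        x, y = p
        return sum(A[i] * np.exp(a[i] * (x - X0[i]) ** 2
                                 + b[i] * (x - X0[i]) * (y - Y0[i])
                                 + c[i] * (y - Y0[i]) ** 2) for i in range(4))

    def slowest_ascent(p, step=0.01, noise=1e-3, iters=4000, ndirs=72, seed=0):
        rng = np.random.default_rng(seed)
        d_prev = None
        for _ in range(iters):
            best, best_rise = None, np.inf
            for t in np.linspace(0.0, 2.0 * np.pi, ndirs, endpoint=False):
                d = np.array([np.cos(t), np.sin(t)])
                if d_prev is not None and d @ d_prev <= 0.0:
                    continue                        # never double back
                rise = V(p + step * d) - V(p)
                if 0.0 < rise < best_rise:          # uphill, but as gently as possible
                    best, best_rise = d, rise
            if best is None:                        # all allowed directions descend:
                return p                            # a saddle region has been crossed
            p = p + step * best + noise * rng.standard_normal(2)
            d_prev = best
        return p

    print(slowest_ascent(np.array([-0.55, 1.45])))  # start near a known minimum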
On the Complexity of Opinions and Online Discussions
In an increasingly polarized world, demagogues who reduce complexity down to simple arguments based on emotion are gaining in popularity. Are opinions and online discussions falling into demagoguery? In this work, we aim to provide computational tools to investigate this question and, by doing so, explore the nature and complexity of online discussions and their space of opinions, uncovering where each participant lies. More specifically, we present a modeling framework to construct latent representations of opinions in online discussions which are consistent with human judgements, as measured by online voting. If two opinions are close in the resulting latent space of opinions, it is because humans think they are similar. Our modeling framework is theoretically grounded and establishes a surprising connection between opinions and voting models and the sign-rank of a matrix. Moreover, it also provides a set of practical algorithms to both estimate the dimension of the latent space of opinions and infer where opinions expressed by the participants of an online discussion lie in this space. Experiments on a large dataset from Yahoo! News, Yahoo! Finance, Yahoo! Sports, and the Newsroom app suggest that unidimensional opinion models may often be unable to accurately represent online discussions, provide insights into human judgements and opinions, and show that our framework is able to circumvent language nuances such as sarcasm or humor by relying on human judgements instead of textual analysis.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Free energy distribution of the stationary O'Connell-Yor directed random polymer model
We study the semi-discrete directed polymer model introduced by O'Connell-Yor in its stationary regime, based on our previous work on the stationary $q$-totally asymmetric simple exclusion process ($q$-TASEP) using a two-sided $q$-Whittaker process. We give a formula for the free energy distribution of the polymer model in terms of a Fredholm determinant and show that the universal KPZ stationary distribution appears in the long time limit. We also consider the limit to the stationary KPZ equation and discuss the connections with previously found formulas.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Personalized Gaussian Processes for Forecasting of Alzheimer's Disease Assessment Scale-Cognition Sub-Scale (ADAS-Cog13)
In this paper, we introduce the use of a personalized Gaussian Process model (pGP) to predict per-patient changes in ADAS-Cog13 -- a significant predictor of Alzheimer's Disease (AD) in the cognitive domain -- using data from each patient's previous visits, and testing on future (held-out) data. We start by learning a population-level model using multi-modal data from previously seen patients using a base Gaussian Process (GP) regression. The personalized GP (pGP) is formed by adapting the base GP sequentially over time to a new (target) patient using domain adaptive GPs. We extend this personalized approach to predict the values of ADAS-Cog13 over the future 6, 12, 18, and 24 months. We compare this approach to a GP model trained only on past data of the target patients (tGP), as well as to a new approach that combines pGP with tGP. We find that the new approach, combining pGP with tGP, leads to large improvements in accurately forecasting future ADAS-Cog13 scores.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Analysis and Control of a Non-Standard Hyperbolic PDE Traffic Flow Model
The paper provides results for a non-standard, hyperbolic, 1-D, nonlinear traffic flow model on a bounded domain. The model consists of two first-order PDEs with a dynamic boundary condition that involves the time derivative of the velocity. The proposed model has features that are important from a traffic-theoretic point of view: it is completely anisotropic, and information travels forward at exactly the same speed as traffic. It is shown that, for all physically meaningful initial conditions, the model admits a globally defined, unique, classical solution that remains positive and bounded for all times. Moreover, it is shown that global stabilization can be achieved for arbitrary equilibria by means of an explicit boundary feedback law. The stabilizing feedback law depends only on the inlet velocity; consequently, the measurement requirements for the implementation of the proposed boundary feedback law are minimal. The efficiency of the proposed boundary feedback law is demonstrated by means of a numerical example.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Learning Robust Representations for Computer Vision
Unsupervised learning techniques in computer vision often require learning latent representations, such as low-dimensional linear and non-linear subspaces. Noise and outliers in the data can frustrate these approaches by obscuring the latent spaces. Our main goal is deeper understanding and new development of robust approaches for representation learning. We provide a new interpretation for existing robust approaches and present two specific contributions: a new robust PCA approach, which can separate foreground features from dynamic background, and a novel robust spectral clustering method, that can cluster facial images with high accuracy. Both contributions show superior performance to standard methods on real-world test sets.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Detection, Estimation and Grid Matching of Multiple Targets with Single Snapshot Measurements
In this work, we explore the problems of detecting the number of narrow-band, far-field targets and estimating their corresponding directions from single snapshot measurements. The principles of sparse signal recovery (SSR) are used for the single snapshot detection and estimation of multiple targets. In the SSR framework, the DoA estimation problem is grid based and can be posed as the lasso optimization problem. However, the SSR framework for DoA estimation gives rise to the grid mismatch problem when the unknown targets (sources) are not matched with the estimation grid chosen for the construction of the array steering matrix at the receiver. The block sparse recovery framework is known to mitigate the grid mismatch problem by jointly estimating the targets and their corresponding offsets from the estimation grid using the group lasso estimator. The corresponding detection problem reduces to estimating the optimal regularization parameter ($\tau$) of the lasso (in the case of perfect grid matching) or group lasso estimation problem for achieving the required probability of correct detection ($P_c$). We propose asymptotic and finite sample test statistics for detecting the number of sources with the required $P_c$ at moderate to high signal to noise ratios. Once the number of sources is detected, or equivalently the optimal $\hat{\tau}$ is estimated, the corresponding estimation and grid matching of the DoAs can be performed by solving the lasso or group lasso problem at $\hat{\tau}$.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
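A self-contained sketch of the grid-based lasso formulation mentioned above, using a complex-valued ISTA solver on a uniform linear array; the array geometry, grid, and regularization value tau are illustrative, and the detection step (choosing tau) is omitted.

    import numpy as np

    def steering_matrix(grid_deg, n_sensors=16, spacing=0.5):
        theta = np.deg2rad(grid_deg)
        k = np.arange(n_sensors)[:, None]
        return np.exp(-2j * np.pi * spacing * k * np.sin(theta)[None, :])

    def ista_lasso(A, y, tau, iters=500):
        # minimize 0.5*||A x - y||^2 + tau*||x||_1 over complex x
        L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1], dtype=complex)
        for _ in range(iters):
            g = x - A.conj().T @ (A @ x - y) / L
            x = np.maximum(np.abs(g) - tau / L, 0.0) * np.exp(1j * np.angle(g))
        return x

    rng = np.random.default_rng(0)
    grid = np.arange(-90.0, 90.5, 0.5)            # DoA estimation grid (degrees)
    A = steering_matrix(grid)
    y = A[:, [140, 260]] @ np.array([1.0, 0.8]) \
        + 0.05 * (rng.standard_normal(16) + 1j * rng.standard_normal(16))
    x_hat = ista_lasso(A, y, tau=0.5)
    print(grid[np.abs(x_hat) > 0.1])              # estimated directions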
Small-Variance Asymptotics for Nonparametric Bayesian Overlapping Stochastic Blockmodels
The latent feature relational model (LFRM) is a generative model for graph-structured data that learns a binary vector representation for each node in the graph. The binary vector denotes the node's membership in one or more communities. At its core, the LFRM (Miller et al., 2009) is an overlapping stochastic blockmodel, which defines the link probability between any pair of nodes as a bilinear function of their community membership vectors. Moreover, using a nonparametric Bayesian prior (the Indian Buffet Process) enables learning the number of communities automatically from the data. However, despite its appealing properties, inference in the LFRM remains a challenge and is typically done via MCMC methods, which can be slow and may take a long time to converge. In this work, we develop a small-variance asymptotics based framework for the nonparametric Bayesian LFRM. This leads to an objective function that retains the nonparametric Bayesian flavor of the LFRM, while enabling us to design deterministic inference algorithms for this model that are easy to implement (using generic or specialized optimization routines) and are fast in practice. Our results on several benchmark datasets demonstrate that our algorithm is competitive with methods such as MCMC, while being much faster.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Towards a Science of Mind
The ancient mind/body problem continues to be one of the deepest mysteries of science and of the human spirit. Despite major advances in many fields, there is still no plausible link between subjective experience (qualia) and its realization in the body. This paper outlines some of the elements of a rigorous science of mind (SoM); key ideas include scientific realism of mind, agnostic mysterianism, careful attention to language, and a focus on concrete (touchstone) questions and results.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Resistance distance criterion for optimal slack bus selection
We investigate the dependence of transmission losses on the choice of a slack bus in high voltage AC transmission networks. We formulate a transmission loss minimization problem in terms of slack variables representing the additional power injection that each generator provides to compensate the transmission losses. We show analytically that for transmission lines having small, homogeneous resistance over reactance ratios ${r/x\ll1}$, transmission losses are generically minimal in the case of a unique \textit{slack bus} instead of a distributed slack bus. For the unique slack bus scenario, to lowest order in ${r/x}$, transmission losses depend linearly on a resistance distance based indicator measuring the separation of the slack bus candidate from the rest of the network. We confirm these results numerically for several IEEE and Pegase test cases, and show that our predictions qualitatively hold also in the case of lines having inhomogeneous ${r/x}$ ratios, with optimal slack bus choices typically reducing transmission losses by $10\%$.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Interpolation in the Presence of Domain Inhomogeneity
Standard interpolation techniques are implicitly based on the assumption that the signal lies on a homogeneous domain. In this letter, the proposed interpolation method instead exploits prior information about domain inhomogeneity, characterized by different, potentially overlapping, subdomains. By introducing a domain-similarity metric for each sample, the interpolation process is then based on a domain-informed consistency principle. We illustrate and demonstrate the feasibility of domain-informed linear interpolation in 1D, and also, on a real fMRI image in 2D. The results show the benefit of incorporating domain knowledge so that, for example, sharp domain boundaries can be recovered by the interpolation, if such information is available.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
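A minimal 1D sketch of the idea above, using an illustrative kernel-weighted rule restricted to same-subdomain samples (the letter's actual consistency principle and similarity metric are more general); note how the sharp boundary between t=2 and t=3 is preserved.

    import numpy as np

    def domain_informed_interp(t, t_dom, ts, ys, doms, sigma=1.0):
        w = np.exp(-0.5 * ((t - ts) / sigma) ** 2) * (doms == t_dom)
        if w.sum() == 0.0:                   # no same-domain samples: plain fallback
            w = np.exp(-0.5 * ((t - ts) / sigma) ** 2)
        return float((w * ys).sum() / w.sum())

    ts   = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
    ys   = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])      # jump between t = 2 and t = 3
    doms = np.array([0, 0, 0, 1, 1, 1])                  # prior subdomain labels
    print(domain_informed_interp(2.4, 0, ts, ys, doms))  # stays on the left plateau
    print(domain_informed_interp(2.6, 1, ts, ys, doms))  # jumps to the right plateau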
Lectures on the mean values of functionals -- An elementary introduction to infinite-dimensional probability
This is an elementary introduction to infinite-dimensional probability. In the lectures, we compute the exact mean values of some functionals on C[0,1] and L[0,1] by considering these functionals as infinite-dimensional random variables. The results show that a complete concentration-of-measure phenomenon exists for these mean values, since the variances are all zero.
Labels: cs=0, phy=1, math=1, stat=1, quantitative biology=0, quantitative finance=0
Synthesis of Optimal Resilient Control Strategies
Repair mechanisms are important within resilient systems to maintain the system in an operational state after an error has occurred. Usually, constraints are imposed on the repair mechanisms, e.g., concerning the time or resources required (such as energy consumption or other kinds of costs). For systems modeled by Markov decision processes (MDPs), we introduce the concept of resilient schedulers, which represent control strategies guaranteeing that these constraints are always met within some given probability. Assigning rewards to the operational states of the system, we then aim for resilient schedulers which maximize the long-run average reward, i.e., the expected mean payoff. We present a pseudo-polynomial algorithm that decides whether a resilient scheduler exists and, if so, yields an optimal resilient scheduler. We also show that the decision problem of whether a resilient scheduler exists is already PSPACE-hard.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Limits of the Kucera-Gacs coding method
Every real is computable from a Martin-Loef random real. This well-known result in algorithmic randomness was proved by Kucera and Gacs. In this survey article we discuss various approaches to the problem of coding an arbitrary real into a Martin-Loef random real, and also describe new results concerning optimal methods of coding. We start with a simple presentation of the original methods of Kucera and Gacs and then rigorously demonstrate their limitations in terms of the size of the redundancy in the codes that they produce. Armed with a deeper understanding of these methods, we then proceed to motivate and illustrate aspects of the new coding method that was recently introduced by Barmpalias and Lewis-Pye and which achieves optimal logarithmic redundancy, an exponential improvement over the original redundancy bounds.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Subspace Tracking Algorithms for Millimeter Wave MIMO Channel Estimation with Hybrid Beamforming
This paper proposes the use of subspace tracking algorithms for performing MIMO channel estimation at millimeter wave (mmWave) frequencies. Using a subspace approach, we develop a protocol enabling the estimation of the right (resp. left) singular vectors at the transmitter (resp. receiver) side; then, we adapt the projection approximation subspace tracking with deflation (PASTd) and the orthogonal Oja (OOJA) algorithms to our framework and obtain two channel estimation algorithms. The hybrid analog/digital nature of the beamformer is also explicitly taken into account at the algorithm design stage. Numerical results show that the proposed estimation algorithms are effective, and that they perform better than two relevant competing alternatives available in the open literature.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Thomas Precession for Dressed Particles
We consider a particle dressed with boundary gravitons in three-dimensional Minkowski space. The existence of BMS transformations implies that the particle's wavefunction picks up a Berry phase when subjected to changes of reference frames that trace a closed path in the asymptotic symmetry group. We evaluate this phase and show that, for BMS superrotations, it provides a gravitational generalization of Thomas precession. In principle, such phases are observable signatures of asymptotic symmetries.
Labels: cs=0, phy=1, math=1, stat=0, quantitative biology=0, quantitative finance=0
Exponential random graphs behave like mixtures of stochastic block models
We study the behavior of exponential random graphs in both the sparse and the dense regime. We show that exponential random graphs are approximate mixtures of graphs with independent edges whose probability matrices are critical points of an associated functional, thereby satisfying a certain matrix equation. In the dense regime, every solution to this equation is close to a block matrix, implying that the exponential random graph behaves roughly like a mixture of stochastic block models. We also show existence and uniqueness of solutions to this equation for several families of exponential random graphs, including the case where the subgraphs are counted with positive weights and the case where all weights are small in absolute value. In particular, this generalizes some of the results in a paper by Chatterjee and Diaconis from the dense regime to the sparse regime and strengthens their bounds from the cut-metric to the one-metric.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Certificate Enhanced Data-Flow Analysis
Proof-carrying code was proposed as a solution to ensure a trust relationship between two parties: a (heavyweight) analyzer and a (lightweight) checker. The analyzer verifies the conformance of a given application to a specified property and generates a certificate attesting to the validity of the analysis result. It then suffices for the checker to test the consistency of the proof instead of constructing it. We set out to study the applicability of this technique in the context of data-flow analysis. In particular, we want to know if there is a significant performance difference between the analyzer and the checker. Therefore, we developed a tool, called DCert, implementing an inter-procedural, context- and flow-sensitive data-flow analyzer and checker for Android. Applying our tool to real-world large applications, we found that checking can be up to 8 times faster than verification. This important gain in time suggests a potential for equipping applications on app stores with certificates that can be checked on mobile devices, which are limited in computation and storage resources. We describe our implementation and report on experimental results.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Small cells in a Poisson hyperplane tessellation
Until now, little was known about the properties of small cells in a Poisson hyperplane tessellation. The few existing results were either heuristic or applied only to the two-dimensional case and to very specific size functionals and directional distributions. This paper fills this gap by providing a systematic study of small cells in a Poisson hyperplane tessellation of arbitrary dimension, arbitrary directional distribution $\varphi$ and with respect to an arbitrary size functional $\Sigma$. More precisely, we investigate the distribution of the typical cell $Z$, conditioned on the event $\{\Sigma(Z)<a\}$, where $a\to0$ and $\Sigma$ is a size functional, i.e. a functional on the set of convex bodies which is continuous, not identically zero, homogeneous of degree $k>0$, and increasing with respect to set inclusion. We focus on the number of facets and the shape of such small cells. We show in various general settings that small cells tend to minimize the number of facets and that they have a non-degenerate limit shape distribution which depends on the size functional $\Sigma$ and the directional distribution. We also exhibit a class of directional distributions for which cells with small inradius do not tend to minimize the number of facets.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Privacy-Aware Guessing Efficiency
We investigate the problem of guessing a discrete random variable $Y$ under a privacy constraint dictated by another correlated discrete random variable $X$, where both guessing efficiency and privacy are assessed in terms of the probability of correct guessing. We define $h(P_{XY}, \epsilon)$ as the maximum probability of correctly guessing $Y$ given an auxiliary random variable $Z$, where the maximization is taken over all $P_{Z|Y}$ ensuring that the probability of correctly guessing $X$ given $Z$ does not exceed $\epsilon$. We show that the map $\epsilon\mapsto h(P_{XY}, \epsilon)$ is strictly increasing, concave, and piecewise linear, which allows us to derive a closed form expression for $h(P_{XY}, \epsilon)$ when $X$ and $Y$ are connected via a binary-input binary-output channel. For $(X^n, Y^n)$ being pairs of independent and identically distributed binary random vectors, we similarly define $\underline{h}_n(P_{X^nY^n}, \epsilon)$ under the assumption that $Z^n$ is also a binary vector. Then we obtain a closed form expression for $\underline{h}_n(P_{X^nY^n}, \epsilon)$ for sufficiently large, but nontrivial values of $\epsilon$.
Labels: cs=1, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Positioning services of a travel agency in social networks
In this paper, methods of building a travel company's customer base by means of social networks are examined. These methods are designed to engage users of the social networks VK.com and Facebook in positioning the services of the travel agency "New Europe" on the Internet, and they draw on the tracked activities and interests of web users. On-line social networks are the main channel of information exchange in the modern network society, and the rapid development and improvement of such information and communication technologies is a key factor in positioning a travel agency brand in the global information space. The absence of time and space restrictions and the speed with which information spreads among the target audience of social networks create the conditions for effective popularization of the travel agency "New Europe" and its services on the Internet.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Inverse Mapping for Rainfall-Runoff Models using History Matching Approach
In this paper, we consider two rainfall-runoff computer models. The first is a Matlab-Simulink model which simulates runoff from a windrow compost pad (located at the Bioconversion Center in Athens, GA) over a period of time based on rainfall events. The second is the Soil and Water Assessment Tool (SWAT), which estimates surface runoff in the Middle Oconee River in Athens, GA. The input parameter spaces of both models are sensitive and high-dimensional, the model output for every input combination is a time series of runoff, and the two computer models generate a wide spectrum of outputs, including some that are far from reality. In order to improve prediction accuracy, we propose to apply a history matching approach for calibrating these hydrological models, which also gives better insights for improved management of these systems.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Geometric SMOTE: Effective oversampling for imbalanced learning through a geometric extension of SMOTE
Classification of imbalanced datasets is a challenging task for standard algorithms. Although many methods exist to address this problem in different ways, generating artificial data for the minority class is a more general approach compared to algorithmic modifications. SMOTE algorithm and its variations generate synthetic samples along a line segment that joins minority class instances. In this paper we propose Geometric SMOTE (G-SMOTE) as a generalization of the SMOTE data generation mechanism. G-SMOTE generates synthetic samples in a geometric region of the input space, around each selected minority instance. While in the basic configuration this region is a hyper-sphere, G-SMOTE allows its deformation to a hyper-spheroid and finally to a line segment, emulating, in the last case, the SMOTE mechanism. The performance of G-SMOTE is compared against multiple standard oversampling algorithms. We present empirical results that show a significant improvement in the quality of the generated data when G-SMOTE is used as an oversampling algorithm.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
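A sketch of one generation step, as a loose reading of the geometric mechanism described above (the parameterization is hypothetical): a point is drawn uniformly in a hyper-sphere around the selected minority instance, and a deformation parameter shrinks it toward the segment joining the instance to its neighbor, recovering SMOTE-like behavior at full deformation.

    import numpy as np

    def g_smote_sample(center, neighbor, deformation=0.0, rng=None):
        rng = rng or np.random.default_rng()
        d = center.size
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                    # uniform direction on the sphere
        radius = rng.random() ** (1.0 / d)        # uniform radius inside the unit ball
        v = radius * u
        axis = neighbor - center
        dist = np.linalg.norm(axis)
        axis /= dist
        v_par = (v @ axis) * axis                 # component along center -> neighbor
        v = v_par + (1.0 - deformation) * (v - v_par)  # deformation=1: line segment
        return center + dist * v

    rng = np.random.default_rng(0)
    c, nb = np.array([0.0, 0.0]), np.array([1.0, 0.0])
    print(g_smote_sample(c, nb, deformation=1.0, rng=rng))  # lies on the x-axis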
Liveness Verification and Synthesis: New Algorithms for Recursive Programs
We consider the problems of liveness verification and liveness synthesis for recursive programs. The liveness verification problem (LVP) is to decide whether a given omega-context-free language is contained in a given omega-regular language. The liveness synthesis problem (LSP) is to compute a strategy so that a given omega-context-free game, when played along the strategy, is guaranteed to derive a word in a given omega-regular language. Both problems are known to be EXPTIME-complete. Our contributions are new algorithms with optimal time complexity. For LVP, we generalize recent lasso-finding algorithms (also known as Ramsey-based algorithms) from finite to recursive programs. For LSP, we generalize a recent summary-based algorithm from finite to infinite words. Lasso finding and summaries have proven to be efficient in a number of implementations for the finite-state and finite-word setting.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A semiparametric approach for bivariate extreme exceedances
Inference over tails is performed by applying only the results of extreme value theory. Whilst such theory is well defined and flexible enough in the univariate case, multivariate inferential methods often require the imposition of arbitrary constraints not fully justified by the underlying theory. In contrast, our approach uses only the constraints imposed by theory. We build on previous, theoretically justified work for marginal exceedances over a high, unknown threshold, by combining it with flexible, semiparametric copula specifications to investigate extreme dependence. Whilst giving probabilistic judgements about the extreme regime of all marginal variables, our approach formally uses the full dataset and allows for a variety of patterns of dependence, whether extremal or not. A new probabilistic criterion quantifying the possibility that the data exhibit asymptotic independence is introduced and its robustness empirically studied. Estimation of functions of interest in extreme value analyses is performed via MCMC algorithms. Attention is also devoted to the prediction of new extreme observations. Our approach is evaluated through a series of simulations, applied to real data sets, and assessed against competing approaches. Evidence demonstrates that the bulk of the data does not bias, and indeed improves, the inferential process for extremal dependence.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Prolongation of SMAP to Spatio-temporally Seamless Coverage of Continental US Using a Deep Learning Neural Network
The Soil Moisture Active Passive (SMAP) mission has delivered valuable sensing of surface soil moisture since 2015. However, it has a short time span and an irregular revisit schedule. Utilizing a state-of-the-art time-series deep learning neural network, Long Short-Term Memory (LSTM), we created a system that predicts SMAP level-3 soil moisture data with atmospheric forcing, model-simulated moisture, and static physiographic attributes as inputs. The system removes most of the bias relative to model simulations and improves the predicted moisture climatology, achieving a small test root-mean-squared error (<0.035) and a high correlation coefficient (>0.87) for over 75\% of the Continental United States, including the forested Southeast. As the first application of LSTM in hydrology, we show the proposed network avoids overfitting and is robust for both temporal and spatial extrapolation tests. LSTM generalizes well across regions with distinct climates and physiography. With high fidelity to SMAP, LSTM shows great potential for hindcasting, data assimilation, and weather forecasting.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Note on Green Function Formalism and Topological Invariants
It has been discovered previously that the topological order parameter can be identified from the topological data of the Green function, namely the (generalized) TKNN invariant in general dimensions, for both non-interacting and interacting systems. In this note, we show that this phenomenon has a clear geometric derivation. This proposal can be regarded as an alternative proof of the identification of the corresponding topological invariant and topological order parameter.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Unsupervised Document Embedding With CNNs
We propose a new model for unsupervised document embedding. Leading existing approaches either require complex inference or use recurrent neural networks (RNNs) that are difficult to parallelize. We take a different route and develop a convolutional neural network (CNN) embedding model. Our CNN architecture is fully parallelizable, resulting in over 10x speedup in inference time over RNN models. The parallelizable architecture enables training deeper models, where each successive layer has an increasingly larger receptive field and models longer-range semantic structure within the document. We additionally propose a fully unsupervised learning algorithm to train this model based on stochastic forward prediction. Empirical results on two public benchmarks show that our approach produces accuracy comparable to the state of the art at a fraction of the computational cost.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
A metric model for the functional architecture of the visual cortex
The purpose of this work is to construct a model for the functional architecture of the primary visual cortex (V1), based on a structure of metric measure space induced by the underlying organization of receptive profiles (RPs) of visual cells. In order to account for the horizontal connectivity of V1 in such a context, a diffusion process compatible with the geometry of the space is defined following the classical approach of K.-T. Sturm. The construction of our distance function neither requires any group parameterization of the family of RPs, nor involves any differential structure. As such, it adapts to non-parameterized sets of RPs, possibly obtained through numerical procedures; it also allows one to model the lateral connectivity arising from non-differential metrics, such as the one induced on a pinwheel surface by a family of filters of vanishing scale. On the other hand, when applied to the classical framework of Gabor filters, this construction yields a distance approximating the sub-Riemannian structure proposed as a model for V1 by G. Citti and A. Sarti [J Math Imaging Vis 24: 307 (2006)], thus showing itself to be consistent with existing cortex models.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
Bayesian parameter identification in Cahn-Hilliard models for biological growth
We consider the inverse problem of parameter estimation in a diffuse interface model for tumour growth. The model consists of a fourth-order Cahn--Hilliard system and contains three phenomenological parameters: the tumour proliferation rate, the nutrient consumption rate, and the chemotactic sensitivity. We study the inverse problem within the Bayesian framework and construct the likelihood and noise for two typical observation settings. One setting involves an infinite-dimensional data space where we observe the full tumour. In the second setting we observe only the tumour volume, hence the data space is finite-dimensional. We show the well-posedness of the posterior measure for both settings, building upon and improving the analytical results in [C. Kahle and K.F. Lam, Appl. Math. Optim. (2018)]. A numerical example involving synthetic data is presented in which the posterior measure is numerically approximated by the Sequential Monte Carlo approach with tempering.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Transport Phase Diagram and Anderson Localization in Hyperuniform Disordered Photonic Materials
Hyperuniform disordered photonic materials (HDPM) are spatially correlated dielectric structures with unconventional optical properties. They can be transparent to long-wavelength radiation while at the same time have isotropic band gaps in another frequency range. This phenomenon raises fundamental questions concerning photon transport through disordered media. While optical transparency is robust against recurrent multiple scattering, little is known about other transport regimes like diffusive multiple scattering or Anderson localization. Here we investigate band gaps, and we report Anderson localization in two-dimensional stealthy HDPM using numerical simulations of the density of states and optical transport statistics. To establish a unified view, we propose a transport phase diagram. Our results show that, depending only on the degree of correlation, a dielectric material can transition from localization behavior to a bandgap crossing an intermediate regime dominated by tunneling between weakly coupled states.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Nearly Maximally Predictive Features and Their Dimensions
Scientific explanation often requires inferring maximally predictive features from a given data set. Unfortunately, the collection of minimal maximally predictive features for most stochastic processes is uncountably infinite. In such cases, one compromises and instead seeks nearly maximally predictive features. Here, we derive upper-bounds on the rates at which the number and the coding cost of nearly maximally predictive features scales with desired predictive power. The rates are determined by the fractal dimensions of a process' mixed-state distribution. These results, in turn, show how widely-used finite-order Markov models can fail as predictors and that mixed-state predictive features offer a substantial improvement.
Labels: cs=1, phy=1, math=0, stat=1, quantitative biology=0, quantitative finance=0
A Survey Of Cross-lingual Word Embedding Models
Cross-lingual representations of words enable us to reason about word meaning in multilingual contexts and are a key facilitator of cross-lingual transfer when developing natural language processing models for low-resource languages. In this survey, we provide a comprehensive typology of cross-lingual word embedding models. We compare their data requirements and objective functions. The recurring theme of the survey is that many of the models presented in the literature optimize for the same objectives, and that seemingly different models are often equivalent modulo optimization strategies, hyper-parameters, and such. We also discuss the different ways cross-lingual word embeddings are evaluated, as well as future challenges and research horizons.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Space-efficient classical and quantum algorithms for the shortest vector problem
A lattice is the integer span of some linearly independent vectors. Lattice problems have many significant applications in coding theory and cryptographic systems for their conjectured hardness. The Shortest Vector Problem (SVP), which is to find the shortest non-zero vector in a lattice, is one of the well-known problems that are believed to be hard to solve, even with a quantum computer. In this paper we propose space-efficient classical and quantum algorithms for solving SVP. Currently the best time-efficient algorithm for solving SVP takes $2^{n+o(n)}$ time and $2^{n+o(n)}$ space. Our classical algorithm takes $2^{2.05n+o(n)}$ time to solve SVP with only $2^{0.5n+o(n)}$ space. We then modify our classical algorithm to a quantum version, which can solve SVP in time $2^{1.2553n+o(n)}$ with $2^{0.5n+o(n)}$ classical space and only poly(n) qubits.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Low Power SI Class E Power Amplifier and RF Switch For Health Care
This research designed a 2.4 GHz class E power amplifier (PA) for health care, in the 0.18 um Semiconductor Manufacturing International Corporation CMOS technology, using Cadence software. An RF switch was also designed in Cadence with the Jazz 180 nm power SOI process. The ultimate goal for such applications is to reach high performance at low cost, balancing high performance against low power consumption in the design. This paper introduces the design of the 2.4 GHz class E power amplifier and the RF switch. The PA consists of a cascade stage with negative capacitance. This power amplifier can deliver 16 dBm of output power to a 50{\Omega} load. The performance of the power amplifier and the switch meets the desired specification requirements.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Improved approximation algorithm for the Dense-3-Subhypergraph Problem
The study of the Dense-$3$-Subhypergraph problem was initiated by Chlamtác et al. [Approx'16]. The input is a universe $U$, a collection ${\cal S}$ of subsets of $U$, each of size $3$, and a number $k$. The goal is to choose a set $W$ of $k$ elements from the universe and maximize the number of sets $S\in {\cal S}$ such that $S\subseteq W$. The members of $U$ are called {\em vertices} and the sets of ${\cal S}$ are called the {\em hyperedges}. This is the simplest extension to hyperedges of the case of sets of size $2$, which is the well-known Dense $k$-Subgraph problem. The best known ratio for Dense-$3$-Subhypergraph is $O(n^{0.69783..})$ by Chlamtác et al. We improve this ratio to $n^{0.61802..}$. More importantly, we give a new algorithm that approximates Dense-$3$-Subhypergraph within a ratio of $\tilde O(n/k)$, which improves the ratio of $O(n^2/k^2)$ of Chlamtác et al. We prove that under the {\em log density conjecture} (see Bhaskara et al. [STOC'10]) the ratio cannot be better than $\Omega(\sqrt{n})$, and we demonstrate some cases in which this optimum can be attained.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
A Survey of Security Assessment Ontologies
A literature survey on ontologies concerning the Security Assessment domain has been carried out to uncover initiatives that aim at formalizing concepts from the Security Assessment field of research. A preliminary analysis and a discussion on the selected works are presented. Our main contribution is an updated literature review, describing key characteristics, results, research issues, and application domains of the papers. We have also detected gaps in the Security Assessment literature that could be the subject of further studies in the field. This work is meant to be useful for security researchers who wish to adopt a formal approach in their methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Distributed Online Learning of Event Definitions
Logic-based event recognition systems infer occurrences of events in time using a set of event definitions in the form of first-order rules. The Event Calculus is a temporal logic that has been used as a basis in event recognition applications, providing, among other things, direct connections to machine learning via Inductive Logic Programming (ILP). OLED is a recently proposed ILP system that learns event definitions in the form of Event Calculus theories, in a single pass over a data stream. In this work we present a version of OLED that allows for distributed, online learning. We evaluate our approach on a benchmark activity recognition dataset and show that we can significantly reduce training times, exchanging minimal information between processing nodes.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Solving Graph Isomorphism Problem for a Special case
Graph isomorphism is an important computer science problem. For the general case, it is unknown whether the problem is solvable in polynomial time, and the best algorithm for the general case works in quasi-polynomial time. Polynomial-time solutions are known for some special classes of graphs. In this work, we consider a special type of graph, propose a method to represent these graphs, and find isomorphisms between them. The method uses a modified version of the degree list of a graph together with the neighbourhood degree list. These graphs have the property that, for every vertex, the neighbourhood degree lists of any two of its immediate neighbours are different. The representation is invariant to the order in which the nodes are selected, making the isomorphism problem trivial for this case. The algorithm works in $O(n^4)$ time, where $n$ is the number of vertices in the graph, which is faster than quasi-polynomial time for the graphs considered in the study.
1
0
0
0
0
0
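As a companion to the graph-isomorphism abstract above, this sketch illustrates the two ingredients it builds on — the degree list and the neighbourhood degree list. The resulting fingerprint is order-invariant, but in general it is only a necessary condition for isomorphism; the paper's restricted graph class is what makes a refined version decisive.

```python
from collections import Counter

def degree_list(adj):
    """Multiset of vertex degrees of a graph given as {vertex: set(neighbours)}."""
    return sorted(len(nbrs) for nbrs in adj.values())

def neighbourhood_degree_lists(adj):
    """For each vertex, the sorted list of its neighbours' degrees."""
    deg = {v: len(nbrs) for v, nbrs in adj.items()}
    return {v: sorted(deg[u] for u in nbrs) for v, nbrs in adj.items()}

def fingerprint(adj):
    """Order-invariant summary: the multiset of neighbourhood degree lists."""
    return Counter(tuple(lst) for lst in neighbourhood_degree_lists(adj).values())

# Two labellings of the same path graph yield identical fingerprints.
g1 = {0: {1}, 1: {0, 2}, 2: {1}}
g2 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b'}}
assert fingerprint(g1) == fingerprint(g2)
```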
Selective inference for effect modification via the lasso
Effect modification occurs when the effect of the treatment on an outcome varies according to the level of other covariates, and it often has important implications for decision making. When there are tens or hundreds of covariates, it becomes necessary to use the observed data to select a simpler model for effect modification and then make valid statistical inference. We propose a two-stage procedure to solve this problem. First, we use Robinson's transformation to decouple the nuisance parameters from the treatment effect of interest and use machine learning algorithms to estimate the nuisance parameters. Next, after plugging in the estimates of the nuisance parameters, we use the lasso to choose a low-complexity model for effect modification. Compared to a full model consisting of all the covariates, the selected model is much more interpretable. Compared to univariate subgroup analyses, the selected model greatly reduces the number of false discoveries. We show that conditional selective inference for the selected model is asymptotically valid given the rate assumptions in classical semiparametric regression. Extensive simulation studies are conducted to verify the asymptotic results, and an epidemiological application is used to demonstrate the method.
0
0
1
1
0
0
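For the effect-modification abstract above, here is a minimal sketch of the two-stage idea (Robinson's residualization, then the lasso) on synthetic data. The estimators and tuning values are illustrative assumptions, and the paper's selective-inference step for the selected model is omitted entirely.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)                # randomized binary treatment
tau = 1.0 + 2.0 * X[:, 0]                       # true effect modified by X[:, 0]
y = X[:, 1] + tau * T + rng.normal(size=n)

# Stage 1 (Robinson's transformation): residualize outcome and treatment.
m_hat = cross_val_predict(RandomForestRegressor(n_estimators=200), X, y, cv=5)
e_hat = np.full(n, T.mean())                    # known propensity in an RCT
y_res, t_res = y - m_hat, T - e_hat

# Stage 2: lasso on residualized treatment-covariate interactions selects
# a sparse model for effect modification.
Z = t_res[:, None] * np.column_stack([np.ones(n), X])
coef = Lasso(alpha=0.05).fit(Z, y_res).coef_
print(np.nonzero(coef)[0])  # ideally the intercept and the X[:, 0] column
```

In observational data, $e(x)$ would itself be estimated by a machine-learning model, exactly as $m(x)$ is here.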
Is Climate Change Controversial? Modeling Controversy as Contention Within Populations
A growing body of research focuses on computationally detecting controversial topics and understanding the stances people hold on them. Yet gaps remain in our theoretical and practical understanding of how to define controversy, how it manifests, and how to measure it. In this paper, we introduce a novel measure we call "contention", defined with respect to a topic and a population. We model contention from a mathematical standpoint. We validate our model by examining a diverse set of sources: real-world polling data sets, actual voter data, and Twitter coverage on several topics. In our publicly-released Twitter data set of nearly 100M tweets, we examine several topics such as Brexit, the 2016 U.S. Elections, and "The Dress", and cross-reference them with other sources. We demonstrate that the contention measure holds explanatory power for a wide variety of observed phenomena, such as controversies over climate change and other topics that are well within scientific consensus. Finally, we re-examine the notion of controversy, and present a theoretical framework that defines it in terms of population. We present preliminary evidence suggesting that contention is one dimension of controversy, along with others, such as "importance". Our new contention measure, along with the hypothesized model of controversy, suggest several avenues for future work in this emerging interdisciplinary research area.
1
1
0
0
0
0
Defending Against Adversarial Attacks by Leveraging an Entire GAN
Recent work has shown that state-of-the-art models are highly vulnerable to adversarial perturbations of the input. We propose cowboy, an approach to detecting and defending against adversarial attacks by using both the discriminator and generator of a GAN trained on the same dataset. We show that the discriminator consistently scores the adversarial samples lower than the real samples across multiple attacks and datasets. We provide empirical evidence that adversarial samples lie outside of the data manifold learned by the GAN. Based on this, we propose a cleaning method which uses both the discriminator and generator of the GAN to project the samples back onto the data manifold. This cleaning procedure is independent of the classifier and type of attack and thus can be deployed in existing systems.
0
0
0
1
0
0
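To accompany the GAN-defense abstract above, here is a PyTorch-style sketch of the described cleaning step: projecting a sample back onto the generator's manifold by searching the latent space. The abstract does not spell out the objective, so the reconstruction-plus-discriminator loss below, and all hyperparameters, are assumptions rather than the paper's recipe.

```python
import torch

def clean(x_adv, G, D, z_dim=100, steps=200, lr=0.05, lam=0.1):
    """Project a (possibly adversarial) batch x_adv onto the GAN's learned
    manifold by optimizing a latent code z; hypothetical objective combining
    a reconstruction term with the discriminator's realness score."""
    z = torch.randn(x_adv.size(0), z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)
        loss = ((x_hat - x_adv) ** 2).mean() - lam * D(x_hat).mean()
        loss.backward()
        opt.step()
    return G(z).detach()  # cleaned sample, fed to the downstream classifier
```

Because the procedure touches only `G` and `D`, it is independent of the classifier and of the attack type, which is the deployment property the abstract emphasizes.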
Spectral Graph Convolutions for Population-based Disease Prediction
Exploiting the wealth of imaging and non-imaging information for disease prediction tasks requires models capable of representing, at the same time, individual features as well as data associations between subjects from potentially large populations. Graphs provide a natural framework for such tasks, yet previous graph-based approaches focus on pairwise similarities without modelling the subjects' individual characteristics and features. On the other hand, relying solely on subject-specific imaging feature vectors fails to model the interaction and similarity between subjects, which can reduce performance. In this paper, we introduce the novel concept of Graph Convolutional Networks (GCN) for brain analysis in populations, combining imaging and non-imaging data. We represent populations as a sparse graph where its vertices are associated with image-based feature vectors and the edges encode phenotypic information. This structure was used to train a GCN model on partially labelled graphs, aiming to infer the classes of unlabelled nodes from the node features and pairwise associations between subjects. We demonstrate the potential of the method on the challenging ADNI and ABIDE databases, as a proof of concept of the benefit from integrating contextual information in classification tasks. This has a clear impact on the quality of the predictions, leading to 69.5% accuracy for ABIDE (outperforming the current state of the art of 66.8%) and 77% for ADNI for prediction of MCI conversion, significantly outperforming standard linear classifiers where only individual features are considered.
1
0
0
1
0
0
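For the population-GCN abstract above, here is the common first-order spectral graph-convolution layer in NumPy as a point of reference; the paper builds on spectral filters over a phenotypic similarity graph, and its exact filter parametrization may differ from this simplified form.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One spectral graph-convolution layer in the common first-order form
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy population graph: 4 subjects, edges encoding phenotypic similarity,
# node features standing in for imaging feature vectors.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 8))   # imaging features
W = np.random.default_rng(1).normal(size=(8, 2))   # layer weights
print(gcn_layer(A, H, W).shape)                    # (4, 2)
```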
An independence system as knot invariant
An independence system (with respect to the unknotting number) is defined for a classical knot diagram. It is proved that the independence system is a knot invariant for alternating knots. The exchange property for minimal unknotting sets is also discussed. It is shown that there exists an infinite family of knot diagrams whose corresponding independence systems are matroids. In contrast, there exist infinite families of knot diagrams whose independence systems are not matroids.
0
0
1
0
0
0
A cavity-induced artificial gauge field in a Bose-Hubbard ladder
We consider theoretically ultracold interacting bosonic atoms confined to quasi-one-dimensional ladder structures formed by optical lattices and coupled to the field of an optical cavity. The atoms can collect a spatial phase imprint during cavity-assisted tunneling along a rung via Raman transitions employing a cavity mode and a transverse running-wave pump beam. By adiabatic elimination of the cavity field we obtain an effective Hamiltonian for the bosonic atoms, with a self-consistency condition. Using the numerical density matrix renormalization group method, we obtain a rich diagram of self-organized steady states. Transitions between superfluid and Mott-insulating states occur, on top of which we can have Meissner, vortex-liquid, and vortex-lattice phases. Additionally, a state that explicitly breaks the symmetry between the two legs of the ladder, namely the biased-ladder phase, is dynamically stabilized.
0
1
0
0
0
0
A Review of Laser-Plasma Ion Acceleration
An overview of research on laser-plasma-based acceleration of ions is given. The experimental state of the art is summarized and recent progress is discussed. The basic acceleration processes are briefly reviewed with an outlook on hybrid mechanisms and novel concepts. Finally, we focus on the development of engineered targets for enhanced acceleration and of all-optical methods for beam post-acceleration and control.
0
1
0
0
0
0
Minimax Optimal Estimators for Additive Scalar Functionals of Discrete Distributions
In this paper, we consider estimators for an additive functional of $\phi$, defined as $\theta(P;\phi)=\sum_{i=1}^k\phi(p_i)$, from $n$ i.i.d. random samples drawn from a discrete distribution $P=(p_1,...,p_k)$ with alphabet size $k$. We propose a minimax optimal estimator for this estimation problem. We reveal that the minimax optimal rate is characterized by the divergence speed of the fourth derivative of $\phi$ when the divergence speed is high. As a result, we show that there is no consistent estimator if the divergence speed of the fourth derivative of $\phi$ is larger than $p^{-4}$. Furthermore, if the divergence speed of the fourth derivative of $\phi$ is $p^{4-\alpha}$ for $\alpha \in (0,1)$, the minimax optimal rate is obtained within a universal multiplicative constant as $\frac{k^2}{(n\ln n)^{2\alpha}} + \frac{k^{2-2\alpha}}{n}$.
1
0
1
1
0
0
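A worked instance for the additive-functional abstract above, taking the power functional as a running example; the calculus below is standard, while how its exponent maps onto the abstract's divergence-speed convention is the paper's result. For $\phi(p)=p^\alpha$ with $\alpha\in(0,1)$,
$$\phi''''(p)=\alpha(\alpha-1)(\alpha-2)(\alpha-3)\,p^{\alpha-4},$$
so $|\phi''''(p)|$ blows up like $p^{\alpha-4}$ as $p\to 0$. Plugged into the rate above, this gives $\frac{k^2}{(n\ln n)^{2\alpha}}+\frac{k^{2-2\alpha}}{n}$ for estimating the power sum $\sum_{i=1}^k p_i^\alpha$.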
Van der Waals Heterostructures Based on Allotropes of Phosphorene and MoSe2
The van der Waals heterostructures of allotropes of phosphorene (${\alpha}$- and $\beta$-P) with MoSe2 (H-, T-, ZT- and SO-MoSe2) are investigated in the framework of state-of-the-art density functional theory. The semiconducting heterostructures, $\beta$-P/H-MoSe2 and ${\alpha}$-P/H-MoSe2, form anti-type structures with type-I and type-II band alignments, respectively, whose bands are tunable with an external electric field. ${\alpha}$-P/ZT-MoSe2 and ${\alpha}$-P/SO-MoSe2 form ohmic semiconductor-metal contacts, while the Schottky barrier in $\beta$-P/T-MoSe2 can be reduced to zero by an external electric field to form an ohmic contact, which is useful for realizing high-performance devices. Simulated STM images of the given heterostructures reveal that ${\alpha}$-P can be used as a capping layer to differentiate between the various allotropes of the underlying MoSe2. The dielectric response of the considered heterostructures is highly anisotropic in terms of lateral and vertical polarization. The tunable electronic and dielectric response of van der Waals phosphorene/MoSe2 heterostructures may find potential applications in the fabrication of optoelectronic devices.
0
1
0
0
0
0
On separated solutions of logistic population equation with harvesting
We provide a surprising answer to a question raised in S. Ahmad and A.C. Lazer [2], and extend the results of that paper.
0
0
1
0
0
0
Nonreciprocal Electromagnetic Scattering from a Periodically Space-Time Modulated Slab and Application to a Quasisonic Isolator
Scattering of obliquely incident electromagnetic waves from periodically space-time modulated slabs is investigated. It is shown that such structures operate as nonreciprocal harmonic generators and spatial-frequency filters. For oblique incidences, low-frequency harmonics are filtered out in the form of surface waves, while high-frequency harmonics are transmitted as space waves. In the quasisonic regime, where the velocity of the space-time modulation is close to the velocity of the electromagnetic waves in the background medium, the incident wave is strongly coupled to space-time harmonics in the forward direction, while in the backward direction it exhibits low coupling to other harmonics. This nonreciprocity is leveraged for the realization of an electromagnetic isolator in the quasisonic regime and is experimentally demonstrated at microwave frequencies.
0
1
0
0
0
0
Story Cloze Ending Selection Baselines and Data Examination
This paper describes two supervised baseline systems for the Story Cloze Test Shared Task (Mostafazadeh et al., 2016a). We first build a classifier using features based on word embeddings and semantic similarity computation. We further implement a neural LSTM system with different encoding strategies that try to model the relation between the story and the provided endings. Our experiments show that a model using representation features, based on averaged word embedding vectors over the words of the given story and of the candidate ending sentences, together with similarity features between the story and candidate ending representations, performed better than the neural models. Our best model achieves an accuracy of 72.42, ranking 3rd in the official evaluation.
1
0
0
0
0
0
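To illustrate the representation features in the Story Cloze abstract above: averaged word-embedding vectors plus a cosine-similarity feature between story and candidate ending. The tiny embedding table here is hypothetical; a real system would load pretrained vectors.

```python
import numpy as np

def avg_embedding(tokens, emb):
    """Mean of the word vectors for the tokens found in the embedding table."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(next(iter(emb.values())).shape)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Hypothetical 3-d embeddings for demonstration only.
emb = {"sue": np.array([.1, .3, .0]), "won": np.array([.7, .1, .2]),
       "lost": np.array([-.6, .2, .1]), "prize": np.array([.5, .2, .3])}
story = avg_embedding("sue entered the contest".split(), emb)
for ending in ["sue won a prize", "sue lost badly"]:
    e = avg_embedding(ending.split(), emb)
    print(ending, "->", round(cosine(story, e), 3))  # similarity feature per ending
```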
Linear Quadratic Optimal Control Problems with Fixed Terminal States and Integral Quadratic Constraints
This paper is concerned with a linear quadratic (LQ, for short) optimal control problem with fixed terminal states and integral quadratic constraints. A Riccati equation with infinite terminal value is introduced, which is uniquely solvable and whose solution can be approximated by the solution for a suitable unconstrained LQ problem with penalized terminal state. Using results from duality theory, the optimal control is explicitly derived by solving the Riccati equation together with an optimal parameter selection problem. It turns out that the optimal control is not only a feedback of the current state, but also a feedback of the target (terminal state). Some examples are presented to illustrate the theory developed.
0
0
1
0
0
0
A new sampling density condition for shift-invariant spaces
Let $X=\{x_i:i\in\mathbb{Z}\}$, $\dots<x_{i-1}<x_i<x_{i+1}<\dots$, be a sampling set which is separated by a constant $\gamma>0$. Under certain conditions on $\phi$, it is proved that if there exists a positive integer $\nu$ such that $$\delta_\nu:=\sup\limits_{i\in\mathbb{Z}}(x_{i+\nu}-x_i)<\dfrac{\nu}{2\pi}\left(\dfrac{c_{k}^2}{M_{2k}}\right)^{\frac{1}{4k}},$$ then every function belonging to a shift-invariant space $V(\phi)$ can be reconstructed stably from its nonuniform sample values $\{f^{(j)}(x_i):j=0,1,\dots, k-1, i\in\mathbb{Z}\}$, where $c_k$ is a Wirtinger-Sobolev constant and $M_{2k}$ is a constant in Bernstein-type inequality of $V(\phi)$. Further, when $k=1$, the maximum gap $\delta_\nu<\nu$ is sharp for certain shift-invariant spaces.
0
0
1
0
0
0
Experimental Design via Generalized Mean Objective Cost of Uncertainty
The mean objective cost of uncertainty (MOCU) quantifies the performance cost of using an operator that is optimal across an uncertainty class of systems as opposed to using an operator that is optimal for a particular system. MOCU-based experimental design selects the experiment that maximally reduces MOCU, thereby gaining the greatest reduction of uncertainty impacting the operational objective. The original formulation applied to finding optimal system operators, where optimality is with respect to a cost function, such as mean-square error, and the prior distribution governing the uncertainty class relates directly to the underlying physical system. Here we provide a generalized MOCU and the corresponding experimental design. We then demonstrate how this new formulation includes as special cases the MOCU-based experimental design methods developed for materials science and genomic networks when there is experimental error. Most importantly, we show that the classical Knowledge Gradient and Efficient Global Optimization experimental design procedures are actually implementations of MOCU-based experimental design under their modeling assumptions.
0
0
0
1
0
0
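As a minimal illustration of the quantity defined in the MOCU abstract above, assume a finite uncertainty class and a finite operator set with a known cost table; MOCU is then a one-line expectation. This toy ignores the generalized formulation and the experimental-design loop.

```python
import numpy as np

# Hypothetical discrete uncertainty class: 3 candidate systems (rows),
# 4 available operators (columns); entry = operational cost.
cost = np.array([[1.0, 2.0, 4.0, 3.0],
                 [3.0, 1.5, 2.0, 4.0],
                 [4.0, 3.0, 1.0, 2.5]])
prior = np.array([0.5, 0.3, 0.2])        # belief over the uncertainty class

robust_op = np.argmin(prior @ cost)      # operator optimal on average
mocu = prior @ (cost[:, robust_op] - cost.min(axis=1))
print(robust_op, mocu)  # cost of acting robustly vs. knowing the system
```

MOCU-based design would then pick the experiment whose expected outcome reduces this value the most.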
Parallelizing Over Artificial Neural Network Training Runs with Multigrid
Artificial neural networks are a popular and effective machine learning technique. Great progress has been made parallelizing the expensive training phase of an individual network, leading to highly specialized pieces of hardware, many based on GPU-type architectures, and more concurrent algorithms such as synthetic gradients. However, the training phase continues to be a bottleneck, where the training data must be processed serially over thousands of individual training runs. This work considers a multigrid reduction in time (MGRIT) algorithm that is able to parallelize over the thousands of training runs and converge to the exact same solution as traditional training would provide. MGRIT was originally developed to provide parallelism for time evolution problems that serially step through a finite number of time-steps. This work recasts the training of a neural network similarly, treating neural network training as an evolution equation that evolves the network weights from one step to the next. Thus, this work concerns distributed computing approaches for neural networks, but is distinct from other approaches which seek to parallelize only over individual training runs. The work concludes with supporting numerical results for two model problems.
1
0
0
0
0
0
Learning the Structure of Generative Models without Labeled Data
Curating labeled training data has become the primary bottleneck in machine learning. Recent frameworks address this bottleneck with generative models to synthesize labels at scale from weak supervision sources. The generative model's dependency structure directly affects the quality of the estimated labels, but selecting a structure automatically without any labeled data is a distinct challenge. We propose a structure estimation method that maximizes the $\ell_1$-regularized marginal pseudolikelihood of the observed data. Our analysis shows that the amount of unlabeled data required to identify the true structure scales sublinearly in the number of possible dependencies for a broad class of models. Simulations show that our method is 100$\times$ faster than a maximum likelihood approach and selects $1/4$ as many extraneous dependencies. We also show that our method provides an average of 1.5 F1 points of improvement over existing, user-developed information extraction applications on real-world data such as PubMed journal abstracts.
1
0
0
1
0
0
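A generic sketch of the idea behind the structure-learning abstract above: maximizing an $\ell_1$-regularized pseudolikelihood reduces, node by node, to sparse logistic regressions of each variable on the rest, whose nonzero coefficients propose dependencies. The toy data and tuning constants are assumptions; the paper's estimator is tailored to generative models for weak supervision.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 2000, 5
# Hypothetical binary outputs where variable 3 copies variable 0 with noise
# (a pairwise dependency to be recovered).
X = rng.integers(0, 2, size=(n, d))
X[:, 3] = np.where(rng.random(n) < 0.9, X[:, 0], 1 - X[:, 0])

edges = set()
for j in range(d):  # l1-regularized pseudolikelihood, node by node
    others = [i for i in range(d) if i != j]
    lr = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
    lr.fit(X[:, others], X[:, j])
    for i, w in zip(others, lr.coef_.ravel()):
        if abs(w) > 1e-6:
            edges.add(tuple(sorted((i, j))))
print(edges)  # expected to contain (0, 3)
```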
Robust Loss Functions under Label Noise for Deep Neural Networks
In many applications of classifier learning, training data suffers from label noise. Deep networks are learned using huge training data where the problem of noisy labels is particularly relevant. The current techniques proposed for learning deep networks under label noise focus on modifying the network architecture and on algorithms for estimating true labels from noisy labels. An alternate approach would be to look for loss functions that are inherently noise-tolerant. For binary classification there exist theoretical results on loss functions that are robust to label noise. In this paper, we provide some sufficient conditions on a loss function so that risk minimization under that loss function would be inherently tolerant to label noise for multiclass classification problems. These results generalize the existing results on noise-tolerant loss functions for binary classification. We study some of the widely used loss functions in deep networks and show that the loss function based on mean absolute value of error is inherently robust to label noise. Thus standard back propagation is enough to learn the true classifier even under label noise. Through experiments, we illustrate the robustness of risk minimization with such loss functions for learning neural networks.
1
0
0
1
0
0
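A toy calculation for the robust-loss abstract above, contrasting categorical cross-entropy with the mean-absolute-error loss it analyzes; for a probability vector, the MAE to a one-hot label reduces to $2(1-p_y)$ and is therefore bounded.

```python
import numpy as np

def cross_entropy(p, y):
    return -np.log(p[y])

def mae(p, y):
    """Absolute error between the one-hot label and the predicted
    distribution; for a probability vector this equals 2 * (1 - p[y])."""
    e = np.zeros_like(p)
    e[y] = 1.0
    return np.abs(e - p).sum()

p = np.array([0.7, 0.2, 0.1])   # classifier's predicted class probabilities
for y in range(3):              # loss if the (possibly flipped) label is y
    print(y, round(cross_entropy(p, y), 2), round(mae(p, y), 2))
# Cross-entropy explodes on low-probability (likely mislabeled) samples,
# letting noisy labels dominate the empirical risk; MAE stays bounded,
# which underlies its robustness to label noise.
```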
A quality model for evaluating and choosing a stream processing framework architecture
Today we have to deal with massive amounts of data (Big Data), and we need to make decisions by choosing an architectural framework to analyze data coming from different areas. This becomes problematic when we want to process these data, and even more so when the data are continuous. To process data, you traditionally have to first receive them, store them, and then query them: this is what we call batch processing. It works well when you process big amounts of data, but it finds its limits when you want fast (or real-time) processing results, for example for financial trades, sensors, user session activity, etc. The solution to this problem is stream processing. In the stream processing approach, data arrive record by record and, rather than being stored, are processed directly, producing results immediately with a latency that may vary in real time. In this paper, we propose an assessment quality model to evaluate and choose stream processing frameworks. We briefly describe different architectural frameworks, such as Kafka, Spark Streaming and Flink, that address stream processing. Using our quality model, we present a decision tree to support engineers in choosing a framework according to the quality aspects. Finally, we evaluate our model with a case study of Twitter and Netflix streaming.
1
0
0
0
0
0
On the Hardness of Inventory Management with Censored Demand Data
We consider a repeated newsvendor problem where the inventory manager has no prior information about the demand, and can access only censored/sales data. In analogy to multi-armed bandit problems, the manager needs to simultaneously "explore" and "exploit" with her inventory decisions, in order to minimize the cumulative cost. We make no probabilistic assumptions---importantly, independence or time stationarity---regarding the mechanism that creates the demand sequence. Our goal is to shed light on the hardness of the problem, and to develop policies that perform well with respect to the regret criterion, that is, the difference between the cumulative cost of a policy and that of the best fixed action/static inventory decision in hindsight, uniformly over all feasible demand sequences. We show that a simple randomized policy, termed the Exponentially Weighted Forecaster, combined with a carefully designed cost estimator, achieves optimal scaling of the expected regret (up to logarithmic factors) with respect to all three key primitives: the number of time periods, the number of inventory decisions available, and the demand support. Through this result, we derive an important insight: the benefit from "information stalking" as well as the cost of censoring are both negligible in this dynamic learning problem, at least with respect to the regret criterion. Furthermore, we modify the proposed policy in order to perform well in terms of the tracking regret, that is, using as benchmark the best sequence of inventory decisions that switches a limited number of times. Numerical experiments suggest that the proposed approach outperforms existing ones (that are tailored to, or facilitated by, time stationarity) on nonstationary demand models. Finally, we extend the proposed approach and its analysis to a "combinatorial" version of the repeated newsvendor problem.
1
0
0
1
0
0
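For the censored-newsvendor abstract above, here is a bare-bones exponential-weights sketch over a grid of inventory levels, with assumptions flagged in comments: it observes uncensored demand (full information), whereas the paper's key contribution is a carefully designed cost estimator for censored sales data, which is not reproduced here.

```python
import numpy as np

def newsvendor_cost(q, d, h=1.0, b=2.0):
    """Holding cost h per leftover unit, backorder cost b per unit short."""
    return h * max(q - d, 0) + b * max(d - q, 0)

def exp_weighted_forecaster(demands, actions, eta=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.ones(len(actions))
    total = 0.0
    for d in demands:
        p = w / w.sum()
        q = actions[rng.choice(len(actions), p=p)]
        total += newsvendor_cost(q, d)
        losses = np.array([newsvendor_cost(a, d) for a in actions])
        w *= np.exp(-eta * losses)   # full-information update, an assumption
        w /= w.max()                 # renormalize to avoid numerical underflow
    return total

demands = np.random.default_rng(1).integers(0, 11, size=500)
print(exp_weighted_forecaster(demands, actions=list(range(11))))
```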
Social Media Analysis For Organizations: Us Northeastern Public And State Libraries Case Study
Social networking sites such as Twitter have provided a great opportunity for organizations such as public libraries to disseminate information for public relations purposes. However, there is a need to analyze vast amounts of social media data. This study presents a computational approach to explore the content of tweets posted by nine public libraries in the northeastern United States of America. In December 2017, this study extracted more than 19,000 tweets from the Twitter accounts of seven state libraries and two urban public libraries. Computational methods were applied to collect the tweets and discover meaningful themes. This paper shows how the libraries have used Twitter to represent their services and provides a starting point for different organizations to evaluate the themes of their public tweets.
1
0
0
1
0
0
Weakly tripotent rings
We study the class of rings $R$ with the property that, for every $x\in R$, at least one of the elements $x$ and $1+x$ is tripotent.
0
0
1
0
0
0
ML for Flood Forecasting at Scale
Effective riverine flood forecasting at scale is hindered by a multitude of factors, most notably the need to rely on human calibration in current methodology, the limited amount of data for a specific location, and the computational difficulty of building continent/global level models that are sufficiently accurate. Machine learning (ML) is primed to be useful in this scenario: learned models often surpass human experts in complex high-dimensional scenarios, and the framework of transfer or multitask learning is an appealing solution for leveraging local signals to achieve improved global performance. We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction.
1
0
0
1
0
0
Spectral proper orthogonal decomposition and its relationship to dynamic mode decomposition and resolvent analysis
We consider the frequency domain form of proper orthogonal decomposition (POD) called spectral proper orthogonal decomposition (SPOD). Spectral POD is derived from a space-time POD problem for statistically stationary flows and leads to modes that each oscillate at a single frequency. This form of POD goes back to the original work of Lumley (Stochastic tools in turbulence, Academic Press, 1970), but has been overshadowed by a space-only form of POD since the 1990s. We clarify the relationship between these two forms of POD and show that SPOD modes represent structures that evolve coherently in space and time while space-only POD modes in general do not. We also establish a relationship between SPOD and dynamic mode decomposition (DMD); we show that SPOD modes are in fact optimally averaged DMD modes obtained from an ensemble DMD problem for stationary flows. Accordingly, SPOD modes represent structures that are dynamic in the same sense as DMD modes but also optimally account for the statistical variability of turbulent flows. Finally, we establish a connection between SPOD and resolvent analysis. The key observation is that the resolvent-mode expansion coefficients must be regarded as statistical quantities to ensure convergent approximations of the flow statistics. When the expansion coefficients are uncorrelated, we show that SPOD and resolvent modes are identical. Our theoretical results and the overall utility of SPOD are demonstrated using two example problems: the complex Ginzburg-Landau equation and a turbulent jet.
0
1
0
0
0
0
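A compact NumPy sketch of the SPOD computation discussed in the abstract above, assuming the standard Welch-style recipe: block FFTs, then an eigendecomposition of the cross-spectral density at each frequency. Windowing, block overlap, and spatial weighting, which production implementations include, are omitted.

```python
import numpy as np

def spod(Q, nblocks=8):
    """Q: snapshot matrix, time x space. Returns per-frequency SPOD
    energies and modes from an eigendecomposition of the CSD matrix."""
    nt, nx = Q.shape
    nfft = nt // nblocks
    blocks = np.stack([np.fft.rfft(Q[i * nfft:(i + 1) * nfft], axis=0)
                       for i in range(nblocks)])     # (block, freq, space)
    energies, modes = [], []
    for f in range(blocks.shape[1]):
        Qf = blocks[:, f, :].T                       # space x block realizations
        S = Qf @ Qf.conj().T / nblocks               # CSD matrix at frequency f
        lam, phi = np.linalg.eigh(S)                 # Hermitian eigenproblem
        energies.append(lam[::-1])
        modes.append(phi[:, ::-1])
    return np.array(energies), modes

Q = np.random.default_rng(0).normal(size=(1024, 16))  # toy snapshot data
energies, modes = spod(Q)
print(energies.shape)  # (number of frequencies, number of modes)
```

The ensemble over blocks plays the role of the statistical ensemble the abstract refers to when interpreting SPOD modes as optimally averaged DMD modes.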
An optimal transportation approach for assessing almost stochastic order
When stochastic dominance $F\leq_{st}G$ does not hold, we can improve agreement to stochastic order by suitably trimming both distributions. In this work we consider the $L_2$-Wasserstein distance, $\mathcal W_2$, to stochastic order of these trimmed versions. Our characterization of that distance naturally leads us to consider a $\mathcal W_2$-based index of disagreement with stochastic order, $\varepsilon_{\mathcal W_2}(F,G)$. We provide asymptotic results allowing us to test $H_0: \varepsilon_{\mathcal W_2}(F,G)\geq \varepsilon_0$ vs $H_a: \varepsilon_{\mathcal W_2}(F,G)<\varepsilon_0$, which, under rejection, would give a statistical guarantee of almost stochastic dominance. We include a simulation study showing good performance of the index under the normal model.
0
0
0
1
0
0
The Effect of Electron Lens as Landau Damping Device on Single Particle Dynamics in HL-LHC
An electron lens can serve as an effective mechanism for suppressing coherent instabilities in high intensity storage rings through nonlinear amplitude dependent betatron tune shift. However, the addition of a strong localized nonlinear focusing element to the accelerator lattice may lead to undesired effects in particle dynamics. We evaluate the effect of a Gaussian electron lens on single particle motion in HL-LHC using numerical tracking simulations, and compare the results to the case when an equal tune spread is generated by conventional octupole magnets.
0
1
0
0
0
0
Search for sterile neutrinos in holographic dark energy cosmology: Reconciling Planck observation with the local measurement of the Hubble constant
We search for sterile neutrinos in the holographic dark energy cosmology by using the latest observational data. To perform the analysis, we employ the current cosmological observations, including the cosmic microwave background temperature power spectrum data from the Planck mission, the baryon acoustic oscillation measurements, the type Ia supernova data, the redshift space distortion measurements, the shear data of weak lensing observation, the Planck lensing measurement, and the latest direct measurement of $H_0$ as well. We show that, compared to the $\Lambda$CDM cosmology, the holographic dark energy cosmology with sterile neutrinos can relieve the tension between the Planck observation and the direct measurement of $H_0$ much better. Once we include the $H_0$ measurement in the global fit, we find that the hint of the existence of sterile neutrinos in the holographic dark energy cosmology can be given. Under the constraint of the all-data combination, we obtain $N_{\rm eff}= 3.76\pm0.26$ and $m_{\nu,\rm sterile}^{\rm eff}< 0.215\,\rm eV$, indicating that the detection of $\Delta N_{\rm eff}>0$ in the holographic dark energy cosmology is at the $2.75\sigma$ level and the massless or very light sterile neutrino is favored by the current observations.
0
1
0
0
0
0
Simulation of high temperature superconductors and experimental validation
In this work, we present a parallel, fully-distributed finite element numerical framework to simulate the low-frequency electromagnetic response of superconducting devices, which allows efficient exploitation of HPC platforms. We select the so-called H-formulation, which uses the magnetic field as a state variable. Nédélec elements (of arbitrary order) are required for an accurate approximation of the H-formulation for modelling electromagnetic fields along interfaces between regions with high-contrast medium properties. An h-adaptive mesh refinement technique customized for Nédélec elements leads to a structured fine mesh in areas of interest, whereas a smart coarsening is obtained in other regions. The composition of a tailored, robust, parallel nonlinear solver completes the exposition of the tools developed to tackle the problem. First, a comparison against experimental data is performed to show the ability of the finite element approximation to model the physical phenomena. Then, a selected state-of-the-art 3D benchmark is reproduced, focusing on the parallel performance of the algorithms.
1
1
0
0
0
0
Hardware Translation Coherence for Virtualized Systems
To improve system performance, modern operating systems (OSes) often undertake activities that require modification of virtual-to-physical page translation mappings. For example, the OS may migrate data between physical frames to defragment memory and enable superpages. The OS may migrate pages of data between heterogeneous memory devices. We refer to all such activities as page remappings. Unfortunately, page remappings are expensive. We show that translation coherence is a major culprit and that systems employing virtualization are especially badly affected by their overheads. In response, we propose hardware translation invalidation and coherence or HATRIC, a readily implementable hardware mechanism to piggyback translation coherence atop existing cache coherence protocols. We perform detailed studies using KVM-based virtualization, showing that HATRIC achieves up to 30% performance and 10% energy benefits, for per-CPU area overheads of 2%. We also quantify HATRIC's benefits on systems running Xen and find up to 33% performance improvements.
1
0
0
0
0
0
MotifMark: Finding Regulatory Motifs in DNA Sequences
The interaction between proteins and DNA is a key driving force in a significant number of biological processes such as transcriptional regulation, repair, recombination, splicing, and DNA modification. The identification of DNA-binding sites and the specificity of target proteins in binding to these regions are two important steps in understanding the mechanisms of these biological activities. A number of high-throughput technologies have recently emerged that try to quantify the affinity between proteins and DNA motifs. Despite their success, these technologies have their own limitations and fall short in precise characterization of motifs, and as a result, require further downstream analysis to extract useful and interpretable information from a haystack of noisy and inaccurate data. Here we propose MotifMark, a new algorithm based on graph theory and machine learning, that can find binding sites on candidate probes and rank their specificity in regard to the underlying transcription factor. We developed a pipeline to analyze experimental data derived from compact universal protein binding microarrays and benchmarked it against two of the most accurate motif search methods. Our results indicate that MotifMark can be a viable alternative technique for prediction of motif from protein binding microarrays and possibly other related high-throughput techniques.
1
0
0
0
0
0
Discovery of Latent 3D Keypoints via End-to-end Geometric Reasoning
This paper presents KeypointNet, an end-to-end geometric reasoning framework to learn an optimal set of category-specific 3D keypoints, along with their detectors. Given a single image, KeypointNet extracts 3D keypoints that are optimized for a downstream task. We demonstrate this framework on 3D pose estimation by proposing a differentiable objective that seeks the optimal set of keypoints for recovering the relative pose between two views of an object. Our model discovers geometrically and semantically consistent keypoints across viewing angles and instances of an object category. Importantly, we find that our end-to-end framework using no ground-truth keypoint annotations outperforms a fully supervised baseline using the same neural network architecture on the task of pose estimation. The discovered 3D keypoints on the car, chair, and plane categories of ShapeNet are visualized at this http URL.
0
0
0
1
0
0
Multi-State Trajectory Approach to Non-Adiabatic Dynamics: General Formalism and the Active State Trajectory Approximation
A general theoretical framework is derived for the recently developed multi-state trajectory (MST) approach from the time-dependent Schrödinger equation, resulting in equations of motion for coupled nuclear-electronic dynamics that are equivalent to Hamiltonian dynamics or the Heisenberg equation based on a new multi-state Meyer-Miller (MM) model. The derived MST formalism incorporates both diabatic and adiabatic representations as limiting cases, and reduces to Ehrenfest or Born-Oppenheimer dynamics in the mean-field or single-state limits, respectively. By quantizing nuclear dynamics to a particular active state, the MST algorithm, unlike standard MM dynamics, does not suffer from the instability caused by negative instantaneous electronic population variables. Furthermore, the multi-state representation of electron-coupled nuclear dynamics, with each state associated with one individual trajectory, presumably captures single-state dynamics better than the mean-field description. The coupled electronic-nuclear coherence is incorporated consistently in the MST framework, with no ad hoc state switches, associated momentum adjustments, or parameters for artificial decoherence, unlike the original or modified surface-hopping treatments. Applying the MST approach to benchmark problems shows reasonably good agreement with exact quantum calculations, and the results in both representations are similar in accuracy. The active state trajectory (AST) approximation of the MST approach provides a consistent interpretation of trajectory surface hopping, predicting the transition probabilities reasonably well for multiple nonadiabatic transitions and conical intersection problems.
0
1
0
0
0
0
Combining Homotopy Methods and Numerical Optimal Control to Solve Motion Planning Problems
This paper presents a systematic approach for computing local solutions to motion planning problems in non-convex environments using numerical optimal control techniques. It extends the range of use of state-of-the-art numerical optimal control tools to problem classes where these tools have previously not been applicable. Today these problems are typically solved using motion planners based on randomized or graph search. The general principle is to define a homotopy that perturbs, or preferably relaxes, the original problem to an easily solved problem. By combining a Sequential Quadratic Programming (SQP) method with a homotopy approach that gradually transforms the problem from a relaxed one to the original one, practically relevant locally optimal solutions to the motion planning problem can be computed. The approach is demonstrated on motion planning problems in challenging 2D and 3D environments, where the presented method significantly outperforms a state-of-the-art open-source optimizing sampling-based planner commonly used as a benchmark.
0
0
1
0
0
0
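To make the homotopy idea from the motion-planning abstract above tangible, here is a toy 2-D sketch that relaxes a circular obstacle to radius zero and grows it back while warm-starting an SQP-type solver. SciPy's SLSQP stands in for the paper's numerical optimal control machinery; the geometry and homotopy schedule are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

N = 20                                             # free waypoints
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
center, radius = np.array([0.5, 0.05]), 0.2        # circular obstacle

def objective(x):
    """Sum of squared segment lengths of the waypoint path."""
    pts = np.vstack([start, x.reshape(-1, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1) ** 2)

x = np.linspace(start, goal, N + 2)[1:-1].ravel()  # straight-line initialization
for t in np.linspace(0.0, 1.0, 6):                 # homotopy: grow the obstacle
    cons = {"type": "ineq",                        # keep waypoints outside t * radius
            "fun": lambda x, t=t: np.linalg.norm(
                x.reshape(-1, 2) - center, axis=1) - t * radius}
    x = minimize(objective, x, constraints=[cons], method="SLSQP").x  # warm start
print(x.reshape(-1, 2)[:3])  # first locally optimal waypoints skirting the obstacle
```

Each solve starts from the previous solution, so the locally optimal path deforms continuously as the relaxed problem is morphed back into the original one.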
High quality atomically thin PtSe2 films grown by molecular beam epitaxy
Atomically thin PtSe2 films have attracted extensive research interests for potential applications in high-speed electronics, spintronics and photodetectors. Obtaining high quality, single crystalline thin films with large size is critical. Here we report the first successful layer-by-layer growth of high quality PtSe2 films by molecular beam epitaxy. Atomically thin films from 1 ML to 22 ML have been grown and characterized by low-energy electron diffraction, Raman spectroscopy and X-ray photoemission spectroscopy. Moreover, a systematic thickness dependent study of the electronic structure is revealed by angle-resolved photoemission spectroscopy (ARPES), and helical spin texture is revealed by spin-ARPES. Our work provides new opportunities for growing large size single crystalline films for investigating the physical properties and potential applications of PtSe2.
0
1
0
0
0
0
NAVREN-RL: Learning to fly in real environment via end-to-end deep reinforcement learning using monocular images
We present NAVREN-RL, an approach to NAVigate an unmanned aerial vehicle in an indoor Real ENvironment via end-to-end deep reinforcement learning (RL). A suitable reward function is designed keeping in mind the cost and weight constraints of a micro drone with a minimum number of sensing modalities. Collection of a small amount of expert data and knowledge-based data aggregation are integrated into the RL process to aid convergence. Experimentation is carried out on a Parrot AR drone in different indoor arenas, and the results are compared with other baseline technologies. We demonstrate how the drone successfully avoids obstacles and navigates across different arenas.
1
0
0
1
0
0
Fast, Accurate and Fully Parallelizable Digital Image Correlation
Digital image correlation (DIC) is a widely used optical metrology technique for surface deformation measurements. DIC relies on nonlinear optimization methods, so an initial guess is quite important due to its influence on the convergence characteristics of the algorithm. In order to obtain a reliable, accurate initial guess, the reliability-guided digital image correlation (RG-DIC) method, which is able to intelligently obtain a reliable initial guess without using time-consuming integer-pixel registration, was proposed. However, RG-DIC and its improved variants are path-dependent and cannot be fully parallelized. Besides, it is highly possible that RG-DIC fails in full-field deformation analysis without manual intervention if the deformation fields contain large areas of discontinuous deformation. Feature-based initial guesses are highly robust but relatively time-consuming. Recently, a path-independent algorithm, the fast Fourier transform-based cross-correlation (FFT-CC) algorithm, was proposed to estimate the initial guess. Complete parallelizability is the major advantage of the FFT-CC algorithm, but it is sensitive to small deformation. Wu et al. proposed an efficient integer-pixel search scheme, but the parameters of this algorithm are set by users empirically. In this technical note, a fully parallelizable DIC method is proposed. Different from the RG-DIC method, the proposed method divides the DIC algorithm into two parts: full-field initial guess estimation and sub-pixel registration. The proposed method has the following benefits: 1) it provides pre-knowledge of the deformation fields; 2) it saves computational time; 3) it reduces error propagation; 4) it integrates with well-established DIC algorithms; 5) it is fully parallelizable.
0
1
0
0
0
0
Discovering the effect of nonlocal payoff calculation on the stability of ESS: Spatial patterns of Hawk-Dove game in metapopulations
The classical idea of an evolutionarily stable strategy (ESS) modeling animal behavior does not involve any spatial dependence. We consider a spatial Hawk-Dove game played by animals in a patchy environment with wrap-around boundaries. We posit that each site contains the same number of individuals. An evolution equation for analyzing the stability of the ESS is found as the mean dynamics of the classical frequency-dependent Moran process, coupled via migration and nonlocal payoff calculation in 1D and 2D habitats. A linear stability analysis of the model is performed, and conditions to observe spatial patterns are investigated. For nearest-neighbour interactions (including von Neumann and Moore neighbourhoods in 2D) we conclude that it is possible to destabilize the ESS of the game and observe pattern formation when the dispersal rate is small enough. We numerically investigate the spatial patterns arising from the replicator equations coupled via nearest-neighbour payoff calculation and dispersal.
0
0
0
0
1
0
Analysis of the measurements of anisotropic a.c. vortex resistivity in tilted magnetic fields
Measurements of the high-frequency complex resistivity in superconductors are a tool often used to obtain the vortex parameters, such as the vortex viscosity, the pinning constant and the depinning frequency. In anisotropic superconductors, the extraction of these quantities from the measurements faces new difficulties due to the tensor nature of the electromagnetic problem. The problem is specifically intricate when the magnetic field is tilted with respect to the crystallographic axes. Partial solutions exist in the free-flux-flow (no pinning) and Campbell (pinning dominated) regimes. In this paper we develop a full tensor model for the vortex motion complex resistivity, including flux-flow, pinning, and creep. We give explicit expressions for the tensors involved. We obtain that, despite the complexity of the physics, some parameters remain scalar in nature. We show that under specific circumstances the directly measured quantities do not reflect the true vortex parameters, and we give procedures to derive the true vortex parameters from measurements taken with arbitrary field orientations. Finally, we discuss the applicability of the angular scaling properties to the measured and transformed vortex parameters and we exploit these properties as a tool to unveil the existence of directional pinning.
0
1
0
0
0
0
Deep Convolutional Neural Network to Detect J-UNIWARD
This paper presents an empirical study on applying convolutional neural networks (CNNs) to detecting J-UNIWARD, one of the most secure JPEG steganographic methods. Experiments guiding the architectural design of the CNNs have been conducted on the JPEG-compressed BOSSBase containing 10,000 covers of size 512x512. Results have verified that both the pooling method and the depth of the CNNs are critical for performance. Results have also shown that a 20-layer CNN, in general, outperforms the most sophisticated feature-based methods, but its advantage gradually diminishes on hard-to-detect cases. To show that the performance generalizes to large-scale databases and to different cover sizes, one experiment has been conducted on the CLS-LOC dataset of ImageNet containing more than one million covers cropped to a unified size of 256x256. The proposed 20-layer CNN has cut the error achieved by a CNN recently proposed for large-scale JPEG steganalysis by 35%. Source code is available via GitHub: this https URL
1
0
0
0
0
0
Observing Power-Law Dynamics of Position-Velocity Correlation in Anomalous Diffusion
In this letter we present a measurement of the phase-space density distribution (PSDD) of ultra-cold Rb atoms performing 1D anomalous diffusion. The PSDD is imaged using a direct tomographic method based on Raman velocity selection. It reveals that the position-velocity correlation function $C_{xv}(t)$ builds up on a timescale related to the initial conditions of the ensemble and then decays asymptotically as a power law. We show that the decay follows a simple scaling theory involving the power-law asymptotic dynamics of position and velocity. The generality of this scaling theory is confirmed using Monte-Carlo simulations of two distinct models of anomalous diffusion.
0
1
0
0
0
0
Modular curves, invariant theory and $E_8$
The $E_8$ root lattice can be constructed from the modular curve $X(13)$ by the invariant theory for the simple group $\text{PSL}(2, 13)$. This gives a different construction of the $E_8$ root lattice. It also gives an explicit construction of the modular curve $X(13)$.
0
0
1
0
0
0
Analysis of Approximate Stochastic Gradient Using Quadratic Constraints and Sequential Semidefinite Programs
We present convergence rate analysis for the approximate stochastic gradient method, where individual gradient updates are corrupted by computation errors. We develop stochastic quadratic constraints to formulate a small linear matrix inequality (LMI) whose feasible set characterizes convergence properties of the approximate stochastic gradient. Based on this LMI condition, we develop a sequential minimization approach to analyze the intricate trade-offs that couple stepsize selection, convergence rate, optimization accuracy, and robustness to gradient inaccuracy. We also analytically solve this LMI condition and obtain theoretical formulas that quantify the convergence properties of the approximate stochastic gradient under various assumptions on the loss functions.
0
0
0
1
0
0
Invariant holomorphic discs in some non-convex domains
We give a description of complex geodesics and we study the structure of stationary discs in some non-convex domains for which complex geodesics are not unique.
0
0
1
0
0
0
MIMIX: a Bayesian Mixed-Effects Model for Microbiome Data from Designed Experiments
Recent advances in bioinformatics have made high-throughput microbiome data widely available, and new statistical tools are required to maximize the information gained from these data. For example, analysis of high-dimensional microbiome data from designed experiments remains an open area in microbiome research. Contemporary analyses work on metrics that summarize collective properties of the microbiome, but such reductions preclude inference on the fine-scale effects of environmental stimuli on individual microbial taxa. Other approaches model the proportions or counts of individual taxa as response variables in mixed models, but these methods fail to account for complex correlation patterns among microbial communities. In this paper, we propose a novel Bayesian mixed-effects model that exploits cross-taxa correlations within the microbiome, a model we call MIMIX (MIcrobiome MIXed model). MIMIX offers global tests for treatment effects, local tests and estimation of treatment effects on individual taxa, quantification of the relative contribution from heterogeneous sources to microbiome variability, and identification of latent ecological subcommunities in the microbiome. MIMIX is tailored to large microbiome experiments using a combination of Bayesian factor analysis to efficiently represent dependence between taxa and Bayesian variable selection methods to achieve sparsity. We demonstrate the model using a simulation experiment and on a 2x2 factorial experiment of the effects of nutrient supplement and herbivore exclusion on the foliar fungal microbiome of $\textit{Andropogon gerardii}$, a perennial bunchgrass, as part of the global Nutrient Network research initiative.
0
0
0
1
0
0
SpatEntropy: Spatial Entropy Measures in R
This article illustrates how to measure the heterogeneity of spatial data presenting a finite number of categories via computation of spatial entropy. The R package SpatEntropy contains functions for the computation of entropy and spatial entropy measures. The extension to spatial entropy measures is a unique feature of SpatEntropy. In addition to the traditional version of Shannon's entropy, the package includes Batty's spatial entropy, O'Neill's entropy, Li and Reynolds' contagion index, Karlstrom and Ceccato's entropy, Leibovici's entropy, Parresol and Edwards' entropy and Altieri's entropy. The package is able to work with both areal and point data. This paper is a general description of SpatEntropy, as well as its necessary theoretical background, and an introduction for new users.
0
0
0
1
0
0
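As a language-agnostic companion to the SpatEntropy abstract above (the package itself is in R), here is a Python sketch of two of the listed quantities on a categorical grid: Shannon's entropy and an O'Neill-style entropy over contiguous-cell category pairs. The definitions are paraphrased from the literature; the package's exact normalizations and neighbourhood conventions may differ.

```python
import numpy as np
from collections import Counter

def shannon(counts):
    """Shannon entropy of a Counter of category (or pair) frequencies."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def oneill_entropy(grid):
    """Entropy of the category pairs realized on horizontally and vertically
    contiguous cells, following the idea behind O'Neill's spatial entropy."""
    pairs = Counter()
    for shifted, original in [(grid[:, 1:], grid[:, :-1]),
                              (grid[1:, :], grid[:-1, :])]:
        for a, b in zip(original.ravel(), shifted.ravel()):
            pairs[tuple(sorted((a, b)))] += 1
    return shannon(pairs)

grid = np.array([[0, 0, 1],
                 [0, 1, 1],
                 [2, 2, 1]])
print(shannon(Counter(grid.ravel().tolist())), oneill_entropy(grid))
```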