Dataset schema (column: type, value range):

ID: int64, 1 to 21k
TITLE: string, length 7 to 239
ABSTRACT: string, length 7 to 2.76k
Computer Science: int64, 0 or 1
Physics: int64, 0 or 1
Mathematics: int64, 0 or 1
Statistics: int64, 0 or 1
Quantitative Biology: int64, 0 or 1
Quantitative Finance: int64, 0 or 1
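Each record below lists its ID, TITLE, ABSTRACT, and a LABELS line naming the subject columns flagged with 1. As a minimal sketch of how such a table could be processed, and assuming the dataset is exported as a CSV file (the file name train.csv and the use of pandas are illustrative assumptions, not part of the dataset itself), the one-hot subject columns can be collapsed into the same kind of label list:

```python
import pandas as pd

# Subject columns from the schema above; each holds a 0/1 indicator.
LABEL_COLUMNS = [
    "Computer Science", "Physics", "Mathematics",
    "Statistics", "Quantitative Biology", "Quantitative Finance",
]

# Hypothetical export of the table; assumed to contain the columns
# ID, TITLE, ABSTRACT plus the six indicator columns.
df = pd.read_csv("train.csv")

# Collapse the one-hot indicators into a list of subject names per record,
# mirroring the LABELS lines used in the records below.
df["LABELS"] = df[LABEL_COLUMNS].apply(
    lambda row: [name for name, flag in row.items() if flag == 1],
    axis=1,
)

print(df[["ID", "TITLE", "LABELS"]].head())
```
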
ID: 16,801
TITLE: Incorporation of prior knowledge of the signal behavior into the reconstruction to accelerate the acquisition of MR diffusion data
Diffusion MRI measurements using hyperpolarized gases are generally acquired during patient breath hold, which yields a compromise between achievable image resolution, lung coverage and number of b-values. In this work, we propose a novel method that accelerates the acquisition of MR diffusion data by undersampling in both spatial and b-value dimensions, thanks to incorporating knowledge about the signal decay into the reconstruction (SIDER). SIDER is compared to total variation (TV) reconstruction by assessing their effect on both the recovery of ventilation images and estimated mean alveolar dimensions (MAD). Both methods are assessed by retrospectively undersampling diffusion datasets of normal volunteers and COPD patients (n=8) for acceleration factors between x2 and x10. TV led to large errors and artefacts for acceleration factors equal to or larger than x5. SIDER improved on TV, presenting lower errors and histograms of MAD closer to those obtained from fully sampled data for acceleration factors up to x10. SIDER preserved image quality at all acceleration factors, but images were slightly smoothed and some details were lost at x10. In conclusion, we have developed and validated a novel compressed sensing method for lung MRI and achieved high acceleration factors, which can be used to increase the amount of data acquired during a breath-hold. This methodology is expected to improve the accuracy of estimated lung microstructure dimensions and widen the possibilities of studying lung diseases with MRI.
LABELS: Computer Science, Physics

ID: 16,802
TITLE: Rabi noise spectroscopy of individual two-level tunneling defects
Understanding the nature of two-level tunneling defects is important for minimizing their disruptive effects in various nano-devices. By exploiting the resonant coupling of these defects to a superconducting qubit, one can probe and coherently manipulate them individually. In this work we utilize a phase qubit to induce Rabi oscillations of single tunneling defects and measure their dephasing rates as a function of the defect's asymmetry energy, which is tuned by an applied strain. The dephasing rates scale quadratically with the external strain and are inversely proportional to the Rabi frequency. These results are analyzed and explained within a model of interacting standard defects, in which pure dephasing of coherent high-frequency (GHz) defects is caused by interaction with incoherent low-frequency thermally excited defects.
LABELS: Physics

ID: 16,803
TITLE: Learning rate adaptation for federated and differentially private learning
We propose an algorithm for the adaptation of the learning rate for stochastic gradient descent (SGD) that avoids the need for a validation set. The idea for the adaptiveness comes from the technique of extrapolation: to get an estimate for the error against the gradient flow which underlies SGD, we compare the result obtained by one full step and two half-steps. The algorithm is applied in two separate frameworks: federated and differentially private learning. Using examples of deep neural networks, we empirically show that the adaptive algorithm is competitive with manually tuned, commonly used optimisation methods for differentially private training. We also show that it works robustly in the case of federated learning, unlike commonly used optimisation methods.
LABELS: Statistics

ID: 16,804
TITLE: Holomorphic Hermite polynomials in two variables
Generalizations of the Hermite polynomials to many variables and/or to the complex domain have been located in mathematical and physical literature for some decades. Polynomials traditionally called complex Hermite ones are mostly understood as polynomials in $z$ and $\bar{z}$ which in fact makes them polynomials in two real variables with complex coefficients. The present paper proposes to investigate for the first time holomorphic Hermite polynomials in two variables. Their algebraic and analytic properties are developed here. While the algebraic properties do not differ too much for those considered so far, their analytic features are based on a kind of non-rotational orthogonality invented by van Eijndhoven and Meyers. Inspired by their invention we merely follow the idea of Bargmann's seminal paper (1961) giving explicit construction of reproducing kernel Hilbert spaces based on those polynomials. "Homotopic" behavior of our new formation culminates in comparing it to the very classical Bargmann space of two variables on one edge and the aforementioned Hermite polynomials in $z$ and $\bar{z}$ on the other. Unlike in the case of Bargmann's basis our Hermite polynomials are not product ones but factorize to it when bonded together with the first case of limit properties leading both to the Bargmann basis and suitable form of the reproducing kernel. Also in the second limit we recover standard results obeyed by Hermite polynomials in $z$ and $\bar{z}$.
LABELS: Mathematics

ID: 16,805
TITLE: Equilibria, information and frustration in heterogeneous network games with conflicting preferences
Interactions between people are the basis on which the structure of our society arises as a complex system and, at the same time, are the starting point of any physical description of it. In the last few years, much theoretical research has addressed this issue by combining the physics of complex networks with a description of interactions in terms of evolutionary game theory. We here take this research a step further by introducing a most salient societal factor such as the individuals' preferences, a characteristic that is key to understand much of the social phenomenology these days. We consider a heterogeneous, agent-based model in which agents interact strategically with their neighbors but their preferences and payoffs for the possible actions differ. We study how such a heterogeneous network behaves under evolutionary dynamics and different strategic interactions, namely coordination games and best shot games. With this model we study the emergence of the equilibria predicted analytically in random graphs under best response dynamics, and we extend this test to unexplored contexts like proportional imitation and scale-free networks. We show that some theoretically predicted equilibria do not arise in simulations with incomplete information, and we demonstrate the importance of the graph topology and the payoff function parameters for some games. Finally, we discuss our results with available experimental evidence on coordination games, showing that our model agrees better with the experiment than standard economic theories, and draw hints as to how to maximize social efficiency in situations of conflicting preferences.
LABELS: Computer Science, Physics

ID: 16,806
TITLE: Scalable Generalized Dynamic Topic Models
Dynamic topic models (DTMs) model the evolution of prevalent themes in literature, online media, and other forms of text over time. DTMs assume that word co-occurrence statistics change continuously and therefore impose continuous stochastic process priors on their model parameters. These dynamical priors make inference much harder than in regular topic models, and also limit scalability. In this paper, we present several new results around DTMs. First, we extend the class of tractable priors from Wiener processes to the generic class of Gaussian processes (GPs). This allows us to explore topics that develop smoothly over time, that have a long-term memory or are temporally concentrated (for event detection). Second, we show how to perform scalable approximate inference in these models based on ideas around stochastic variational inference and sparse Gaussian processes. This way we can train a rich family of DTMs to massive data. Our experiments on several large-scale datasets show that our generalized model allows us to find interesting patterns that were not accessible by previous approaches.
LABELS: Statistics

ID: 16,807
TITLE: Session Types for Orchestrated Interactions
In the setting of the pi-calculus with binary sessions, we aim at relaxing the notion of duality of session types by the concept of retractable compliance developed in contract theory. This leads to extending session types with a new type operator of "speculative selection" including choices not necessarily offered by a compliant partner. We address the problem of selecting successful communicating branches by means of an operational semantics based on orchestrators, which has been shown to be equivalent to the retractable semantics of contracts, but clearly more feasible. A type system, sound with respect to such a semantics, is hence provided.
LABELS: Computer Science

ID: 16,808
TITLE: An Agent-Based Approach for Optimizing Modular Vehicle Fleet Operation
Modularity in military vehicle designs enables on-base assembly, disassembly, and reconfiguration of vehicles, which can be beneficial in promoting fleet adaptability and life cycle cost savings. To properly manage the fleet operation and to control the resupply, demand prediction, and scheduling process, this paper illustrates an agent-based approach customized for highly modularized military vehicle fleets and studies the feasibility and flexibility of modularity for various mission scenarios. Given deterministic field demands with operation stochasticity, we compare the performance of a modular fleet to a conventional fleet in equivalent operation strategies and also compare fleet performance driven by heuristic rules and optimization. Several indicators are selected to quantify the fleet performance, including operation costs, total resupplied resources, and fleet readiness. When the model is implemented for military Joint Tactical Transport System (JTTS) mission, our results indicate that fleet modularity can reduce total resource supplies without significant losses in fleet readiness. The benefits of fleet modularity can also be amplified through a real-time optimized operation strategy. To highlight the feasibility of fleet modularity, a parametric study is performed to show the impacts from working capacity on modular fleet performance. Finally, we provide practical suggestions of modular vehicle designs based on the analysis and other possible usage.
LABELS: Computer Science

ID: 16,809
TITLE: Delta-epsilon functions and uniform continuity on metric spaces
Under certain general conditions, an explicit formula to compute the greatest delta-epsilon function of a continuous function is given. From this formula, a new way to analyze the uniform continuity of a continuous function is given. Several examples illustrating the theory are discussed.
LABELS: Mathematics

ID: 16,810
TITLE: Deterministic Dispersion of Mobile Robots in Dynamic Rings
In this work, we study the problem of dispersion of mobile robots on dynamic rings. The problem of dispersion of $n$ robots on an $n$ node graph, introduced by Augustine and Moses Jr. [1], requires robots to coordinate with each other and reach a configuration where exactly one robot is present on each node. This problem has real world applications and applies whenever we want to minimize the total cost of $n$ agents sharing $n$ resources, located at various places, subject to the constraint that the cost of an agent moving to a different resource is comparatively much smaller than the cost of multiple agents sharing a resource (e.g. smart electric cars sharing recharge stations). The study of this problem also provides indirect benefits to the study of scattering on graphs, the study of exploration by mobile robots, and the study of load balancing on graphs. We solve the problem of dispersion in the presence of two types of dynamism in the underlying graph: (i) vertex permutation and (ii) 1-interval connectivity. We introduce the notion of vertex permutation dynamism and have it mean that for a given set of nodes, in every round, the adversary ensures a ring structure is maintained, but the connections between the nodes may change. We use the idea of 1-interval connectivity from Di Luna et al. [10], where for a given ring, in each round, the adversary chooses at most one edge to remove. We assume robots have full visibility and present asymptotically time optimal algorithms to achieve dispersion in the presence of both types of dynamism when robots have chirality. When robots do not have chirality, we present asymptotically time optimal algorithms to achieve dispersion subject to certain constraints. Finally, we provide impossibility results for dispersion when robots have no visibility.
LABELS: Computer Science

ID: 16,811
TITLE: A brain signature highly predictive of future progression to Alzheimer's dementia
Early prognosis of Alzheimer's dementia is hard. Mild cognitive impairment (MCI) typically precedes Alzheimer's dementia, yet only a fraction of MCI individuals will progress to dementia, even when screened using biomarkers. We propose here to identify a subset of individuals who share a common brain signature highly predictive of oncoming dementia. This signature was composed of brain atrophy and functional dysconnectivity and discovered using a machine learning model in patients suffering from dementia. The model recognized the same brain signature in MCI individuals, 90% of which progressed to dementia within three years. This result is a marked improvement on the state-of-the-art in prognostic precision, while the brain signature still identified 47% of all MCI progressors. We thus discovered a sizable MCI subpopulation which represents an excellent recruitment target for clinical trials at the prodromal stage of Alzheimer's disease.
LABELS: Statistics

ID: 16,812
TITLE: Deep scattering transform applied to note onset detection and instrument recognition
Automatic Music Transcription (AMT) is one of the oldest and most well-studied problems in the field of music information retrieval. Within this challenging research field, onset detection and instrument recognition take important places in transcription systems, as they respectively help to determine exact onset times of notes and to recognize the corresponding instrument sources. The aim of this study is to explore the usefulness of multiscale scattering operators for these two tasks on plucked string instrument and piano music. After reviewing the theoretical background and illustrating the key features of this sound representation method, we evaluate its performance in comparison with other classical sound representations. Using both MIDI-driven datasets with real instrument samples and real musical pieces, scattering is shown to outperform other sound representations on these AMT subtasks, highlighting its richer sound representation and invariance properties.
LABELS: Computer Science, Statistics

ID: 16,813
TITLE: Gaschütz Lemma for Compact Groups
We prove the Gaschütz Lemma holds for all metrisable compact groups.
LABELS: Mathematics

ID: 16,814
TITLE: Driven flow with exclusion and spin-dependent transport in graphenelike structures
We present a simplified description for spin-dependent electronic transport in honeycomb-lattice structures with spin-orbit interactions, using generalizations of the stochastic non-equilibrium model known as the totally asymmetric simple exclusion process. Mean field theory and numerical simulations are used to study currents, density profiles and current polarization in quasi-one-dimensional systems with open boundaries, and externally-imposed particle injection ($\alpha$) and ejection ($\beta$) rates. We investigate the influence of allowing for double site occupancy, according to Pauli's exclusion principle, on the behavior of the quantities of interest. We find that double occupancy shows strong signatures for specific combinations of rates, namely high $\alpha$ and low $\beta$, but otherwise its effects are quantitatively suppressed. Comments are made on the possible relevance of the present results to experiments on suitably doped graphenelike structures.
LABELS: Physics

ID: 16,815
TITLE: MOG: Mapper on Graphs for Relationship Preserving Clustering
The interconnected nature of graphs often results in difficult-to-interpret clutter. Typically, techniques focus either on decluttering by clustering nodes with similar properties or on grouping edges with similar relationships. We propose using mapper, a powerful topological data analysis tool, to summarize the structure of a graph in a way that both clusters data with similar properties and preserves relationships. Typically, mapper operates on given data by utilizing a scalar function defined on every point in the data and a cover of the scalar function's codomain. The output of mapper is a graph that summarizes the shape of the space. In this paper, we outline how to use this mapper construction on an input graph, outline three filter functions that capture important structures of the input graph, and provide an interface for interactively modifying the cover. To validate our approach, we conduct several case studies on synthetic and real-world data sets and demonstrate how our method can give meaningful summaries for graphs with various complexities.
LABELS: Statistics

ID: 16,816
TITLE: Variation Evolving for Optimal Control Computation, A Compact Way
A compact version of the Variation Evolving Method (VEM) is developed for the optimal control computation. It follows the idea that originates from the continuous-time dynamics stability theory in the control field. The optimal solution is analogized to the equilibrium point of a dynamic system and is anticipated to be obtained in an asymptotically evolving way. With the introduction of a virtual dimension, the variation time, the Evolution Partial Differential Equation (EPDE), which describes the variation motion towards the optimal solution, is deduced from the Optimal Control Problem (OCP), and the equivalent optimality conditions with no employment of costates are established. In particular, it is found that theoretically the analytic feedback optimal control law does not exist for general OCPs because the optimal control is related to the future state. Since the derived EPDE is suitable to be solved with the semi-discrete method in the field of PDE numerical calculation, the resulting Initial-value Problems (IVPs) may be solved with mature Ordinary Differential Equation (ODE) numerical integration methods.
LABELS: Computer Science

ID: 16,817
TITLE: Transforming Sensor Data to the Image Domain for Deep Learning - an Application to Footstep Detection
Convolutional Neural Networks (CNNs) have become the state-of-the-art in various computer vision tasks, but they are still premature for most sensor data, especially in pervasive and wearable computing. A major reason for this is the limited amount of annotated training data. In this paper, we propose the idea of leveraging the discriminative power of pre-trained deep CNNs on 2-dimensional sensor data by transforming the sensor modality to the visual domain. By three proposed strategies, 2D sensor output is converted into pressure distribution imageries. Then we utilize a pre-trained CNN for transfer learning on the converted imagery data. We evaluate our method on a gait dataset of floor surface pressure mapping. We obtain a classification accuracy of 87.66%, which outperforms the conventional machine learning methods by over 10%.
LABELS: Computer Science

ID: 16,818
TITLE: The Price of Differential Privacy For Online Learning
We design differentially private algorithms for the problem of online linear optimization in the full information and bandit settings with optimal $\tilde{O}(\sqrt{T})$ regret bounds. In the full-information setting, our results demonstrate that $\epsilon$-differential privacy may be ensured for free -- in particular, the regret bounds scale as $O(\sqrt{T})+\tilde{O}\left(\frac{1}{\epsilon}\right)$. For bandit linear optimization, and as a special case, for non-stochastic multi-armed bandits, the proposed algorithm achieves a regret of $\tilde{O}\left(\frac{1}{\epsilon}\sqrt{T}\right)$, while the previously known best regret bound was $\tilde{O}\left(\frac{1}{\epsilon}T^{\frac{2}{3}}\right)$.
LABELS: Computer Science, Statistics

ID: 16,819
TITLE: Simulation chain and signal classification for acoustic neutrino detection in seawater
Acoustic neutrino detection is a promising approach to extend the energy range of neutrino telescopes to energies beyond $10^{18}$\,eV. Currently operational and planned water-Cherenkov neutrino telescopes, most notably KM3NeT, include acoustic sensors in addition to the optical ones. These acoustic sensors could be used as instruments for acoustic detection, while their main purpose is the position calibration of the detection units. In this article, a Monte Carlo simulation chain for acoustic detectors will be presented, covering the initial interaction of the neutrino up to the signal classification of recorded events. The ambient and transient background in the simulation was implemented according to data recorded by the acoustic set-up AMADEUS inside the ANTARES detector. The effects of refraction on the neutrino signature in the detector are studied, and a classification of the recorded events is implemented. As bipolar waveforms similar to those of the expected neutrino signals are also emitted from other sound sources, additional features like the geometrical shape of the propagation have to be considered for the signal classification. This leads to a large improvement of the background suppression by almost two orders of magnitude, since a flat cylindrical "pancake" propagation pattern is a distinctive feature of neutrino signals. An overview of the simulation chain and the signal classification will be presented and preliminary studies of the performance of the classification will be discussed.
LABELS: Physics

ID: 16,820
TITLE: Parameter Space Noise for Exploration
Deep reinforcement learning (RL) methods generally engage in exploratory behavior through noise injection in the action space. An alternative is to add noise directly to the agent's parameters, which can lead to more consistent exploration and a richer set of behaviors. Methods such as evolutionary strategies use parameter perturbations, but discard all temporal structure in the process and require significantly more samples. Combining parameter noise with traditional RL methods allows us to combine the best of both worlds. We demonstrate that both off- and on-policy methods benefit from this approach through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks. Our results show that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually.
LABELS: Computer Science, Statistics

ID: 16,821
TITLE: Deep Illumination: Approximating Dynamic Global Illumination with Generative Adversarial Network
We present Deep Illumination, a novel machine learning technique for approximating global illumination (GI) in real-time applications using a Conditional Generative Adversarial Network. Our primary focus is on generating indirect illumination and soft shadows with offline rendering quality at interactive rates. Inspired by recent advancements in image-to-image translation problems using deep generative convolutional networks, we introduce a variant of this network that learns a mapping from Gbuffers (depth map, normal map, and diffuse map) and direct illumination to any global illumination solution. Our primary contribution is showing that a generative model can be used to learn a density estimation from screen space buffers to an advanced illumination model for a 3D environment. Once trained, our network can approximate global illumination for scene configurations it has never encountered before within the environment it was trained on. We evaluate Deep Illumination through a comparison with both a state-of-the-art real-time GI technique (VXGI) and an offline rendering GI technique (path tracing). We show that our method produces effective GI approximations and is also computationally cheaper than existing GI techniques. Our technique has the potential to replace existing precomputed and screen-space techniques for producing global illumination effects in dynamic scenes with physically-based rendering quality.
LABELS: Computer Science

ID: 16,822
TITLE: Fraternal Dropout
Recurrent neural networks (RNNs) are an important class of neural network architectures for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in the literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to the dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the training and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets - Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.
LABELS: Computer Science, Statistics

ID: 16,823
TITLE: Finite-sample bounds for the multivariate Behrens-Fisher distribution with proportional covariances
The Behrens-Fisher problem is a well-known hypothesis testing problem in statistics concerning two-sample mean comparison. In this article, we confirm one conjecture in Eaton and Olshen (1972), which provides stochastic bounds for the multivariate Behrens-Fisher test statistic under the null hypothesis. We also extend their results on the stochastic ordering of random quotients to the arbitrary finite dimensional case. This work can also be seen as a generalization of Hsu (1938) that provided the bounds for the univariate Behrens-Fisher problem. The results obtained in this article can be used to derive a testing procedure for the multivariate Behrens-Fisher problem that strongly controls the Type I error.
LABELS: Mathematics, Statistics

ID: 16,824
TITLE: Evidence for mixed rationalities in preference formation
Understanding the mechanisms underlying the formation of cultural traits, such as preferences, opinions and beliefs is an open challenge. Trait formation is intimately connected to cultural dynamics, which has been the focus of a variety of quantitative models. Recently, some studies have emphasized the importance of connecting those models to snapshots of cultural dynamics that are empirically accessible. By analyzing data obtained from different sources, it has been suggested that culture has properties that are universally present, and that empirical cultural states differ systematically from randomized counterparts. Hence, a question about the mechanism responsible for the observed patterns naturally arises. This study proposes a stochastic structural model for generating cultural states that retain those robust, empirical properties. One ingredient of the model, already used in previous work, assumes that every individual's set of traits is partly dictated by one of several, universal "rationalities", informally postulated by several social science theories. The second, new ingredient taken from the same theories assumes that, apart from a dominant rationality, each individual also has a certain exposure to the other rationalities. It is shown that both ingredients are required for reproducing the empirical regularities. This key result suggests that the effects of cultural dynamics in the real world can be described as an interplay of multiple, mixing rationalities, and thus provides indirect evidence for the class of social science theories postulating such mixing. The model should be seen as a static, effective description of culture, while a dynamical, more fundamental description is left for future research.
LABELS: Computer Science, Physics

ID: 16,825
TITLE: A Variance Maximization Criterion for Active Learning
Active learning aims to train a classifier as fast as possible with as few labels as possible. The core element in virtually any active learning strategy is the criterion that measures the usefulness of the unlabeled data based on which new points to be labeled are picked. We propose a novel approach which we refer to as maximizing variance for active learning or MVAL for short. MVAL measures the value of unlabeled instances by evaluating the rate of change of output variables caused by changes in the next sample to be queried and its potential labelling. In a sense, this criterion measures how unstable the classifier's output is for the unlabeled data points under perturbations of the training data. MVAL maintains, what we refer to as, retraining information matrices to keep track of these output scores and exploits two kinds of variance to measure the informativeness and representativeness, respectively. By fusing these variances, MVAL is able to select the instances which are both informative and representative. We employ our technique both in combination with logistic regression and support vector machines and demonstrate that MVAL achieves state-of-the-art performance in experiments on a large number of standard benchmark datasets.
LABELS: Computer Science, Statistics

ID: 16,826
TITLE: Polarization, plasmon, and Debye screening in doped 3D ani-Weyl semimetal
We compute the polarization function in a doped three-dimensional anisotropic-Weyl semimetal, in which the fermion energy dispersion is linear in two components of the momenta and quadratic in the third. Through detailed calculations, we find that the long wavelength plasmon mode depends on the fermion density $n_e$ in the form $\Omega_{p}^{\bot}\propto n_{e}^{3/10}$ within the basal plane and behaves as $\Omega_{p}^{z}\propto n_{e}^{1/2}$ along the third direction. This unique characteristic of the plasmon mode can be probed by various experimental techniques, such as electron energy-loss spectroscopy. The Debye screening at finite chemical potential and finite temperature is also analyzed based on the polarization function.
LABELS: Physics

ID: 16,827
TITLE: Identifying Product Order with Restricted Boltzmann Machines
Unsupervised machine learning via a restricted Boltzmann machine is a useful tool in distinguishing an ordered phase from a disordered phase. Here we study its application on the two-dimensional Ashkin-Teller model, which features a partially ordered product phase. We train the neural network with spin configuration data generated by Monte Carlo simulations and show that distinct features of the product phase can be learned from non-ergodic samples resulting from symmetry breaking. Careful analysis of the weight matrices inspires us to define a nontrivial machine-learning-motivated quantity of the product form, which resembles the conventional product order parameter.
LABELS: Physics

ID: 16,828
TITLE: A finite temperature study of ideal quantum gases in the presence of one dimensional quasi-periodic potential
We study the thermodynamics of ideal Bose gas as well as the transport properties of non interacting bosons and fermions in a one dimensional quasi-periodic potential, namely Aubry-André (AA) model at finite temperature. For bosons in finite size systems, the effect of quasi-periodic potential on the crossover phenomena corresponding to Bose-Einstein condensation (BEC), superfluidity and localization phenomena at finite temperatures are investigated. From the ground state number fluctuation we calculate the crossover temperature of BEC which exhibits a non monotonic behavior with the strength of AA potential and vanishes at the self-dual critical point following power law. Appropriate rescaling of the crossover temperatures reveals universal behavior which is studied for different quasi-periodicity of the AA model. Finally, we study the temperature and flux dependence of the persistent current of fermions in presence of a quasi-periodic potential to identify the localization at the Fermi energy from the decay of the current.
LABELS: Physics

ID: 16,829
TITLE: High-Frequency Analysis of Effective Interactions and Bandwidth for Transient States after Monocycle Pulse Excitation of Extended Hubbard Model
Using a high-frequency expansion in periodically driven extended Hubbard models, where the strengths and ranges of density-density interactions are arbitrary, we obtain the effective interactions and bandwidth, which depend sensitively on the polarization of the driving field. Then, we numerically calculate modulations of correlation functions in a quarter-filled extended Hubbard model with nearest-neighbor interactions on a triangular lattice with trimers after monocycle pulse excitation. We discuss how the resultant modulations are compatible with the effective interactions and bandwidth derived above on the basis of their dependence on the polarization of photoexcitation, which is easily accessible by experiments. Some correlation functions after monocycle pulse excitation are consistent with the effective interactions, which are weaker or stronger than the original ones. However, the photoinduced enhancement of anisotropic charge correlations previously discussed for the three-quarter-filled organic conductor $\alpha$-(bis[ethylenedithio]-tetrathiafulvalene)$_2$I$_3$ [$\alpha$-(BEDT-TTF)$_2$I$_3$] in the metallic phase is not fully explained by the effective interactions or bandwidth, which are derived independently of the filling.
LABELS: Physics

ID: 16,830
TITLE: Fast binary embeddings, and quantized compressed sensing with structured matrices
This paper deals with two related problems, namely distance-preserving binary embeddings and quantization for compressed sensing . First, we propose fast methods to replace points from a subset $\mathcal{X} \subset \mathbb{R}^n$, associated with the Euclidean metric, with points in the cube $\{\pm 1\}^m$ and we associate the cube with a pseudo-metric that approximates Euclidean distance among points in $\mathcal{X}$. Our methods rely on quantizing fast Johnson-Lindenstrauss embeddings based on bounded orthonormal systems and partial circulant ensembles, both of which admit fast transforms. Our quantization methods utilize noise-shaping, and include Sigma-Delta schemes and distributed noise-shaping schemes. The resulting approximation errors decay polynomially and exponentially fast in $m$, depending on the embedding method. This dramatically outperforms the current decay rates associated with binary embeddings and Hamming distances. Additionally, it is the first such binary embedding result that applies to fast Johnson-Lindenstrauss maps while preserving $\ell_2$ norms. Second, we again consider noise-shaping schemes, albeit this time to quantize compressed sensing measurements arising from bounded orthonormal ensembles and partial circulant matrices. We show that these methods yield a reconstruction error that again decays with the number of measurements (and bits), when using convex optimization for reconstruction. Specifically, for Sigma-Delta schemes, the error decays polynomially in the number of measurements, and it decays exponentially for distributed noise-shaping schemes based on beta encoding. These results are near optimal and the first of their kind dealing with bounded orthonormal systems.
LABELS: Statistics

ID: 16,831
TITLE: The Many Faces of Link Fraud
Most past work on social network link fraud detection tries to separate genuine users from fraudsters, implicitly assuming that there is only one type of fraudulent behavior. But is this assumption true? And, in either case, what are the characteristics of such fraudulent behaviors? In this work, we set up honeypots ("dummy" social network accounts), and buy fake followers (after careful IRB approval). We report the signs of such behaviors including oddities in local network connectivity, account attributes, and similarities and differences across fraud providers. Most valuably, we discover and characterize several types of fraud behaviors. We discuss how to leverage our insights in practice by engineering strongly performing entropy-based features and demonstrating high classification accuracy. Our contributions are (a) instrumentation: we detail our experimental setup and carefully engineered data collection process to scrape Twitter data while respecting API rate-limits, (b) observations on fraud multimodality: we analyze our honeypot fraudster ecosystem and give surprising insights into the multifaceted behaviors of these fraudster types, and (c) features: we propose novel features that give strong (>0.95 precision/recall) discriminative power on ground-truth Twitter data.
LABELS: Computer Science

ID: 16,832
TITLE: Diophantine approximation by special primes
We show that whenever $\delta>0$, $\eta$ is real and constants $\lambda_i$ satisfy some necessary conditions, there are infinitely many prime triples $p_1,\, p_2,\, p_3$ satisfying the inequality $|\lambda_1p_1 + \lambda_2p_2 + \lambda_3p_3+\eta|<(\max p_j)^{-1/12+\delta}$ and such that, for each $i\in\{1,2,3\}$, $p_i+2$ has at most $28$ prime factors.
LABELS: Mathematics

ID: 16,833
TITLE: A Compositional Treatment of Iterated Open Games
Compositional Game Theory is a new, recently introduced model of economic games based upon the computer science idea of compositionality. In it, complex and irregular games can be built up from smaller and simpler games, and the equilibria of these complex games can be defined recursively from the equilibria of their simpler subgames. This paper extends the model by providing a final coalgebra semantics for infinite games. In the course of this, we introduce a new operator on games to model the economic concept of subgame perfection.
LABELS: Computer Science

ID: 16,834
TITLE: Bayesian inference for spectral projectors of covariance matrix
Let $X_1, \ldots, X_n$ be i.i.d. sample in $\mathbb{R}^p$ with zero mean and the covariance matrix $\mathbf{\Sigma^*}$. The classic principal component analysis estimates the projector $\mathbf{P^*_{\mathcal{J}}}$ onto the direct sum of some eigenspaces of $\mathbf{\Sigma^*}$ by its empirical counterpart $\mathbf{\widehat{P}_{\mathcal{J}}}$. Recent papers [Koltchinskii, Lounici (2017)], [Naumov et al. (2017)] investigate the asymptotic distribution of the Frobenius distance between the projectors $\| \mathbf{\widehat{P}_{\mathcal{J}}} - \mathbf{P^*_{\mathcal{J}}} \|_2$. The problem arises when one tries to build a confidence set for the true projector effectively. We consider the problem from Bayesian perspective and derive an approximation for the posterior distribution of the Frobenius distance between projectors. The derived theorems hold true for non-Gaussian data: the only assumption that we impose is the concentration of the sample covariance $\mathbf{\widehat{\Sigma}}$ in a vicinity of $\mathbf{\Sigma^*}$. The obtained results are applied to construction of sharp confidence sets for the true projector. Numerical simulations illustrate good performance of the proposed procedure even on non-Gaussian data in quite challenging regime.
LABELS: Mathematics, Statistics

ID: 16,835
TITLE: Handling Incomplete Heterogeneous Data using VAEs
Variational autoencoders (VAEs), as well as other generative models, have been shown to be efficient and accurate at capturing the latent structure of vast amounts of complex high-dimensional data. However, existing VAEs still cannot directly handle data that are heterogeneous (mixed continuous and discrete) or incomplete (with missing data at random), which is indeed common in real-world applications. In this paper, we propose a general framework to design VAEs suitable for fitting incomplete heterogeneous data. The proposed HI-VAE includes likelihood models for real-valued, positive real-valued, interval, categorical, ordinal and count data, and allows one to estimate (and potentially impute) missing data accurately. Furthermore, HI-VAE presents competitive predictive performance in supervised tasks, outperforming supervised models when trained on incomplete data.
LABELS: Statistics

ID: 16,836
TITLE: Geometric mean of probability measures and geodesics of Fisher information metric
The space of all probability measures having positive density function on a connected compact smooth manifold $M$, denoted by $\mathcal{P}(M)$, carries the Fisher information metric $G$. We define the geometric mean of probability measures by the aid of which we investigate information geometry of $\mathcal{P}(M)$, equipped with $G$. We show that a geodesic segment joining arbitrary probability measures $\mu_1$ and $\mu_2$ is expressed by using the normalized geometric mean of its endpoints. As an application, we show that any two points of $\mathcal{P}(M)$ can be joined by a geodesic. Moreover, we prove that the function $\ell$ defined by $\ell(\mu_1, \mu_2):=2\arccos\int_M \sqrt{p_1\,p_2}\,d\lambda$, $\mu_i=p_i\,\lambda$, $i=1,2$ gives the distance function on $\mathcal{P}(M)$. It is shown that geodesics are all minimal.
LABELS: Mathematics

ID: 16,837
TITLE: On automorphism groups of Toeplitz subshifts
In this article we study automorphisms of Toeplitz subshifts. Such groups are abelian and any finitely generated torsion subgroup is finite and cyclic. When the complexity is non superlinear, we prove that the automorphism group is, modulo a finite cyclic group, generated by a unique root of the shift. In the subquadratic complexity case, we show that the automorphism group modulo the torsion is generated by the roots of the shift map and that the result of the non superlinear case is optimal. Namely, for any $\varepsilon > 0$ we construct examples of minimal Toeplitz subshifts with complexity bounded by $C n^{1+\epsilon}$ whose automorphism groups are not finitely generated. Finally, we observe the coalescence and the automorphism group give no restriction on the complexity since we provide a family of coalescent Toeplitz subshifts with positive entropy such that their automorphism groups are arbitrary finitely generated infinite abelian groups with cyclic torsion subgroup (eventually restricted to powers of the shift).
LABELS: Mathematics

ID: 16,838
TITLE: How to Generate Pseudorandom Permutations Over Other Groups
Recent results by Alagic and Russell have given some evidence that the Even-Mansour cipher may be secure against quantum adversaries with quantum queries, if considered over other groups than $(\mathbb{Z}/2)^n$. This prompts the question as to whether or not other classical schemes may be generalized to arbitrary groups and whether classical results still apply to those generalized schemes. In this thesis, we generalize the Even-Mansour cipher and the Feistel cipher. We show that Even and Mansour's original notions of secrecy are obtained on a one-key, group variant of the Even-Mansour cipher. We generalize the result by Kilian and Rogaway, that the Even-Mansour cipher is pseudorandom, to super pseudorandomness, also in the one-key, group case. Using a Slide Attack we match the bound found above. After generalizing the Feistel cipher to arbitrary groups we resolve an open problem of Patel, Ramzan, and Sundaram by showing that the 3-round Feistel cipher over an arbitrary group is not super pseudorandom. We generalize a result by Gentry and Ramzan showing that the Even-Mansour cipher can be implemented using the Feistel cipher as the public permutation. In this result, we also consider the one-key case over a group and generalize their bound. Finally, we consider Zhandry's result on quantum pseudorandom permutations, showing that his result may be generalized to hold for arbitrary groups. In this regard, we consider whether certain card shuffles may be generalized as well.
LABELS: Computer Science, Mathematics

ID: 16,839
TITLE: Measures of Tractography Convergence
In the present work, we use information theory to understand the empirical convergence rate of tractography, a widely-used approach to reconstruct anatomical fiber pathways in the living brain. Based on diffusion MRI data, tractography is the starting point for many methods to study brain connectivity. Of the available methods to perform tractography, most reconstruct a finite set of streamlines, or 3D curves, representing probable connections between anatomical regions, yet relatively little is known about how the sampling of this set of streamlines affects downstream results, and how exhaustive the sampling should be. Here we provide a method to measure the information theoretic surprise (self-cross entropy) for tract sampling schema. We then empirically assess four streamline methods. We demonstrate that the relative information gain is very low after a moderate number of streamlines have been generated for each tested method. The results give rise to several guidelines for optimal sampling in brain connectivity analyses.
LABELS: Statistics, Quantitative Biology

ID: 16,840
TITLE: Network Flow Based Post Processing for Sales Diversity
Collaborative filtering is a broad and powerful framework for building recommendation systems that has seen widespread adoption. Over the past decade, the propensity of such systems for favoring popular products and thus creating echo chambers has been observed. This has given rise to an active area of research that seeks to diversify recommendations generated by such algorithms. We address the problem of increasing diversity in recommendation systems that are based on collaborative filtering, which uses past ratings to predict a rating quality for potential recommendations. Following our earlier work, we formulate recommendation system design as a subgraph selection problem from a candidate super-graph of potential recommendations where both diversity and rating quality are explicitly optimized: (1) On the modeling side, we define a new flexible notion of diversity that allows a system designer to prescribe the number of recommendations each item should receive, and smoothly penalizes deviations from this distribution. (2) On the algorithmic side, we show that minimum-cost network flow methods yield fast algorithms in theory and practice for designing recommendation subgraphs that optimize this notion of diversity. (3) On the empirical side, we show the effectiveness of our new model and method to increase diversity while maintaining high rating quality in standard rating data sets from Netflix and MovieLens.
LABELS: Computer Science

ID: 16,841
TITLE: Lattice Model for Production of Gas
We define a lattice model for rock, absorbers, and gas that makes it possible to examine the flow of gas to a complicated absorbing boundary over long periods of time. The motivation is to deduce the geometry of the boundary from the time history of gas absorption. We find a solution to this model using Green's function techniques, and apply the solution to three absorbing networks of increasing complexity.
LABELS: Physics

ID: 16,842
TITLE: Adaptive Representation Selection in Contextual Bandit
We consider an extension of the contextual bandit setting, motivated by several practical applications, where an unlabeled history of contexts can become available for pre-training before the online decision-making begins. We propose an approach for improving the performance of contextual bandit in such setting, via adaptive, dynamic representation learning, which combines offline pre-training on unlabeled history of contexts with online selection and modification of embedding functions. Our experiments on a variety of datasets and in different nonstationary environments demonstrate clear advantages of our approach over the standard contextual bandit.
LABELS: Statistics

ID: 16,843
TITLE: Algebraic surfaces with zero-dimensional cohomology support locus
Using the theory of cohomology support locus, we give a necessary condition for the Albanese map of a smooth projective surface being a submersion. More precisely, assuming the cohomology support locus of any finite abelian cover of a smooth projective surface consists of finitely many points, we prove that the surface has trivial first Betti number, or is a ruled surface of genus one, or is an abelian surface.
LABELS: Mathematics

ID: 16,844
TITLE: Insight into the temperature dependent properties of the ferromagnetic Kondo lattice YbNiSn
Analyzing temperature dependent photoemission (PE) data of the ferromagnetic Kondo-lattice (KL) system YbNiSn in the light of the Periodic Anderson model (PAM), we show that the KL behavior is not limited to temperatures below a temperature T_K, defined empirically from resistivity and specific heat measurements. As characteristic for weakly hybridized Ce and Yb systems, the PE spectra reveal a 4f-derived Fermi level peak, which reflects contributions from the Kondo resonance and its crystal electric field (CEF) satellites. In YbNiSn this peak has an unusual temperature dependence: With decreasing temperature a steady linear increase of intensity is observed which extends over a large interval ranging from 100 K down to 1 K without showing any peculiarities in the region of T_K ~ T_C = 5.6 K. In the light of the single-impurity Anderson model (SIAM) this intensity variation reflects a linear increase of 4f occupancy with decreasing temperature, indicating an onset of Kondo screening at temperatures above 100 K. Within the PAM this phenomenon could be described by a non-Fermi-liquid-like T-linear damping of the self-energy which accounts phenomenologically for the feedback from the closely spaced CEF states.
LABELS: Physics

ID: 16,845
TITLE: Some Ageing Properties of Dynamic Additive Mean Residual Life Model
Although the proportional hazard rate model is a very popular model for analyzing failure time data, sometimes it becomes important to study the additive hazard rate model. Again, sometimes the concept of the hazard rate function is abstract in comparison to the concept of the mean residual life function. A new model called the `dynamic additive mean residual life model', where the covariates are time-dependent, has been defined in the literature. Here we study the closure properties of the model for different positive and negative ageing classes under certain condition(s). Quite a few examples are presented to illustrate different properties of the model.
LABELS: Mathematics, Statistics

ID: 16,846
TITLE: From Monte Carlo to Las Vegas: Improving Restricted Boltzmann Machine Training Through Stopping Sets
We propose a Las Vegas transformation of Markov Chain Monte Carlo (MCMC) estimators of Restricted Boltzmann Machines (RBMs). We denote our approach Markov Chain Las Vegas (MCLV). MCLV gives statistical guarantees in exchange for random running times. MCLV uses a stopping set built from the training data and has a maximum number of Markov chain steps K (referred to as MCLV-K). We present an MCLV-K gradient estimator (LVS-K) for RBMs and explore the correspondence and differences between LVS-K and Contrastive Divergence (CD-K), with LVS-K significantly outperforming CD-K in training RBMs on the MNIST dataset, indicating MCLV to be a promising direction in learning generative models.
LABELS: Computer Science, Statistics

ID: 16,847
TITLE: CD meets CAT
We show that if a noncollapsed $CD(K,n)$ space $X$ with $n\ge 2$ has curvature bounded above by $\kappa$ in the sense of Alexandrov then $K\le (n-1)\kappa$ and $X$ is an Alexandrov space of curvature bounded below by $K-\kappa (n-2)$. We also show that if a $CD(K,n)$ space $Y$ with finite $n$ has curvature bounded above then it is infinitesimally Hilbertian.
LABELS: Mathematics

ID: 16,848
TITLE: Cost Models for Selecting Materialized Views in Public Clouds
Data warehouse performance is usually achieved through physical data structures such as indexes or materialized views. In this context, cost models can help select a relevant set of such performance optimization structures. Nevertheless, selection becomes more complex in the cloud. The criterion to optimize is indeed at least two-dimensional, with monetary cost balancing overall query response time. This paper introduces new cost models that fit into the pay-as-you-go paradigm of cloud computing. Based on these cost models, an optimization problem is defined to discover, among candidate views, those to be materialized to minimize both the overall cost of using and maintaining the database in a public cloud and the total response time of a given query workload. We experimentally show that maintaining materialized views is always advantageous, both in terms of performance and cost.
LABELS: Computer Science

ID: 16,849
TITLE: Dealing with Integer-valued Variables in Bayesian Optimization with Gaussian Processes
Bayesian optimization (BO) methods are useful for optimizing functions that are expensive to evaluate, lack an analytical expression and whose evaluations can be contaminated by noise. These methods rely on a probabilistic model of the objective function, typically a Gaussian process (GP), upon which an acquisition function is built. This function guides the optimization process and measures the expected utility of performing an evaluation of the objective at a new point. GPs assume continuous input variables. When this is not the case, such as when some of the input variables take integer values, one has to introduce extra approximations. A common approach is to round the suggested variable value to the closest integer before doing the evaluation of the objective. We show that this can lead to problems in the optimization process and describe a more principled approach to account for input variables that are integer-valued. We illustrate in both synthetic and real experiments the utility of our approach, which significantly improves the results of standard BO methods on problems involving integer-valued variables.
LABELS: Statistics

ID: 16,850
TITLE: Second-order constrained variational problems on Lie algebroids: applications to optimal control
The aim of this work is to study, from an intrinsic and geometric point of view, second-order constrained variational problems on Lie algebroids, that is, optimization problems defined by a cost functional which depends on higher-order derivatives of admissible curves on a Lie algebroid. Extending the classical Skinner and Rusk formalism for the mechanics in the context of Lie algebroids, for second-order constrained mechanical systems, we derive the corresponding dynamical equations. We find a symplectic Lie subalgebroid where, under some mild regularity conditions, the second-order constrained variational problem, seen as a presymplectic Hamiltonian system, has a unique solution. We study the relationship of this formalism with the second-order constrained Euler-Poincaré and Lagrange-Poincaré equations, among others. Our study is applied to the optimal control of mechanical systems.
LABELS: Mathematics

ID: 16,851
TITLE: The Galactic Cosmic Ray Electron Spectrum from 3 to 70 MeV Measured by Voyager 1 Beyond the Heliopause, What This Tells Us About the Propagation of Electrons and Nuclei In and Out of the Galaxy at Low Energies
The cosmic ray electrons measured by Voyager 1 between 3-70 MeV beyond the heliopause have intensities several hundred times those measured at the Earth by PAMELA at nearly the same energies. This paper compares this new V1 data with data from the earth-orbiting PAMELA experiment up to energies greater than 10 GeV where solar modulation effects are negligible. In this energy regime we assume the main parameters governing electron propagation are diffusion and energy loss and we use a Monte Carlo program to describe this propagation in the galaxy. To reproduce the new Voyager electron spectrum, which is $\sim E^{-1.3}$, together with that measured by PAMELA which is $\sim E^{-3.20}$ above 10 GeV, we require a diffusion coefficient which is $\sim P^{0.45}$ at energies above 0.5 GeV changing to a $P^{-1.00}$ dependence at lower rigidities. The entire electron spectrum observed at both V1 and PAMELA from 3 MeV to 30 GeV can then be described by a simple source spectrum, $dj/dP \sim P^{-2.25}$, with a spectral exponent that is independent of rigidity. The change in exponent of the measured electron spectrum from -1.3 at low energies to -3.2 at the highest energies can be explained by galactic propagation effects related to the changing dependence of the diffusion coefficient below 0.5 GeV, and the increasing importance above 0.5 GV of energy loss from synchrotron and inverse Compton radiation, which both scale as $E^2$, and which are responsible for most of the changing spectral exponent above 1.0 GV. As a result of the $P^{-1.00}$ dependence of the diffusion coefficient below 0.5 GV that is required to fit the V1 electron spectrum, there is a rapid flow of these low energy electrons out of the galaxy. These electrons in local IG space are unobservable to us at any wavelength and therefore form a dark energy component which is 100 times the electrons' rest energy.
LABELS: Physics

ID: 16,852
TITLE: Online Scheduling of Spark Workloads with Mesos using Different Fair Allocation Algorithms
In the following, we present illustrative examples and experimental results comparing fair schedulers allocating resources from multiple servers to distributed application frameworks. Resources are allocated so that at least one resource is exhausted in every server. Schedulers considered include DRF (DRFH) and Best-Fit DRF (BF-DRF), TSF, and PS-DSF. We also consider server selection under Randomized Round Robin (RRR) and based on their residual (unreserved) resources. We consider cases with frameworks of equal priority and without server-preference constraints. We first give typical results of an illustrative numerical study and then give typical results of a study involving Spark workloads on Mesos which we have modified and open-sourced to prototype different schedulers.
1
0
0
0
0
0
16,853
On the representation dimension and finitistic dimension of special multiserial algebras
For monomial special multiserial algebras, which in general are of wild representation type, we construct radical embeddings into algebras of finite representation type. As a consequence, we show that the representation dimension of monomial and self-injective special multiserial algebras is less than or equal to three. This implies that the finitistic dimension conjecture holds for all special multiserial algebras.
0
0
1
0
0
0
16,854
Would You Like to Motivate Software Testers? Ask Them How
Context. Considering the importance of software testing to the development of high quality and reliable software systems, this paper aims to investigate how work-related factors can influence the motivation of software testers. Method. We applied a questionnaire that was developed using a previous theory of motivation and satisfaction of software engineers to conduct a survey-based study to explore and understand how professional software testers perceive and value work-related factors that could influence their motivation at work. Results. In a sample of 80 software testers, we observed that they are strongly motivated by variety of work, creative tasks, recognition for their work, and activities that allow them to acquire new knowledge, but in general the social impact of this activity has little influence on their motivation. Conclusion. This study discusses the differences of opinion among software testers regarding work-related factors that could impact their motivation, which can be relevant for managers and leaders in software engineering practice.
1
0
0
0
0
0
16,855
POMDP Structural Results for Controlled Sensing
This article provides a short review of some structural results in controlled sensing when the problem is formulated as a partially observed Markov decision process. In particular, monotone value functions, Blackwell dominance and quickest detection are described.
1
0
0
0
0
0
16,856
Low Rank Matrix Recovery with Simultaneous Presence of Outliers and Sparse Corruption
We study a data model in which the data matrix D can be expressed as D = L + S + C, where L is a low-rank matrix, S an element-wise sparse matrix and C a matrix whose non-zero columns are outlying data points. To date, robust PCA algorithms have solely considered models with either S or C, but not both. As such, existing algorithms cannot account for simultaneous element-wise and column-wise corruptions. In this paper, a new robust PCA algorithm that is robust to both types of corruption simultaneously is proposed. Our approach hinges on the sparse approximation of a sparsely corrupted column, so that the sparse expansion of a column with respect to the other data points is used to distinguish a sparsely corrupted inlier column from an outlying data point. We also develop a randomized design which provides a scalable implementation of the proposed approach. The core idea of sparse approximation is analyzed analytically, where we show that the underlying $\ell_1$-norm minimization can obtain the representation of an inlier in the presence of sparse corruption.
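Since the distinguishing step rests on representing one column sparsely over the others, a small sketch of that idea may help; the Lasso penalty, the threshold-free residual comparison and the synthetic data below are illustrative assumptions rather than the optimization program analyzed in the paper:

import numpy as np
from sklearn.linear_model import Lasso

def representation_residuals(D, alpha=0.05):
    # For each column d_i, fit a sparse (ell_1-regularised) expansion over the
    # remaining columns and return the relative residual. Columns belonging to
    # the low-rank structure tend to be well represented (small residual);
    # outlying columns are not.
    n = D.shape[1]
    res = np.zeros(n)
    for i in range(n):
        others = np.delete(D, i, axis=1)
        fit = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(others, D[:, i])
        res[i] = np.linalg.norm(D[:, i] - others @ fit.coef_) / np.linalg.norm(D[:, i])
    return res

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 40))   # low-rank inliers
D[:, -1] = 10.0 * rng.standard_normal(50)                         # one outlying column
print(representation_residuals(D).round(2))   # the last residual stands out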
1
0
0
1
0
0
16,857
Power-Sum Denominators
The power sum $1^n + 2^n + \cdots + x^n$ has been of interest to mathematicians since classical times. Johann Faulhaber, Jacob Bernoulli, and others who followed expressed power sums as polynomials in $x$ of degree $n+1$ with rational coefficients. Here we consider the denominators of these polynomials, and prove some of their properties. A remarkable one is that such a denominator equals $n+1$ times the squarefree product of certain primes $p$ obeying the condition that the sum of the base-$p$ digits of $n+1$ is at least $p$. As an application, we derive a squarefree product formula for the denominators of the Bernoulli polynomials.
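The digit-sum criterion is easy to test numerically; the following sketch (an editor's illustration, not code from the paper) computes the denominator both ways for small n, with the power-sum polynomial recovered by exact Lagrange interpolation:

from fractions import Fraction
from math import lcm

def digit_sum(m, p):
    s = 0
    while m:
        s, m = s + m % p, m // p
    return s

def is_prime(p):
    return p > 1 and all(p % d for d in range(2, int(p ** 0.5) + 1))

def claimed_denominator(n):
    # (n+1) times the squarefree product of primes p with s_p(n+1) >= p.
    prod = 1
    for p in range(2, n + 2):
        if is_prime(p) and digit_sum(n + 1, p) >= p:
            prod *= p
    return (n + 1) * prod

def power_sum_denominator(n):
    # Least common denominator of the coefficients of the degree-(n+1)
    # polynomial S_n with S_n(x) = 1^n + 2^n + ... + x^n, obtained by exact
    # Lagrange interpolation at x = 0, 1, ..., n+1.
    xs = range(n + 2)
    ys = [sum(Fraction(k) ** n for k in range(x + 1)) for x in xs]
    coeffs = [Fraction(0)] * (n + 2)
    for i in xs:
        basis, denom = [Fraction(1)], Fraction(1)
        for j in xs:
            if j == i:
                continue
            denom *= i - j
            new = [Fraction(0)] * (len(basis) + 1)
            for k, c in enumerate(basis):
                new[k + 1] += c        # multiply the basis polynomial by x
                new[k] -= j * c        # ... and subtract j times it
            basis = new
        for k, c in enumerate(basis):
            coeffs[k] += ys[i] * c / denom
    return lcm(*(c.denominator for c in coeffs))

for n in range(1, 11):
    assert claimed_denominator(n) == power_sum_denominator(n)
print("digit-sum formula matches the interpolated denominators for n = 1..10")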
0
0
1
0
0
0
16,858
A resource-frugal probabilistic dictionary and applications in bioinformatics
Indexing massive data sets is extremely expensive for large scale problems. In many fields, huge amounts of data are currently generated, however extracting meaningful information from voluminous data sets, such as computing similarity between elements, is far from being trivial. It remains nonetheless a fundamental need. This work proposes a probabilistic data structure based on a minimal perfect hash function for indexing large sets of keys. Our structure out-competes the hash table in construction time, query time, and memory usage when indexing a static set. To illustrate the impact of algorithm performance, we provide two applications based on similarity computation between collections of sequences, for which this calculation is an expensive but required operation. In particular, we show a practical case in which other bioinformatics tools fail to scale up to the tested data set or provide results of lower recall quality.
1
0
0
0
0
0
16,859
Fast learning rate of deep learning via a kernel perspective
We develop a new theoretical framework to analyze the generalization error of deep learning, and derive a new fast learning rate for two representative algorithms: empirical risk minimization and Bayesian deep learning. The series of theoretical analyses of deep learning has revealed its high expressive power and universal approximation capability. Although these analyses are highly nonparametric, existing generalization error analyses have been developed mainly in a fixed dimensional parametric model. To compensate for this gap, we develop an infinite dimensional model that is based on an integral form, as performed in the analysis of the universal approximation capability. This allows us to define a reproducing kernel Hilbert space corresponding to each layer. Our point of view is to deal with the ordinary finite dimensional deep neural network as a finite approximation of the infinite dimensional one. The approximation error is evaluated by the degree of freedom of the reproducing kernel Hilbert space in each layer. To estimate a good finite dimensional model, we consider both empirical risk minimization and Bayesian deep learning. We derive their generalization error bounds and show that a bias-variance trade-off appears in terms of the number of parameters of the finite dimensional approximation. We show that the optimal width of the internal layers can be determined through the degree of freedom, and that the convergence rate can be faster than the $O(1/\sqrt{n})$ rate shown in existing studies.
1
0
1
1
0
0
16,860
Closed-loop field development optimization with multipoint geostatistics and statistical assessment
Closed-loop field development (CLFD) optimization is a comprehensive framework for optimal development of subsurface resources. CLFD involves three major steps: 1) optimization of the full development plan based on the current set of models, 2) drilling new wells and collecting new spatial and temporal (production) data, 3) model calibration based on all data. This process is repeated until the optimal number of wells is drilled. This work introduces an efficient CLFD implementation for complex systems described by multipoint geostatistics (MPS). Model calibration is accomplished in two steps: conditioning to spatial data by a geostatistical simulation method, and conditioning to production data by optimization-based PCA. A statistical procedure is presented to assess the performance of CLFD. The methodology is applied to an oil reservoir example for 25 different true-model cases. Application of a single step of CLFD improved the true NPV in 64%--80% of cases. The full CLFD procedure (with three steps) improved the true NPV in 96% of cases, with an average improvement of 37%.
1
0
0
1
0
0
16,861
Reduction and regular $t$-balanced Cayley maps on split metacyclic 2-groups
A regular $t$-balanced Cayley map (RBCM$_t$ for short) on a group $\Gamma$ is an embedding of a Cayley graph on $\Gamma$ into a surface with some special symmetric properties. We propose a reduction method to study RBCM$_t$'s, and as a first practice, we completely classify RBCM$_t$'s for a class of split metacyclic 2-groups.
0
0
1
0
0
0
16,862
Perovskite Substrates Boost the Thermopower of Cobaltate Thin Films at High Temperatures
Transition metal oxides are promising candidates for thermoelectric applications, because they are stable at high temperature and because strong electronic correlations can generate large Seebeck coefficients, but their thermoelectric power factors are limited by the low electrical conductivity. We report transport measurements on Ca3Co4O9 films on various perovskite substrates and show that reversible incorporation of oxygen into SrTiO3 and LaAlO3 substrates activates a parallel conduction channel for p-type carriers, greatly enhancing the thermoelectric performance of the film-substrate system at temperatures above 450 °C. Thin-film structures that take advantage of both electronic correlations and the high oxygen mobility of transition metal oxides thus open up new perspectives for thermopower generation at high temperature.
0
1
0
0
0
0
16,863
Motion of a rod pushed at one point in a weightless environment in space
We analyze the motion of a rod floating in a weightless environment in space when a force is applied at some point on the rod in a direction perpendicular to its length. If the force applied is at the centre of mass, then the rod gets a linear motion perpendicular to its length. However, if the same force is applied at a point other than the centre of mass, say, near one end of the rod, thereby giving rise to a torque, then there will also be a rotation of the rod about its centre of mass, in addition to the motion of the centre of mass itself. If the force applied is for a very short duration, but imparting nevertheless a finite impulse, as in a sudden (quick) hit at one end of the rod, then the centre of mass will move with a constant linear speed and superimposed on it will be a rotation of the rod with constant angular speed about the centre of mass. However, if force is applied continuously, say by strapping a tiny rocket at one end of the rod, then the rod will spin faster and faster about the centre of mass, with angular speed increasing linearly with time. As the direction of the applied force, as seen by an external (inertial) observer, will be changing continuously with the rotation of the rod, the acceleration of the centre of mass would also not be in one fixed direction. However, it turns out that the locus of the velocity vector of the centre of mass will describe a Cornu spiral, with the velocity vector reaching a final constant value with time. The mean motion of the centre of mass will be in a straight line, with superposed initial oscillations that soon die down.
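A short numerical illustration of the last two claims (all parameter values below are arbitrary choices, not from the text, and the closed-form speed limit is derived from the Fresnel integrals under the stated assumptions): integrating the motion of a uniform rod pushed by a constant-magnitude force that stays perpendicular to it shows the velocity hodograph winding onto a Cornu spiral and the speed approaching a constant.

import numpy as np

m, L, F = 1.0, 1.0, 0.1            # rod mass (kg), length (m), thrust (N); illustrative values
I = m * L**2 / 12.0                # moment of inertia about the centre of mass
alpha = F * (L / 2) / I            # constant angular acceleration from the end-mounted thrust

t = np.linspace(0.0, 200.0, 1_000_001)
theta = 0.5 * alpha * t**2         # rod orientation, starting from rest
ax = -(F / m) * np.sin(theta)      # CoM acceleration stays perpendicular to the rod,
ay = (F / m) * np.cos(theta)       # so its direction rotates along with the rod
dt = t[1] - t[0]
vx, vy = np.cumsum(ax) * dt, np.cumsum(ay) * dt   # (vx(t), vy(t)) traces a Cornu spiral

v_limit = (F / m) * np.sqrt(np.pi / (2.0 * alpha))   # Fresnel-integral limit of the speed
print(f"speed at t = {t[-1]:.0f} s: {np.hypot(vx[-1], vy[-1]):.4f} m/s")
print(f"Fresnel-integral limit:    {v_limit:.4f} m/s")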
0
1
0
0
0
0
16,864
Dimension theory and components of algebraic stacks
We prove some basic results on the dimension theory of algebraic stacks, and on the multiplicities of their irreducible components, for which we do not know a reference.
0
0
1
0
0
0
16,865
When is a polynomial ideal binomial after an ambient automorphism?
Can an ideal I in a polynomial ring k[x] over a field be moved by a change of coordinates into a position where it is generated by binomials $x^a - cx^b$ with c in k, or by unital binomials (i.e., with c = 0 or 1)? Can a variety be moved into a position where it is toric? By fibering the G-translates of I over an algebraic group G acting on affine space, these problems are special cases of questions about a family F of ideals over an arbitrary base B. The main results in this general setting are algorithms to find the locus of points in B over which the fiber of F (i) is contained in the fiber of a second family F' of ideals over B; (ii) defines a variety of dimension at least d; (iii) is generated by binomials; or (iv) is generated by unital binomials. A faster containment algorithm is also presented when the fibers of F are prime. The big-fiber algorithm is probabilistic but likely faster than known deterministic ones. Applications include the setting where a second group T acts on affine space, in addition to G, in which case algorithms compute the set of G-translates of I (i) whose stabilizer subgroups in T have maximal dimension, or (ii) that admit a faithful multigrading by $Z^r$ of maximal rank r. Even with no ambient group action given, the final application is an algorithm to decide whether a normal projective variety is abstractly toric. All of these loci in B and subsets of G are constructible; in some cases they are closed.
1
0
1
0
0
0
16,866
Annihilators in $\mathbb{N}^k$-graded and $\mathbb{Z}^k$-graded rings
It has been shown by McCoy that a right ideal of a polynomial ring with several indeterminates has a non-trivial homogeneous right annihilator of degree 0 provided its right annihilator is non-trivial to begin with. In this note, it is documented that any $\mathbb{N}$-graded ring $R$ has a slightly weaker property: the right annihilator of a right ideal contains a homogeneous non-zero element, if it is non-trivial to begin with. If $R$ is a subring of a $\mathbb{Z}^k$ -graded ring $S$ satisfying a certain non-annihilation property (which is the case if $S$ is strongly graded, for example), then it is possible to find annihilators of degree 0.
0
0
1
0
0
0
16,867
q-Neurons: Neuron Activations based on Stochastic Jackson's Derivative Operators
We propose a new generic type of stochastic neurons, called $q$-neurons, that considers activation functions based on Jackson's $q$-derivatives with stochastic parameters $q$. Our generalization of neural network architectures with $q$-neurons is shown to be both scalable and very easy to implement. We experimentally demonstrate consistently improved performance over standard state-of-the-art activation functions, on both training and test losses.
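One way to make the construction concrete (this is the editor's reading of "activations based on Jackson's q-derivative with stochastic q", not necessarily the exact definition used in the paper) is to apply the Jackson operator to a smooth primitive such as softplus, drawing q near 1 on every call:

import numpy as np

def softplus(x):
    return np.logaddexp(0.0, x)            # log(1 + exp(x)), numerically stable

def jackson_q_derivative(f, x, q, eps=1e-8):
    # Jackson's q-derivative: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
    x = np.asarray(x, dtype=float)
    safe_x = np.where(np.abs(x) < eps, eps, x)     # avoid division by zero at x = 0
    return (f(q * safe_x) - f(safe_x)) / ((q - 1.0) * safe_x)

def q_activation(x, sigma=0.1, rng=np.random.default_rng(0)):
    # Stochastic activation in the spirit of q-neurons: sample q near 1 and
    # apply the q-derivative of softplus; as q -> 1 it recovers the sigmoid.
    q = 1.0 + sigma * rng.standard_normal() + 1e-6
    return jackson_q_derivative(softplus, x, q)

x = np.linspace(-4.0, 4.0, 9)
print(q_activation(x).round(3))                    # stochastic q-activation
print((1.0 / (1.0 + np.exp(-x))).round(3))         # deterministic sigmoid limit (q -> 1)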
0
0
0
1
0
0
16,868
Liveness-Driven Random Program Generation
Randomly generated programs are popular for testing compilers and program analysis tools, with hundreds of bugs in real-world C compilers found by random testing. However, existing random program generators may generate large amounts of dead code (computations whose result is never used). This leaves relatively little code to exercise a target compiler's more complex optimizations. To address this shortcoming, we introduce liveness-driven random program generation. In this approach the random program is constructed bottom-up, guided by a simultaneous structural data-flow analysis to ensure that the generator never generates dead code. The algorithm is implemented as a plugin for the Frama-C framework. We evaluate it in comparison to Csmith, the standard random C program generator. Our tool generates programs that compile to more machine code with a more complex instruction mix.
1
0
0
0
0
0
16,869
Modeling and optimal control of HIV/AIDS prevention through PrEP
Pre-exposure prophylaxis (PrEP) consists in the use of an antiretroviral medication to prevent the acquisition of HIV infection by uninfected individuals and has recently been demonstrated to be highly efficacious for HIV prevention. We propose a new epidemiological model for HIV/AIDS transmission including PrEP. Existence, uniqueness and global stability of the disease-free and endemic equilibria are proved. The model with no PrEP is calibrated with the cumulative cases of infection by HIV and AIDS reported in Cape Verde from 1987 to 2014, showing that it reproduces the reported data well. An optimal control problem with a mixed state control constraint is then proposed and analyzed, where the control function represents the PrEP strategy and the mixed constraint models the fact that, due to PrEP costs, epidemic context and program coverage, the number of individuals under PrEP is limited at each instant of time. The objective is to determine the PrEP strategy that satisfies the mixed state control constraint and minimizes the number of individuals with pre-AIDS HIV-infection as well as the costs associated with PrEP. The optimal control problem is studied analytically. Through numerical simulations, we demonstrate that PrEP reduces HIV transmission significantly.
0
0
1
0
0
0
16,870
STARIMA-based Traffic Prediction with Time-varying Lags
The correlation between traffic observed at two measurement points or traffic stations may be time-varying, because the time-varying speed causes variations in the time required to travel between the two points. Based on this observation, in this paper we develop a modified Space-Time Autoregressive Integrated Moving Average (STARIMA) model with time-varying lags for short-term traffic flow prediction. Particularly, the temporal lags in the modified STARIMA change with the time-varying speed at different times of the day or, equivalently, change with the (time-varying) time required to travel between two measurement points. Firstly, a technique is developed to evaluate the temporal lag in the STARIMA model, where the temporal lag is formulated as a function of the spatial lag (spatial distance) and the average speed. Secondly, an unsupervised classification algorithm based on the ISODATA algorithm is designed to classify different time periods of the day according to the variation of the speed. The classification helps to determine the appropriate time lag to use in the STARIMA model. Finally, a STARIMA-based model with time-varying lags is developed for short-term traffic prediction. Experimental results using real traffic data show that the developed STARIMA-based model with time-varying lags has superior accuracy compared with its counterpart developed using the traditional cross-correlation function and without employing time-varying lags.
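The time-varying lag itself reduces to a simple travel-time computation; here is a minimal sketch in which the sampling interval and the rounding rule are assumptions for illustration, not the paper's exact formulation:

def time_varying_lag(distance_km, avg_speed_kmh, sample_minutes=5):
    # Temporal lag (in sampling intervals) between two stations: the time
    # needed to travel the spatial distance at the current average speed,
    # rounded to the nearest sampling step (at least 1).
    travel_minutes = 60.0 * distance_km / avg_speed_kmh
    return max(1, round(travel_minutes / sample_minutes))

# For the same pair of stations, the lag shrinks in free flow and grows in congestion.
for speed in (90.0, 60.0, 30.0, 15.0):        # km/h at different times of the day
    print(speed, "km/h ->", time_varying_lag(10.0, speed), "lag(s)")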
1
0
1
0
0
0
16,871
Electric Vehicle Charging Station Placement Method for Urban Areas
As more electric vehicles (EVs) are adopted to reduce fossil fuel emissions, the problem of charging station placement becomes unavoidable and could be costly if handled improperly. Some studies consider a general setup, using conditions such as driving ranges for planning. However, most of the EV growth in the coming decades will happen in urban areas, where driving range is not the biggest concern. For such a need, we consider several practical aspects of urban systems, such as voltage regulation cost and protection device upgrade resulting from the large integration of EVs. Notably, our diversified objective can reveal the trade-off between different factors in different cities worldwide. To understand the global optimum of large-scale analysis, we add constraints one by one to see how to preserve the problem convexity. Our sensitivity analysis before and after convexification shows that our approach is not only universally applicable but also has a small approximation error for prioritizing the most urgent constraint in a specific setup. Finally, numerical results demonstrate the trade-off, the relationship between different factors and the global objective, and the small approximation error. A unique observation in this study shows the importance of incorporating the protection device upgrade in urban charging station planning.
1
0
0
0
0
0
16,872
The Steinberg linkage class for a reductive algebraic group
Let G be a reductive algebraic group over a field of positive characteristic and denote by C(G) the category of rational G-modules. In this note we investigate the subcategory of C(G) consisting of those modules whose composition factors all have highest weights linked to the Steinberg weight. This subcategory is denoted ST and called the Steinberg component. We give an explicit equivalence between ST and C(G) and we derive some consequences. In particular, our result allows us to relate the Frobenius contracting functor to the projection functor from C(G) onto ST .
0
0
1
0
0
0
16,873
Detection and Resolution of Rumours in Social Media: A Survey
Despite the increasing use of social media platforms for information and news gathering, its unmoderated nature often leads to the emergence and spread of rumours, i.e. pieces of information that are unverified at the time of posting. At the same time, the openness of social media platforms provides opportunities to study how users share and discuss rumours, and to explore how natural language processing and data mining techniques may be used to find ways of determining their veracity. In this survey we introduce and discuss two types of rumours that circulate on social media; long-standing rumours that circulate for long periods of time, and newly-emerging rumours spawned during fast-paced events such as breaking news, where reports are released piecemeal and often with an unverified status in their early stages. We provide an overview of research into social media rumours with the ultimate goal of developing a rumour classification system that consists of four components: rumour detection, rumour tracking, rumour stance classification and rumour veracity classification. We delve into the approaches presented in the scientific literature for the development of each of these four components. We summarise the efforts and achievements so far towards the development of rumour classification systems and conclude with suggestions for avenues for future research in social media mining for detection and resolution of rumours.
1
0
0
0
0
0
16,874
On harmonic analysis of spherical convolutions on semisimple Lie groups
This paper contains a non-trivial generalization of the Harish-Chandra transforms on a connected semisimple Lie group $G,$ with finite center, into what we term spherical convolutions. Among other results we show that its integral over the collection of bounded spherical functions at the identity element $e \in G$ is a weighted Fourier transform of the Abel transform at $0.$ Being a function on $G,$ the restriction of this integral of its spherical Fourier transforms to the positive-definite spherical functions is then shown to be (the non-zero constant multiple of) a positive-definite distribution on $G,$ which is tempered and invariant on $G=SL(2,\mathbb{R}).$ These results suggest the consideration of a calculus on the Schwartz algebras of spherical functions. The Plancherel measure of the spherical convolutions is also explicitly computed.
0
0
1
0
0
0
16,875
Relaxation-based viscosity mapping for magnetic particle imaging
Magnetic Particle Imaging (MPI) has been shown to provide remarkable contrast for imaging applications such as angiography, stem cell tracking, and cancer imaging. Recently, there is growing interest in the functional imaging capabilities of MPI, where color MPI techniques have explored separating different nanoparticles, which could potentially be used to distinguish nanoparticles in different states or environments. Viscosity mapping is a promising functional imaging application for MPI, as increased viscosity levels in vivo have been associated with numerous diseases such as hypertension, atherosclerosis, and cancer. In this work, we propose a viscosity mapping technique for MPI through the estimation of the relaxation time constant of the nanoparticles. Importantly, the proposed time constant estimation scheme does not require any prior information regarding the nanoparticles. We validate this method with extensive experiments in an in-house magnetic particle spectroscopy (MPS) setup at four different frequencies (between 250 Hz and 10.8 kHz) and at three different field strengths (between 5 mT and 15 mT) for viscosities ranging between 0.89 mPa.s to 15.33 mPa.s. Our results demonstrate the viscosity mapping ability of MPI in the biologically relevant viscosity range.
0
1
0
0
0
0
16,876
Detecting Statistically Significant Communities
Community detection is a key data analysis problem across different fields. During the past decades, numerous algorithms have been proposed to address this issue. However, most work on community detection does not address the issue of statistical significance. Although some research efforts have been made towards mining statistically significant communities, deriving an analytical p-value for a single community under the configuration model remains a challenging, unsolved problem. To partially fill this void, we present a tight upper bound on the p-value of a single community under the configuration model, which can be used for quantifying the statistical significance of each community analytically. Meanwhile, we present a local search method to detect statistically significant communities in an iterative manner. Experimental results demonstrate that our method is comparable with the competing methods on detecting statistically significant communities.
1
0
0
0
0
0
16,877
On the effectivity of spectra representing motivic cohomology theories
Let k be an infinite perfect field. We provide a general criterion for a spectrum in the stable homotopy category over k to be effective, i.e. to be in the localizing subcategory generated by the suspension spectra of smooth schemes. As a consequence, we show that two recent versions of generalized motivic cohomology theories coincide.
0
0
1
0
0
0
16,878
Aggregated Momentum: Stability Through Passive Damping
Momentum is a simple and widely used trick which allows gradient-based optimizers to pick up speed along low curvature directions. Its performance depends crucially on a damping coefficient $\beta$. Large $\beta$ values can potentially deliver much larger speedups, but are prone to oscillations and instability; hence one typically resorts to small values such as 0.5 or 0.9. We propose Aggregated Momentum (AggMo), a variant of momentum which combines multiple velocity vectors with different $\beta$ parameters. AggMo is trivial to implement, but significantly dampens oscillations, enabling it to remain stable even for aggressive $\beta$ values such as 0.999. We reinterpret Nesterov's accelerated gradient descent as a special case of AggMo and analyze rates of convergence for quadratic objectives. Empirically, we find that AggMo is a suitable drop-in replacement for other momentum methods, and frequently delivers faster convergence.
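A minimal sketch of the update described above, as read from the abstract (the damping vector, learning rate and quadratic test problem are illustrative choices; consult the paper or its released code for the exact defaults):

import numpy as np

def aggmo_minimize(grad, theta0, lr=0.1, betas=(0.0, 0.9, 0.99), steps=200):
    # Aggregated Momentum: keep one velocity per damping coefficient beta and
    # average the velocities for the parameter update.
    theta = np.asarray(theta0, dtype=float)
    vs = [np.zeros_like(theta) for _ in betas]
    for _ in range(steps):
        g = grad(theta)
        for i, b in enumerate(betas):
            vs[i] = b * vs[i] - g          # classical momentum, one per beta
        theta = theta + lr * sum(vs) / len(betas)
    return theta

# Ill-conditioned quadratic f(x) = 0.5 * x^T diag(1, 100) x: the aggressive
# beta = 0.99 velocity speeds up the shallow direction while the beta = 0
# velocity damps oscillations in the stiff one.
grad = lambda x: np.array([1.0, 100.0]) * x
print(aggmo_minimize(grad, [1.0, 1.0], lr=0.01, steps=2000).round(6))  # ~ [0, 0]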
0
0
0
1
0
0
16,879
On the Number of Bins in Equilibria for Signaling Games
We investigate the equilibrium behavior for the decentralized quadratic cheap talk problem in which an encoder and a decoder, viewed as two decision makers, have misaligned objective functions. In prior work, we have shown that the number of bins under any equilibrium has to be at most countable, generalizing a classical result due to Crawford and Sobel who considered sources with density supported on $[0,1]$. In this paper, we refine this result in the context of exponential and Gaussian sources. For exponential sources, a relation between the upper bound on the number of bins and the misalignment in the objective functions is derived, the equilibrium costs are compared, and it is shown that there also exist equilibria with infinitely many bins under certain parametric assumptions. For Gaussian sources, it is shown that there exist equilibria with infinitely many bins.
1
0
0
0
0
0
16,880
The Dantzig selector for a linear model of diffusion processes
In this paper, a linear model of diffusion processes with unknown drift and diagonal diffusion matrices is discussed. We consider the estimation problems for unknown parameters based on discrete-time observations in high-dimensional and sparse settings. To estimate drift matrices, we apply the Dantzig selector, which was proposed by Candès and Tao in 2007. We then prove two types of consistency of the drift matrix estimator: consistency in the sense of the $l_q$ norm for every $q \in [1,\infty]$, and variable selection consistency. Moreover, we construct an asymptotically normal estimator of the drift matrix by using the variable selection consistency of the Dantzig selector.
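For reference, the generic Dantzig selector that the paper adapts to diffusion models can be written as a small linear programme; everything below (the regression setup and the choice of lambda) illustrates the underlying estimator, not the paper's diffusion-specific construction:

import numpy as np
from scipy.optimize import linprog

def dantzig_selector(X, y, lam):
    # Minimise ||beta||_1 subject to ||X^T (y - X beta)||_inf <= lam,
    # written as an LP with beta = beta_plus - beta_minus, both nonnegative.
    n, p = X.shape
    G, b = X.T @ X, X.T @ y
    c = np.ones(2 * p)
    A_ub = np.block([[G, -G], [-G, G]])
    b_ub = np.concatenate([lam + b, lam - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
beta_true = np.zeros(50)
beta_true[:3] = [2.0, -1.5, 1.0]               # sparse truth
y = X @ beta_true + 0.1 * rng.standard_normal(200)
# The three non-zero coefficients are recovered (with mild shrinkage); the rest stay near zero.
print(dantzig_selector(X, y, lam=5.0).round(2)[:6])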
0
0
1
1
0
0
16,881
A Spatio-Temporal Multivariate Shared Component Model with an Application in Iran Cancer Data
Among the proposals for joint disease mapping, the shared component model has become increasingly popular. Another recent advance to strengthen inference of disease data has been the extension of purely spatial models to include time and space-time interaction. Such analyses have additional benefits over purely spatial models. However, only a few of the proposed spatio-temporal models can address the joint analysis of multiple diseases. In the proposed model, each component is shared by different subsets of diseases, spatial and temporal trends are considered for each component, and the relative weight of these trends for each component and each relevant disease can be estimated. We present an application of the proposed method to incidence rates of seven prevalent cancers in Iran. The effect of the shared components on the individual cancer types can be identified. Regional and temporal variation in relative risks is shown. We present a model which combines the benefits of shared components with spatio-temporal techniques for multivariate data. We show how the model allows analysis of geographical and temporal variation among diseases beyond previous approaches.
0
0
0
1
0
0
16,882
The dynamo effect in decaying helical turbulence
We show that in decaying hydromagnetic turbulence with initial kinetic helicity, a weak magnetic field eventually becomes fully helical. The sign of magnetic helicity is opposite to that of the kinetic helicity - regardless of whether or not the initial magnetic field was helical. The magnetic field undergoes inverse cascading with the magnetic energy decaying approximately like t^{-1/2}. This is even slower than in the fully helical case, where it decays like t^{-2/3}. In this parameter range, the product of magnetic energy and correlation length raised to a certain power slightly larger than unity, is approximately constant. This scaling of magnetic energy persists over long time scales. At very late times and for domain sizes large enough to accommodate the growing spatial scales, we expect a cross-over to the t^{-2/3} decay law that is commonly observed for fully helical magnetic fields. Regardless of the presence or absence of initial kinetic helicity, the magnetic field experiences exponential growth during the first few turnover times, which is suggestive of small-scale dynamo action. Our results have applications to a wide range of experimental dynamos and astrophysical time-dependent plasmas, including primordial turbulence in the early universe.
0
1
0
0
0
0
16,883
Geometrically finite amalgamations of hyperbolic 3-manifold groups are not LERF
We prove that, for any two finite volume hyperbolic $3$-manifolds, the amalgamation of their fundamental groups along any nontrivial geometrically finite subgroup is not LERF. This generalizes the author's previous work on nonLERFness of amalgamations of hyperbolic $3$-manifold groups along abelian subgroups. A consequence of this result is that closed arithmetic hyperbolic $4$-manifolds have nonLERF fundamental groups. Along with the author's previous work, we get that, for any arithmetic hyperbolic manifold with dimension at least $4$, with possible exceptions in $7$-dimensional manifolds defined by the octonion, its fundamental group is not LERF.
0
0
1
0
0
0
16,884
Dining Philosophers, Leader Election and Ring Size problems, in the quantum setting
We provide the first quantum (exact) protocol for the Dining Philosophers problem (DP), a central problem in distributed algorithms. It is well known that the problem cannot be solved exactly in the classical setting. We then use our DP protocol to provide a new quantum protocol for the closely related problem of exact leader election (LE) on a ring, improving significantly in both time and memory complexity over the known LE protocol by Tani et al. To do this, we show that in some sense the exact DP and exact LE problems are equivalent; interestingly, in the classical non-exact setting they are not. Hopefully, the results will lead to exact quantum protocols for other important distributed algorithmic questions; in particular, we discuss interesting connections to the ring size problem, as well as to a physically motivated question of breaking symmetry in 1D translationally invariant systems.
1
0
0
0
0
0
16,885
An Online Secretary Framework for Fog Network Formation with Minimal Latency
Fog computing is seen as a promising approach to perform distributed, low-latency computation for supporting Internet of Things applications. However, due to the unpredictable arrival of available neighboring fog nodes, the dynamic formation of a fog network can be challenging. In essence, a given fog node must smartly select the set of neighboring fog nodes that can provide low-latency computations. In this paper, this problem of fog network formation and task distribution is studied considering a hybrid cloud-fog architecture. The goal of the proposed framework is to minimize the maximum computational latency by enabling a given fog node to form a suitable fog network, under uncertainty on the arrival process of neighboring fog nodes. To solve this problem, a novel approach based on the online secretary framework is proposed. To find the desired set of neighboring fog nodes, an online algorithm is developed to enable a task initiating fog node to decide on which other nodes can be used as part of its fog network, to offload computational tasks, without knowing any prior information on the future arrivals of those other nodes. Simulation results show that the proposed online algorithm can successfully select an optimal set of neighboring fog nodes while achieving a latency that is as small as the one resulting from an ideal, offline scheme that has complete knowledge of the system. The results also show how, using the proposed approach, the computational tasks can be properly distributed between the fog network and a remote cloud server.
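For orientation, the classical single-choice secretary rule that such online frameworks build on takes only a few lines; this sketch is the textbook baseline (uniform random arrivals, one selection), not the fog-node selection and task-distribution algorithm proposed in the paper:

import math
import random

def classical_secretary(values):
    # Observe the first n/e candidates without committing, then accept the
    # first later candidate that beats all of them (fall back to the last one).
    n = len(values)
    cutoff = max(1, int(n / math.e))
    threshold = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > threshold:
            return v
    return values[-1]

random.seed(0)
trials, wins = 10_000, 0
for _ in range(trials):
    arrivals = random.sample(range(1000), 50)   # e.g. scores of arriving fog nodes
    if classical_secretary(arrivals) == max(arrivals):
        wins += 1
print(f"best candidate selected in {wins / trials:.2%} of trials (theory: about 1/e = 36.8%)")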
1
0
0
0
0
0
16,886
Computer-assisted proof of heteroclinic connections in the one-dimensional Ohta-Kawasaki model
We present a computer-assisted proof of heteroclinic connections in the one-dimensional Ohta-Kawasaki model of diblock copolymers. The model is a fourth-order parabolic partial differential equation subject to homogeneous Neumann boundary conditions, which contains as a special case the celebrated Cahn-Hilliard equation. While the attractor structure of the latter model is completely understood for one-dimensional domains, the diblock copolymer extension exhibits considerably richer long-term dynamical behavior, which includes a high level of multistability. In this paper, we establish the existence of certain heteroclinic connections between the homogeneous equilibrium state, which represents a perfect copolymer mixture, and all local and global energy minimizers. In this way, we show that not every solution originating near the homogeneous state will converge to the global energy minimizer, but rather is trapped by a stable state with higher energy. This phenomenon cannot be observed in the one-dimensional Cahn-Hilliard equation, where generic solutions are attracted by a global minimizer.
1
0
1
0
0
0
16,887
Dust and Gas in Star Forming Galaxies at z~3 - Extending Galaxy Uniformity to 11.5 Billion Years
We present millimetre dust emission measurements of two Lyman Break Galaxies at z~3 and construct for the first time fully sampled infrared spectral energy distributions (SEDs), from mid-IR to the Rayleigh-Jeans tail, of individually detected, unlensed, UV-selected, main sequence (MS) galaxies at $z=3$. The SED modelling of the two sources confirms previous findings, based on stacked ensembles, of an increasing mean radiation field <U> with redshift, consistent with a rapidly decreasing gas metallicity in z > 2 galaxies. Complementing our study with CO[3-2] emission line observations, we measure the molecular gas mass (M_H2) reservoir of the systems using three independent approaches: 1) CO line observations, 2) the dust to gas mass ratio vs metallicity relation and 3) a single band, dust emission flux on the Rayleigh-Jeans side of the SED. All techniques return consistent M_H2 estimates within a factor of ~2 or less, yielding gas depletion time-scales (tau_dep ~ 0.35 Gyrs) and gas-to-stellar mass ratios (M_H2/M* ~ 0.5-1) for our z~3 massive MS galaxies. The overall properties of our galaxies are consistent with trends and relations established at lower redshifts, extending the apparent uniformity of star-forming galaxies over the last 11.5 billion years.
0
1
0
0
0
0
16,888
Flow speed has little impact on propulsive characteristics of oscillating foils
Experiments are reported on the performance of a pitching and heaving two-dimensional foil in a water channel in either continuous or intermittent motion. We find that the thrust and power are independent of the mean freestream velocity for two-fold changes in the mean velocity (four-fold in the dynamic pressure), and for oscillations in the velocity up to 38% of the mean, where the oscillations are intended to mimic those of freely swimming motions where the thrust varies during the flapping cycle. We demonstrate that the correct velocity scale is not the flow velocity but the mean velocity of the trailing edge. We also find little or no impact of streamwise velocity change on the wake characteristics such as vortex organization, vortex strength, and time-averaged velocity profile development; the wake is both qualitatively and quantitatively unchanged. Our results suggest that constant velocity studies can be used to make robust conclusions about swimming performance without a need to explore the free-swimming condition.
0
1
0
0
0
0
16,889
Switching and Data Injection Attacks on Stochastic Cyber-Physical Systems: Modeling, Resilient Estimation and Attack Mitigation
In this paper, we consider the problem of attack-resilient state estimation, that is to reliably estimate the true system states despite two classes of attacks: (i) attacks on the switching mechanisms and (ii) false data injection attacks on actuator and sensor signals, in the presence of unbounded stochastic process and measurement noise signals. We model the systems under attack as hidden mode stochastic switched linear systems with unknown inputs and propose the use of a multiple-model inference algorithm to tackle these security issues. Moreover, we characterize fundamental limitations to resilient estimation (e.g., upper bound on the number of tolerable signal attacks) and discuss the topics of attack detection, identification and mitigation under this framework. Simulation examples of switching and false data injection attacks on a benchmark system and an IEEE 68-bus test system show the efficacy of our approach to recover resilient (i.e., asymptotically unbiased) state estimates as well as to identify and mitigate the attacks.
1
0
1
0
0
0
16,890
Generating Sentence Planning Variations for Story Telling
There has been a recent explosion in applications for dialogue interaction ranging from direction-giving and tourist information to interactive story systems. Yet the natural language generation (NLG) component for many of these systems remains largely handcrafted. This limitation greatly restricts the range of applications; it also means that it is impossible to take advantage of recent work in expressive and statistical language generation that can dynamically and automatically produce a large number of variations of given content. We propose that a solution to this problem lies in new methods for developing language generation resources. We describe the ES-Translator, a computational language generator that has previously been applied only to fables, and quantitatively evaluate the domain independence of the EST by applying it to personal narratives from weblogs. We then take advantage of recent work on language generation to create a parameterized sentence planner for story generation that provides aggregation operations, variations in discourse and in point of view. Finally, we present a user evaluation of different personal narrative retellings.
1
0
0
0
0
0
16,891
Detection of planet candidates around K giants, HD 40956, HD 111591, and HD 113996
Aims. The purpose of this paper is to detect and investigate the nature of long-term radial velocity (RV) variations of K-type giants and to confirm planetary companions around the stars. Methods. We have conducted two planet search programs by precise RV measurement using the 1.8 m telescope at Bohyunsan Optical Astronomy Observatory (BOAO) and the 1.88 m telescope at Okayama Astrophysical Observatory (OAO). The BOAO program searches for planets around 55 early K giants. The OAO program is looking for 190 G-K type giants. Results. In this paper, we report the detection of long-period RV variations of three K giant stars, HD 40956, HD 111591, and HD 113996. We investigated the cause of the observed RV variations and conclude the substellar companions are most likely the cause of the RV variations. The orbital analyses yield P = 578.6 $\pm$ 3.3 d, $m$ sin $i$ = 2.7 $\pm$ 0.6 $M_{\rm{J}}$, $a$ = 1.4 $\pm$ 0.1 AU for HD 40956; P = 1056.4 $\pm$ 14.3 d, $m$ sin $i$ = 4.4 $\pm$ 0.4 $M_{\rm{J}}$, $a$ = 2.5 $\pm$ 0.1 AU for HD 111591; P = 610.2 $\pm$ 3.8 d, $m$ sin $i$ = 6.3 $\pm$ 1.0 $M_{\rm{J}}$, $a$ = 1.6 $\pm$ 0.1 AU for HD 113996.
0
1
0
0
0
0
16,892
Perfect Half Space Games
We introduce perfect half space games, in which the goal of Player 2 is to make the sums of encountered multi-dimensional weights diverge in a direction which is consistent with a chosen sequence of perfect half spaces (chosen dynamically by Player 2). We establish that the bounding games of Jurdziński et al. (ICALP 2015) can be reduced to perfect half space games, which in turn can be translated to the lexicographic energy games of Colcombet and Niwiński, and are positionally determined in a strong sense (Player 2 can play without knowing the current perfect half space). We finally show how perfect half space games and bounding games can be employed to solve multi-dimensional energy parity games in pseudo-polynomial time when both the numbers of energy dimensions and of priorities are fixed, regardless of whether the initial credit is given as part of the input or existentially quantified. This also yields an optimal 2-EXPTIME complexity with given initial credit, where the best known upper bound was non-elementary.
1
0
0
0
0
0
16,893
Energy-efficient Analog Sensing for Large-scale, High-density Persistent Wireless Monitoring
The research challenge of current Wireless Sensor Networks (WSNs) is to design energy-efficient, low-cost, high-accuracy, self-healing, and scalable systems for applications such as environmental monitoring. Traditional WSNs consist of low density, power-hungry digital motes that are expensive and cannot remain functional for long periods on a single charge. In order to address these challenges, a "dumb-sensing and smart-processing" architecture that splits sensing and computation capabilities among tiers is proposed. Tier-1 consists of dumb sensors that only sense and transmit, while the nodes in Tier-2 do all the smart processing on Tier-1 sensor data. A low-power and low-cost solution for Tier-1 sensors has been proposed using Analog Joint Source Channel Coding (AJSCC). An analog circuit that realizes the rectangular type of AJSCC has been proposed and realized on a Printed Circuit Board for feasibility analysis. A prototype consisting of three Tier-1 sensors (sensing temperature and humidity) communicating to a Tier-2 Cluster Head has been demonstrated to verify the proposed approach. Results show that our framework is indeed feasible to support large scale high density and persistent WSN deployment.
1
0
0
0
0
0
16,894
Computation of ground-state properties in molecular systems: back-propagation with auxiliary-field quantum Monte Carlo
We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties for several substances, including constituents of the primordial terrestrial atmosphere and small organic molecules.
0
1
0
0
0
0
16,895
Demonstration of the Relationship between Sensitivity and Identifiability for Inverse Uncertainty Quantification
Inverse Uncertainty Quantification (UQ), or Bayesian calibration, is the process to quantify the uncertainties of random input parameters based on experimental data. The introduction of a model discrepancy term is significant because "over-fitting" can theoretically be avoided, but it also poses challenges in practical applications. One of the most pressing unresolved problems is the "lack of identifiability" issue. With the presence of model discrepancy, inverse UQ becomes "non-identifiable" in the sense that it is difficult to precisely distinguish between the parameter uncertainties and model discrepancy when estimating the calibration parameters. Previous research to alleviate the non-identifiability issue focused on using informative priors for the calibration parameters and the model discrepancy, which is usually not a viable solution because one rarely has such accurate and informative prior knowledge. In this work, we show that identifiability is largely related to the sensitivity of the calibration parameters with regards to the chosen responses. We adopted an improved modular Bayesian approach for inverse UQ that does not require priors for the model discrepancy term. The relationship between sensitivity and identifiability was demonstrated with a practical example in nuclear engineering. It was shown that, in order for a certain calibration parameter to be statistically identifiable, it should be significant to at least one of the responses whose data are used for inverse UQ. Good identifiability cannot be achieved for a certain calibration parameter if it is not significant to any of the responses. It is also demonstrated that "fake identifiability" is possible if model responses are not appropriately chosen, or inaccurate but informative priors are specified.
0
0
0
1
0
0
16,896
The GENIUS Approach to Robust Mendelian Randomization Inference
Mendelian randomization (MR) is a popular instrumental variable (IV) approach. A key IV identification condition, known as the exclusion restriction, requires that an IV have no direct effect on the outcome other than through the exposure, which is unrealistic in most MR analyses. As a result, possible violation of the exclusion restriction can seldom be ruled out in such studies. To address this concern, we introduce a new class of IV estimators which are robust to violation of the exclusion restriction under a large collection of data generating mechanisms consistent with parametric models commonly assumed in the MR literature. Our approach, named "MR G-Estimation under No Interaction with Unmeasured Selection" (MR GENIUS), may be viewed as a modification of Robins' G-estimation approach that is robust to both additive unmeasured confounding and violation of the exclusion restriction assumption. We also establish that estimation with MR GENIUS may be viewed as a robust generalization of the well-known Lewbel estimator for a triangular system of structural equations with endogeneity. Specifically, we show that unlike Lewbel estimation, MR GENIUS is, under fairly weak conditions, also robust to unmeasured confounding of the effects of the genetic IVs, another possible violation of a key IV identification condition. Furthermore, while Lewbel estimation involves specification of linear models both for the outcome and the exposure, MR GENIUS generally does not require specification of a structural model for the direct effect of invalid IVs on the outcome, therefore allowing the latter model to be unrestricted. Finally, unlike Lewbel estimation, MR GENIUS is shown to apply equally for binary, discrete or continuous exposure and outcome variables and can be used under prospective sampling, or retrospective sampling such as in a case-control study.
0
0
0
1
0
0
16,897
Evaluation of Trace Alignment Quality and its Application in Medical Process Mining
Trace alignment algorithms have been used in process mining for discovering the consensus treatment procedures and process deviations. Different alignment algorithms, however, may produce very different results. No widely-adopted method exists for evaluating the results of trace alignment. Existing reference-free evaluation methods cannot adequately and comprehensively assess the alignment quality. We analyzed and compared the existing evaluation methods, identifying their limitations, and introduced improvements in two reference-free evaluation methods. Our approach assesses the alignment result globally instead of locally, and therefore helps the algorithm to optimize overall alignment quality. We also introduced a novel metric to measure the alignment complexity, which can be used as a constraint on alignment algorithm optimization. We tested our evaluation methods on a trauma resuscitation dataset and provided the medical explanation of the activities and patterns identified as deviations using our proposed evaluation methods.
1
0
0
0
0
0
16,898
Size Constraints on Majorana Beamsplitter Interferometer: Majorana Coupling and Surface-Bulk Scattering
Topological insulator surfaces in proximity to superconductors have been proposed as a way to produce Majorana fermions in condensed matter physics. One of the simplest proposed experiments with such a system is Majorana interferometry. Here, we consider two possibly conflicting constraints on the size of such an interferometer. Coupling of a Majorana mode from the edge (the arms) of the interferometer to vortices in the centre of the device sets a lower bound on the size of the device. On the other hand, scattering to the usually imperfectly insulating bulk sets an upper bound. From estimates of experimental parameters, we find that typical samples may have no size window in which the Majorana interferometer can operate, implying that a new generation of more highly insulating samples must be explored.
0
1
0
0
0
0
16,899
Counting Arithmetical Structures on Paths and Cycles
Let $G$ be a finite, simple, connected graph. An arithmetical structure on $G$ is a pair of positive integer vectors $\mathbf{d},\mathbf{r}$ such that $(\mathrm{diag}(\mathbf{d})-A)\mathbf{r}=0$, where $A$ is the adjacency matrix of $G$. We investigate the combinatorics of arithmetical structures on path and cycle graphs, as well as the associated critical groups (the cokernels of the matrices $(\mathrm{diag}(\mathbf{d})-A)$). For paths, we prove that arithmetical structures are enumerated by the Catalan numbers, and we obtain refined enumeration results related to ballot sequences. For cycles, we prove that arithmetical structures are enumerated by the binomial coefficients $\binom{2n-1}{n-1}$, and we obtain refined enumeration results related to multisets. In addition, we determine the critical groups for all arithmetical structures on paths and cycles.
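The defining condition is easy to check and, for very small paths, to enumerate exhaustively; the brute-force search below is the editor's illustration (with gcd(r) = 1 as the usual normalisation, since r is only determined up to scaling, and with an entry bound that is ample for these path lengths), not the bijective arguments of the paper:

from itertools import product
from math import gcd

def arithmetical_structures_on_path(n, bound=10):
    # Enumerate pairs (d, r) of positive integer vectors on the path P_n with
    # (diag(d) - A) r = 0, i.e. d_i * r_i equals the sum of r over the
    # neighbours of vertex i, normalised so that gcd(r) = 1.
    structures = []
    for r in product(range(1, bound + 1), repeat=n):
        if gcd(*r) != 1:
            continue
        d = []
        for i in range(n):
            s = (r[i - 1] if i > 0 else 0) + (r[i + 1] if i < n - 1 else 0)
            if s % r[i]:
                break
            d.append(s // r[i])
        else:
            structures.append((tuple(d), r))
    return structures

catalan = [1, 1, 2, 5, 14, 42]
for n in (3, 4, 5):
    found = arithmetical_structures_on_path(n)
    print(f"P_{n}: {len(found)} structures, Catalan C_{n - 1} = {catalan[n - 1]}")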
0
0
1
0
0
0
16,900
From synaptic interactions to collective dynamics in random neuronal networks models: critical role of eigenvectors and transient behavior
The study of neuronal interactions is currently at the center of several big collaborative neuroscience projects (including the Human Connectome, the Blue Brain, the Brainome, etc.) which attempt to obtain a detailed map of the entire brain matrix. Under certain constraints, mathematical theory can advance predictions of the expected neural dynamics based solely on the statistical properties of such a synaptic interaction matrix. This work explores the application of free random variables (FRV) to the study of large synaptic interaction matrices. Besides recovering in a straightforward way known results on the eigenspectra of neural networks, we extend them to heavy-tailed distributions of interactions. More importantly, we derive analytically the behavior of eigenvector overlaps, which determine the stability of the spectra. We observe that upon imposing the neuronal excitation/inhibition balance, although the eigenvalues remain unchanged, their stability dramatically decreases due to the strong non-orthogonality of the associated eigenvectors. This leads us to the conclusion that understanding the temporal evolution of asymmetric neural networks requires considering the entangled dynamics of both eigenvectors and eigenvalues, which might bear consequences for learning and memory processes in these models. Considering the success of FRV analysis in a wide variety of disciplines, we hope that the results presented here foster additional applications of these ideas in the area of brain sciences.
0
0
0
0
1
0