Columns (name: type):
text: null
inputs: dict ("abstract", "title")
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 distinct value)
explanation: null
id: string (length 1 to 5)
metadata: null
status: string (2 classes: Default, Validated)
event_timestamp: null
metrics: null

In every record listed below, the text, prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp and metrics fields are null; each record is therefore shown as its inputs object followed by a one-line summary of its id, status, annotation and multi_label values.

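The following is a minimal sketch of how rows of this shape could be represented and filtered, using only the Python standard library; the Record dataclass, the validated_only helper and the abbreviated sample literal are illustrative assumptions, not part of any dataset tooling.

```python
# Illustrative only: a plain dataclass mirroring the column schema above,
# plus a helper that keeps the human-validated rows.
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class Record:
    # Non-null in every row of this preview.
    inputs: Dict[str, str]                     # {"abstract": "...", "title": "..."}
    id: str                                    # e.g. "18901" (string, length 1 to 5)
    status: str                                # "Default" or "Validated"
    multi_label: bool = True
    # None for "Default" rows; a list of labels for "Validated" rows.
    annotation: Optional[List[str]] = None     # e.g. ["Computer Science"]
    # Null (None) in every row shown in this preview.
    text: Optional[str] = None
    prediction: Optional[Any] = None
    prediction_agent: Optional[str] = None
    annotation_agent: Optional[str] = None
    explanation: Optional[Any] = None
    metadata: Optional[Dict[str, Any]] = None
    event_timestamp: Optional[str] = None
    metrics: Optional[Dict[str, Any]] = None


def validated_only(records: List[Record]) -> List[Record]:
    """Return the rows whose status is 'Validated', i.e. those carrying an annotation."""
    return [r for r in records if r.status == "Validated"]


if __name__ == "__main__":
    # Abbreviated version of the first record below.
    sample = Record(
        inputs={
            "title": "Fermi bubbles: high latitude X-ray supersonic shell",
            "abstract": "The nature of the bipolar, gamma-ray Fermi bubbles ...",
        },
        id="18901",
        status="Default",
    )
    print(sample.id, sample.status, sample.annotation)   # -> 18901 Default None
    print(len(validated_only([sample])))                  # -> 0
```

Keeping every always-null column as an Optional field defaulting to None means a row can be built from just inputs, id, status and, for Validated rows, annotation, which is exactly what varies across the records below.
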
{ "abstract": " The nature of the bipolar, $\\gamma$-ray Fermi bubbles (FB) is still unclear,\nin part because their faint, high-latitude X-ray counterpart has until now\neluded a clear detection. We stack ROSAT data at varying distances from the FB\nedges, thus boosting the signal and identifying an expanding shell behind the\nsouthwest, southeast, and northwest edges, albeit not in the dusty northeast\nsector near Loop I. A Primakoff-like model for the underlying flow is invoked\nto show that the signals are consistent with halo gas heated by a strong,\nforward shock to $\\sim$keV temperatures. Assuming ion--electron thermal\nequilibrium then implies a $\\sim10^{56}$ erg event near the Galactic centre\n$\\sim7$ Myr ago. However, the reported high absorption-line velocities suggest\na preferential shock-heating of ions, and thus more energetic ($\\sim 10^{57}$\nerg), younger ($\\lesssim 3$ Myr) FBs.\n", "title": "Fermi bubbles: high latitude X-ray supersonic shell" }
id: 18901 | status: Default | annotation: null | multi_label: true

{ "abstract": " In this paper, we revisit primal-dual dynamics for convex optimization and\npresent a generalization of the dynamics based on the concept of passivity. It\nis then proved that supplying a stable zero to one of the integrators in the\ndynamics allows one to eliminate the assumption of strict convexity on the cost\nfunction based on the passivity paradigm together with the invariance principle\nfor Caratheodory systems. We then show that the present algorithm is also a\ngeneralization of existing augmented Lagrangian-based primal-dual dynamics, and\ndiscuss the benefit of the present generalization in terms of noise reduction\nand convergence speed.\n", "title": "Passivity-Based Generalization of Primal-Dual Dynamics for Non-Strictly Convex Cost Functions" }
id: 18902 | status: Validated | annotation: [ "Computer Science" ] | multi_label: true

{ "abstract": " The distribution of the sum of r-th power of standard normal random variables\nis a generalization of the chi-squared distribution. In this paper, we\nrepresent the probability density function of the random variable by an\none-dimensional absolutely convergent integral with the characteristic\nfunction. Our integral formula is expected to be applied for evaluation of the\ndensity function. Our integral formula is based on the inversion formula, and\nwe utilize a summation method. We also discuss on our formula in the view point\nof hyperfunctions.\n", "title": "An integral formula for the powered sum of the independent, identically and normally distributed random variables" }
id: 18903 | status: Default | annotation: null | multi_label: true

{ "abstract": " We propose a dynamic boosted ensemble learning method based on random forest\n(DBRF), a novel ensemble algorithm that incorporates the notion of hard example\nmining into Random Forest (RF) and thus combines the high accuracy of Boosting\nalgorithm with the strong generalization of Bagging algorithm. Specifically, we\npropose to measure the quality of each leaf node of every decision tree in the\nrandom forest to determine hard examples. By iteratively training and then\nremoving easy examples from training data, we evolve the random forest to focus\non hard examples dynamically so as to learn decision boundaries better. Data\ncan be cascaded through these random forests learned in each iteration in\nsequence to generate predictions, thus making RF deep. We also propose to use\nevolution mechanism and smart iteration mechanism to improve the performance of\nthe model. DBRF outperforms RF on three UCI datasets and achieved\nstate-of-the-art results compared to other deep models. Moreover, we show that\nDBRF is also a new way of sampling and can be very useful when learning from\nimbalanced data.\n", "title": "A Dynamic Boosted Ensemble Learning Method Based on Random Forest" }
id: 18904 | status: Validated | annotation: [ "Statistics" ] | multi_label: true

{ "abstract": " In this paper, a novel framework is proposed for optimizing the operation and\nperformance of a large-scale, multi-hop millimeter wave (mmW) backhaul within a\nwireless small cell network (SCN) that encompasses multiple mobile network\noperators (MNOs). The proposed framework enables the small base stations (SBSs)\nto jointly decide on forming the multi-hop, mmW links over backhaul\ninfrastructure that belongs to multiple, independent MNOs, while properly\nallocating resources across those links. In this regard, the problem is\naddressed using a novel framework based on matching theory that is composed to\ntwo, highly inter-related stages: a multi-hop network formation stage and a\nresource management stage. One unique feature of this framework is that it\njointly accounts for both wireless channel characteristics and economic factors\nduring both network formation and resource management. The multi-hop network\nformation stage is formulated as a one-to-many matching game which is solved\nusing a novel algorithm, that builds on the so-called deferred acceptance\nalgorithm and is shown to yield a stable and Pareto optimal multi-hop mmW\nbackhaul network. Then, a one-to-many matching game is formulated to enable\nproper resource allocation across the formed multi-hop network. This game is\nthen shown to exhibit peer effects and, as such, a novel algorithm is developed\nto find a stable and optimal resource management solution that can properly\ncope with these peer effects. Simulation results show that the proposed\nframework yields substantial gains, in terms of the average sum rate, reaching\nup to 27% and 54%, respectively, compared to a non-cooperative scheme in which\ninter-operator sharing is not allowed and a random allocation approach. The\nresults also show that our framework provides insights on how to manage pricing\nand the cost of the cooperative mmW backhaul network for the MNOs.\n", "title": "Inter-Operator Resource Management for Millimeter Wave, Multi-Hop Backhaul Networks" }
id: 18905 | status: Default | annotation: null | multi_label: true

{ "abstract": " We consider the problem of training generative models with a Generative\nAdversarial Network (GAN). Although GANs can accurately model complex\ndistributions, they are known to be difficult to train due to instabilities\ncaused by a difficult minimax optimization problem. In this paper, we view the\nproblem of training GANs as finding a mixed strategy in a zero-sum game.\nBuilding on ideas from online learning we propose a novel training method named\nChekhov GAN 1 . On the theory side, we show that our method provably converges\nto an equilibrium for semi-shallow GAN architectures, i.e. architectures where\nthe discriminator is a one layer network and the generator is arbitrary. On the\npractical side, we develop an efficient heuristic guided by our theoretical\nresults, which we apply to commonly used deep GAN architectures. On several\nreal world tasks our approach exhibits improved stability and performance\ncompared to standard GAN training.\n", "title": "An Online Learning Approach to Generative Adversarial Networks" }
id: 18906 | status: Default | annotation: null | multi_label: true

{ "abstract": " Sound event detection (SED) is typically posed as a supervised learning\nproblem requiring training data with strong temporal labels of sound events.\nHowever, the production of datasets with strong labels normally requires\nunaffordable labor cost. It limits the practical application of supervised SED\nmethods. The recent advances in SED approaches focuses on detecting sound\nevents by taking advantages of weakly labeled or unlabeled training data. In\nthis paper, we propose a joint framework to solve the SED task using\nlarge-scale unlabeled in-domain data. In particular, a state-of-the-art general\naudio tagging model is first employed to predict weak labels for unlabeled\ndata. On the other hand, a weakly supervised architecture based on the\nconvolutional recurrent neural network (CRNN) is developed to solve the strong\nannotations of sound events with the aid of the unlabeled data with predicted\nlabels. It is found that the SED performance generally increases as more\nunlabeled data is added into the training. To address the noisy label problem\nof unlabeled data, an ensemble strategy is applied to increase the system\nrobustness. The proposed system is evaluated on the SED dataset of DCASE 2018\nchallenge. It reaches a F1-score of 21.0%, resulting in an improvement of 10%\nover the baseline system.\n", "title": "Weakly supervised CRNN system for sound event detection with large-scale unlabeled in-domain data" }
id: 18907 | status: Default | annotation: null | multi_label: true

{ "abstract": " We study optimally doped\nBi$_{2}$Sr$_{2}$Ca$_{0.92}$Y$_{0.08}$Cu$_{2}$O$_{8+\\delta}$ (Bi2212) using\nangle-resolved two-photon photoemission spectroscopy. Three spectral features\nare resolved near 1.5, 2.7, and 3.6 eV above the Fermi level. By tuning the\nphoton energy, we determine that the 2.7-eV feature arises predominantly from\nunoccupied states. The 1.5- and 3.6-eV features reflect unoccupied states whose\nspectral intensities are strongly modulated by the corresponding occupied\nstates. These unoccupied states are consistent with the prediction from a\ncluster perturbation theory based on the single-band Hubbard model. Through\nthis comparison, a Coulomb interaction strength U of 2.7 eV is extracted. Our\nstudy complements equilibrium photoemission spectroscopy and provides a direct\nspectroscopic measurement of the unoccupied states in cuprates. The determined\nCoulomb U indicates that the charge-transfer gap of optimally doped Bi2212 is\n1.1 eV.\n", "title": "Revealing the Coulomb interaction strength in a cuprate superconductor" }
id: 18908 | status: Default | annotation: null | multi_label: true

{ "abstract": " Ordering dynamics of self-propelled particles in an inhomogeneous medium in\ntwo-dimensions is studied. We write coarse-grained hydrodynamic equations of\nmotion for coarse-grained density and velocity fields in the presence of an\nexternal random disorder field, which is quenched in time. The strength of\ninhomogeneity is tuned from zero disorder (clean system) to large disorder. In\nthe clean system, the velocity field grows algebraically as $L_{\\rm V} \\sim\nt^{0.5}$. The density field does not show clean power-law growth; however, it\nfollows $L_{\\rm \\rho} \\sim t^{0.8}$ approximately. In the inhomogeneous system,\nwe find a disorder dependent growth. For both the density and the velocity,\ngrowth slow down with increasing strength of disorder. The velocity shows a\ndisorder dependent power-law growth $L_{\\rm V}(t,\\Delta) \\sim t^{1/\\bar z_{\\rm\nV}(\\Delta)}$ for intermediate times. At late times, there is a crossover to\nlogarithmic growth $L_{\\rm V}(t,\\Delta) \\sim (\\ln t)^{1/\\varphi}$, where\n$\\varphi$ is a disorder independent exponent. Two-point correlation functions\nfor the velocity shows dynamical scaling, but the density does not.\n", "title": "Ordering dynamics of self-propelled particles in an inhomogeneous medium" }
id: 18909 | status: Default | annotation: null | multi_label: true

{ "abstract": " We discuss the amplification of loop corrections in quantum many-body systems\nthrough dynamical instabilities. As an example, we investigate both\nanalytically and numerically a two-component ultracold atom system in one\nspatial dimension. The model features a tachyonic instability, which\nincorporates characteristic aspects of the mechanisms for particle production\nin early-universe inflaton models. We establish a direct correspondence between\nmeasureable macroscopic growth rates for occupation numbers of the ultracold\nBose gas and the underlying microscopic processes in terms of Feynman loop\ndiagrams. We analyze several existing ultracold atom setups featuring dynamical\ninstabilities and propose optimized protocols for their experimental\nrealization. We demonstrate that relevant dynamical processes can be enhanced\nusing a seeding procedure for unstable modes and clarify the role of initial\nquantum fluctuations and the generation of a non-linear secondary stage for the\namplification of modes.\n", "title": "Inflationary preheating dynamics with ultracold atoms" }
id: 18910 | status: Default | annotation: null | multi_label: true

{ "abstract": " Reinforcement learning (RL), while often powerful, can suffer from slow\nlearning speeds, particularly in high dimensional spaces. The autonomous\ndecomposition of tasks and use of hierarchical methods hold the potential to\nsignificantly speed up learning in such domains. This paper proposes a novel\npractical method that can autonomously decompose tasks, by leveraging\nassociation rule mining, which discovers hidden relationship among entities in\ndata mining. We introduce a novel method called ARM-HSTRL (Association Rule\nMining to extract Hierarchical Structure of Tasks in Reinforcement Learning).\nIt extracts temporal and structural relationships of sub-goals in RL, and\nmulti-task RL. In particular,it finds sub-goals and relationship among them. It\nis shown the significant efficiency and performance of the proposed method in\ntwo main topics of RL.\n", "title": "Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning" }
id: 18911 | status: Default | annotation: null | multi_label: true

{ "abstract": " Motivation: Epigenetic heterogeneity within a tumour can play an important\nrole in tumour evolution and the emergence of resistance to treatment. It is\nincreasingly recognised that the study of DNA methylation (DNAm) patterns along\nthe genome -- so-called `epialleles' -- offers greater insight into epigenetic\ndynamics than conventional analyses which examine DNAm marks individually.\nResults: We have developed a Bayesian model to infer which epialleles are\npresent in multiple regions of the same tumour. We apply our method to reduced\nrepresentation bisulfite sequencing (RRBS) data from multiple regions of one\nlung cancer tumour and a matched normal sample. The model borrows information\nfrom all tumour regions to leverage greater statistical power. The total number\nof epialleles, the epiallele DNAm patterns, and a noise hyperparameter are all\nautomatically inferred from the data. Uncertainty as to which epiallele an\nobserved sequencing read originated from is explicitly incorporated by\nmarginalising over the appropriate posterior densities. The degree to which\ntumour samples are contaminated with normal tissue can be estimated and\ncorrected for. By tracing the distribution of epialleles throughout the tumour\nwe can infer the phylogenetic history of the tumour, identify epialleles that\ndiffer between normal and cancer tissue, and define a measure of global\nepigenetic disorder.\n", "title": "Quantification of tumour evolution and heterogeneity via Bayesian epiallele detection" }
id: 18912 | status: Validated | annotation: [ "Mathematics", "Statistics" ] | multi_label: true

{ "abstract": " Granger-causality in the frequency domain is an emerging tool to analyze the\ncausal relationship between two time series. We propose a bootstrap test on\nunconditional and conditional Granger-causality spectra, as well as on their\ndifference, to catch particularly prominent causality cycles in relative terms.\nIn particular, we consider a stochastic process derived applying independently\nthe stationary bootstrap to the original series. Our null hypothesis is that\neach causality or causality difference is equal to the median across\nfrequencies computed on that process. In this way, we are able to disambiguate\ncausalities which depart significantly from the median one obtained ignoring\nthe causality structure. Our test shows power one as the process tends to\nnon-stationarity, thus being more conservative than parametric alternatives. As\nan example, we infer about the relationship between money stock and GDP in the\nEuro Area via our approach, considering inflation, unemployment and interest\nrates as conditioning variables. We point out that during the period 1999-2017\nthe money stock aggregate M1 had a significant impact on economic output at all\nfrequencies, while the opposite relationship is significant only at high\nfrequencies.\n", "title": "A bootstrap test to detect prominent Granger-causalities across frequencies" }
id: 18913 | status: Default | annotation: null | multi_label: true

{ "abstract": " Asymmetric segregation of key proteins at cell division -- be it a beneficial\nor deleterious protein -- is ubiquitous in unicellular organisms and often\nconsidered as an evolved trait to increase fitness in a stressed environment.\nHere, we provide a general framework to describe the evolutionary origin of\nthis asymmetric segregation. We compute the population fitness as a function of\nthe protein segregation asymmetry $a$, and show that the value of $a$ which\noptimizes the population growth manifests a phase transition between symmetric\nand asymmetric partitioning phases. Surprisingly, the nature of phase\ntransition is different for the case of beneficial proteins as opposed to\nproteins which decrease the single-cell growth rate. Our study elucidates the\noptimization problem faced by evolution in the context of protein segregation,\nand motivates further investigation of asymmetric protein segregation in\nbiological systems.\n", "title": "Optimal segregation of proteins: phase transitions and symmetry breaking" }
id: 18914 | status: Default | annotation: null | multi_label: true

{ "abstract": " We consider an exchange who wishes to set suitable make-take fees to attract\nliquidity on its platform. Using a principal-agent approach, we are able to\ndescribe in quasi-explicit form the optimal contract to propose to a market\nmaker. This contract depends essentially on the market maker inventory\ntrajectory and on the volatility of the asset. We also provide the optimal\nquotes that should be displayed by the market maker. The simplicity of our\nformulas allows us to analyze in details the effects of optimal contracting\nwith an exchange, compared to a situation without contract. We show in\nparticular that it leads to higher quality liquidity and lower trading costs\nfor investors.\n", "title": "Optimal make-take fees for market making regulation" }
id: 18915 | status: Validated | annotation: [ "Quantitative Finance" ] | multi_label: true

{ "abstract": " Spontaneous symmetry breaking (SSB) is an important phenomenon observed in\nvarious fields including physics and biology. In this connection, we here show\nthat the trade-off between attractive and repulsive couplings can induce\nspontaneous symmetry breaking in a homogeneous system of coupled oscillators.\nWith a simple model of a system of two coupled Stuart-Landau oscillators, we\ndemonstrate how the tendency of attractive coupling in inducing in-phase\nsynchronized (IPS) oscillations and the tendency of repulsive coupling in\ninducing out-of-phase synchronized (OPS) oscillations compete with each other\nand give rise to symmetry breaking oscillatory (SBO) states and interesting\nmultistabilities. Further, we provide explicit expressions for synchronized and\nanti-synchronized oscillatory states as well as the so called oscillation death\n(OD) state and study their stability. If the Hopf bifurcation parameter\n(${\\lambda}$) is greater than the natural frequency ($\\omega$) of the system,\nthe attractive coupling favours the emergence of an anti-symmetric OD state via\na Hopf bifurcation whereas the repulsive coupling favours the emergence of a\nsimilar state through a saddle-node bifurcation. We show that an increase in\nthe repulsive coupling not only destabilizes the IPS state but also facilitates\nthe re-entrance of the IPS state.\n", "title": "Spontaneous symmetry breaking due to the trade-off between attractive and repulsive couplings" }
id: 18916 | status: Default | annotation: null | multi_label: true

{ "abstract": " Evolutionary algorithms have recently been used to create a wide range of\nartistic work. In this paper, we propose a new approach for the composition of\nnew images from existing ones, that retain some salient features of the\noriginal images. We introduce evolutionary algorithms that create new images\nbased on a fitness function that incorporates feature covariance matrices\nassociated with different parts of the images. This approach is very flexible\nin that it can work with a wide range of features and enables targeting\nspecific regions in the images. For the creation of the new images, we propose\na population-based evolutionary algorithm with mutation and crossover operators\nbased on random walks. Our experimental results reveal a spectrum of\naesthetically pleasing images that can be obtained with the aid of our\nevolutionary process.\n", "title": "Evolutionary Image Composition Using Feature Covariance Matrices" }
id: 18917 | status: Default | annotation: null | multi_label: true

{ "abstract": " In this paper, we prove a sharp limit on the community detection problem with\ncolored edges. We assume two equal-sized communities and there are $m$\ndifferent types of edges. If two vertices are in the same community, the\ndistribution of edges follows $p_i=\\alpha_i\\log{n}/n$ for $1\\leq i \\leq m$,\notherwise the distribution of edges is $q_i=\\beta_i\\log{n}/n$ for $1\\leq i \\leq\nm$, where $\\alpha_i$ and $\\beta_i$ are positive constants and $n$ is the total\nnumber of vertices. Under these assumptions, a fundamental limit on community\ndetection is characterized using the Hellinger distance between the two\ndistributions. If $\\sum_{i=1}^{m} {(\\sqrt{\\alpha_i} - \\sqrt{\\beta_i})}^2 >2$,\nthen the community detection via maximum likelihood (ML) estimator is possible\nwith high probability. If $\\sum_{i=1}^m {(\\sqrt{\\alpha_i} - \\sqrt{\\beta_i})}^2\n< 2$, the probability that the ML estimator fails to detect the communities\ndoes not go to zero.\n", "title": "Community Detection with Colored Edges" }
id: 18918 | status: Default | annotation: null | multi_label: true

{ "abstract": " Nonlocal neural networks have been proposed and shown to be effective in\nseveral computer vision tasks, where the nonlocal operations can directly\ncapture long-range dependencies in the feature space. In this paper, we study\nthe nature of diffusion and damping effect of nonlocal networks by doing\nspectrum analysis on the weight matrices of the well-trained networks, and then\npropose a new formulation of the nonlocal block. The new block not only learns\nthe nonlocal interactions but also has stable dynamics, thus allowing deeper\nnonlocal structures. Moreover, we interpret our formulation from the general\nnonlocal modeling perspective, where we make connections between the proposed\nnonlocal network and other nonlocal models, such as nonlocal diffusion process\nand Markov jump process.\n", "title": "Nonlocal Neural Networks, Nonlocal Diffusion and Nonlocal Modeling" }
id: 18919 | status: Default | annotation: null | multi_label: true

{ "abstract": " The main aim of the present paper is to represent an exact and simple proof\nfor FLT by using properties of the algebra identities and linear algebra.\n", "title": "Proof of FLT by Algebra Identities and Linear Algebra" }
id: 18920 | status: Validated | annotation: [ "Mathematics" ] | multi_label: true

{ "abstract": " The nature of aerosols in hot exoplanet atmospheres is one of the primary\nvexing questions facing the exoplanet field. The complex chemistry, multiple\nformation pathways, and lack of easily identifiable spectral features\nassociated with aerosols make it especially challenging to constrain their key\nproperties. We propose a transmission spectroscopy technique to identify the\nprimary aerosol formation mechanism for the most highly irradiated hot Jupiters\n(HIHJs). The technique is based on the expectation that the two key types of\naerosols -- photochemically generated hazes and equilibrium condensate clouds\n-- are expected to form and persist in different regions of a highly irradiated\nplanet's atmosphere. Haze can only be produced on the permanent daysides of\ntidally-locked hot Jupiters, and will be carried downwind by atmospheric\ndynamics to the evening terminator (seen as the trailing limb during transit).\nClouds can only form in cooler regions on the night side and morning terminator\nof HIHJs (seen as the leading limb during transit). Because opposite limbs are\nexpected to be impacted by different types of aerosols, ingress and egress\nspectra, which primarily probe opposing sides of the planet, will reveal the\ndominant aerosol formation mechanism. We show that the benchmark HIHJ,\nWASP-121b, has a transmission spectrum consistent with partial aerosol coverage\nand that ingress-egress spectroscopy would constrain the location and formation\nmechanism of those aerosols. In general, using this diagnostic we find that\nobservations with JWST and potentially with HST should be able to distinguish\nbetween clouds and haze for currently known HIHJs.\n", "title": "An Observational Diagnostic for Distinguishing Between Clouds and Haze in Hot Exoplanet Atmospheres" }
id: 18921 | status: Default | annotation: null | multi_label: true

{ "abstract": " We introduce a family of mathematical objects called $\\mathcal{P}$-schemes,\nwhere $\\mathcal{P}$ is a poset of subgroups of a finite group $G$. A\n$\\mathcal{P}$-scheme is a collection of partitions of the right coset spaces\n$H\\backslash G$, indexed by $H\\in\\mathcal{P}$, that satisfies a list of axioms.\nThese objects generalize the classical notion of association schemes as well as\nthe notion of $m$-schemes (Ivanyos et al. 2009).\nBased on $\\mathcal{P}$-schemes, we develop a unifying framework for the\nproblem of deterministic factoring of univariate polynomials over finite fields\nunder the generalized Riemann hypothesis (GRH).\n", "title": "$\\mathcal{P}$-schemes and Deterministic Polynomial Factoring over Finite Fields" }
id: 18922 | status: Default | annotation: null | multi_label: true

{ "abstract": " A considerable amount of machine learning algorithms take instance-feature\nmatrices as their inputs. As such, they cannot directly analyze time series\ndata due to its temporal nature, usually unequal lengths, and complex\nproperties. This is a great pity since many of these algorithms are effective,\nrobust, efficient, and easy to use. In this paper, we bridge this gap by\nproposing an efficient representation learning framework that is able to\nconvert a set of time series with equal or unequal lengths to a matrix format.\nIn particular, we guarantee that the pairwise similarities between time series\nare well preserved after the transformation. The learned feature representation\nis particularly suitable to the class of learning problems that are sensitive\nto data similarities. Given a set of $n$ time series, we first construct an\n$n\\times n$ partially observed similarity matrix by randomly sampling $O(n \\log\nn)$ pairs of time series and computing their pairwise similarities. We then\npropose an extremely efficient algorithm that solves a highly non-convex and\nNP-hard problem to learn new features based on the partially observed\nsimilarity matrix. We use the learned features to conduct experiments on both\ndata classification and clustering tasks. Our extensive experimental results\ndemonstrate that the proposed framework is both effective and efficient.\n", "title": "Similarity Preserving Representation Learning for Time Series Analysis" }
id: 18923 | status: Default | annotation: null | multi_label: true

{ "abstract": " We identify the \"organization\" of a human social group as the communication\nnetwork(s) within that group. We then introduce three theoretical approaches to\nanalyzing what determines the structures of human organizations. All three\napproaches adopt a group-selection perspective, so that the group's network\nstructure is (approximately) optimal, given the information-processing\nlimitations of agents within the social group, and the exogenous welfare\nfunction of the overall group. In the first approach we use a new sub-field of\ntelecommunications theory called network coding, and focus on a welfare\nfunction that involves the ability of the organization to convey information\namong the agents. In the second approach we focus on a scenario where agents\nwithin the organization must allocate their future communication resources when\nthe state of the future environment is uncertain. We show how this formulation\ncan be solved with a linear program. In the third approach, we introduce an\ninformation synthesis problem in which agents within an organization receive\ninformation from various sources and must decide how to transform such\ninformation and transmit the results to other agents in the organization. We\npropose leveraging the computational power of neural networks to solve such\nproblems. These three approaches formalize and synthesize work in fields\nincluding anthropology, archeology, economics and psychology that deal with\norganization structure, theory of the firm, span of control and cognitive\nlimits on communication.\n", "title": "Modeling Social Organizations as Communication Networks" }
id: 18924 | status: Default | annotation: null | multi_label: true

{ "abstract": " We show that for any solvable Lie group of real type, any homogeneous Ricci\nflow solution converges in Cheeger-Gromov topology to a unique non-flat\nsolvsoliton, which is independent of the initial left-invariant metric. As an\napplication, we obtain results on the isometry groups of non-flat solvsoliton\nmetrics and Einstein solvmanifolds.\n", "title": "The Ricci flow on solvmanifolds of real type" }
id: 18925 | status: Default | annotation: null | multi_label: true

{ "abstract": " Recently, it has become feasible to generate large-scale, multi-tissue gene\nexpression data, where expression profiles are obtained from multiple tissues\nor organs sampled from dozens to hundreds of individuals. When traditional\nclustering methods are applied to this type of data, important information is\nlost, because they either require all tissues to be analyzed independently,\nignoring dependencies and similarities between tissues, or to merge tissues in\na single, monolithic dataset, ignoring individual characteristics of tissues.\nWe developed a Bayesian model-based multi-tissue clustering algorithm, revamp,\nwhich can incorporate prior information on physiological tissue similarity, and\nwhich results in a set of clusters, each consisting of a core set of genes\nconserved across tissues as well as differential sets of genes specific to one\nor more subsets of tissues. Using data from seven vascular and metabolic\ntissues from over 100 individuals in the STockholm Atherosclerosis Gene\nExpression (STAGE) study, we demonstrate that multi-tissue clusters inferred by\nrevamp are more enriched for tissue-dependent protein-protein interactions\ncompared to alternative approaches. We further demonstrate that revamp results\nin easily interpretable multi-tissue gene expression associations to key\ncoronary artery disease processes and clinical phenotypes in the STAGE\nindividuals. Revamp is implemented in the Lemon-Tree software, available at\nthis https URL\n", "title": "Model-based clustering of multi-tissue gene expression data" }
id: 18926 | status: Default | annotation: null | multi_label: true

{ "abstract": " The Zap Q-learning algorithm introduced in this paper is an improvement of\nWatkins' original algorithm and recent competitors in several respects. It is a\nmatrix-gain algorithm designed so that its asymptotic variance is optimal.\nMoreover, an ODE analysis suggests that the transient behavior is a close match\nto a deterministic Newton-Raphson implementation. This is made possible by a\ntwo time-scale update equation for the matrix gain sequence.\nThe analysis suggests that the approach will lead to stable and efficient\ncomputation even for non-ideal parameterized settings. Numerical experiments\nconfirm the quick convergence, even in such non-ideal cases.\nA secondary goal of this paper is tutorial. The first half of the paper\ncontains a survey on reinforcement learning algorithms, with a focus on minimum\nvariance algorithms.\n", "title": "Fastest Convergence for Q-learning" }
id: 18927 | status: Validated | annotation: [ "Computer Science", "Mathematics" ] | multi_label: true

{ "abstract": " In recent years, the rapidly increasing amounts of data created and processed\nthrough the internet resulted in distributed storage systems employing erasure\ncoding based schemes. Aiming to balance the tradeoff between data recovery for\ncorrelated failures and efficient encoding and decoding, distributed storage\nsystems employing maximally recoverable codes came up. Unifying a number of\ntopologies considered both in theory and practice, Gopalan \\cite{Gopalan2017}\ninitiated the study of maximally recoverable codes for grid-like topologies.\nIn this paper, we focus on the maximally recoverable codes that instantiate\ngrid-like topologies $T_{m\\times n}(1,b,0)$. To characterize the property of\ncodes for these topologies, we introduce the notion of \\emph{pseudo-parity\ncheck matrix}. Then, using the hypergraph independent set approach, we\nestablish the first polynomial upper bound on the field size needed for\nachieving the maximal recoverability in topologies $T_{m\\times n}(1,b,0)$, when\n$n$ is large enough. And we further improve this general upper bound for\ntopologies $T_{4\\times n}(1,2,0)$ and $T_{3\\times n}(1,3,0)$. By relating the\nproblem to generalized \\emph{Sidon sets} in $\\mathbb{F}_q$, we also obtain\nnon-trivial lower bounds on the field size for maximally recoverable codes that\ninstantiate topologies $T_{4\\times n}(1,2,0)$ and $T_{3\\times n}(1,3,0)$.\n", "title": "New Bounds on the Field Size for Maximally Recoverable Codes Instantiating Grid-like Topologies" }
id: 18928 | status: Validated | annotation: [ "Computer Science" ] | multi_label: true

{ "abstract": " In this work, we address the problem of disentanglement of factors that\ngenerate a given data into those that are correlated with the labeling and\nthose that are not. Our solution is simpler than previous solutions and employs\nadversarial training in a straightforward manner. We demonstrate the new method\non visual datasets as well as on financial data. In order to evaluate the\nlatter, we developed a hypothetical trading strategy whose performance is\naffected by the performance of the disentanglement, namely, it trades better\nwhen the factors are better separated.\n", "title": "Two-Step Disentanglement for Financial Data" }
id: 18929 | status: Default | annotation: null | multi_label: true

{ "abstract": " In the paper \"Optimal control of a Vlasov-Poisson plasma by an external\nmagnetic field - The basics for variational calculus\" [arXiv:1708.02464] we\nhave already introduced a set of admissible magnetic fields and we have proved\nthat each of those fields induces a unique strong solution of the\nVlasov-Poisson system. We have also established that the field-state operator\nthat maps any admissible field onto its corresponding solution is continuous\nand weakly compact. In this paper we will show that this operator is also\nFréchet differentiable and we will continue to analyze the optimal control\nproblem that was introduced in [arXiv:1708.02464]. More precisely, we will\nestablish necessary and sufficient conditions for local optimality and we will\nshow that an optimal solution is unique under certain conditions.\n", "title": "Optimal control of a Vlasov-Poisson plasma by an external magnetic field - Analysis of a tracking type optimal control problem" }
id: 18930 | status: Validated | annotation: [ "Mathematics" ] | multi_label: true

{ "abstract": " We observe that a certain kind of algebraic proof - which covers essentially\nall known algebraic circuit lower bounds to date - cannot be used to prove\nlower bounds against VP if and only if what we call succinct hitting sets exist\nfor VP. This is analogous to the Razborov-Rudich natural proofs barrier in\nBoolean circuit complexity, in that we rule out a large class of lower bound\ntechniques under a derandomization assumption. We also discuss connections\nbetween this algebraic natural proofs barrier, geometric complexity theory, and\n(algebraic) proof complexity.\n", "title": "Towards an algebraic natural proofs barrier via polynomial identity testing" }
id: 18931 | status: Default | annotation: null | multi_label: true

{ "abstract": " In recent years, a great deal of interest has focused on conducting inference\non the parameters in a linear model in the high-dimensional setting. In this\npaper, we consider a simple and very naïve two-step procedure for this\ntask, in which we (i) fit a lasso model in order to obtain a subset of the\nvariables; and (ii) fit a least squares model on the lasso-selected set.\nConventional statistical wisdom tells us that we cannot make use of the\nstandard statistical inference tools for the resulting least squares model\n(such as confidence intervals and $p$-values), since we peeked at the data\ntwice: once in running the lasso, and again in fitting the least squares model.\nHowever, in this paper, we show that under a certain set of assumptions, with\nhigh probability, the set of variables selected by the lasso is deterministic.\nConsequently, the naïve two-step approach can yield confidence intervals\nthat have asymptotically correct coverage, as well as p-values with proper\nType-I error control. Furthermore, this two-step approach unifies two existing\ncamps of work on high-dimensional inference: one camp has focused on inference\nbased on a sub-model selected by the lasso, and the other has focused on\ninference using a debiased version of the lasso estimator.\n", "title": "In Defense of the Indefensible: A Very Naive Approach to High-Dimensional Inference" }
id: 18932 | status: Default | annotation: null | multi_label: true

{ "abstract": " Discussions about the choice of a tree hash mode of operation for a\nstandardization have recently been undertaken. It appears that a single tree\nmode cannot address adequately all possible uses and specifications of a\nsystem. In this paper, we review the tree modes which have been proposed, we\ndiscuss their problems and propose remedies. We make the reasonable assumption\nthat communicating systems have different specifications and that software\napplications are of different types (securing stored content or live-streamed\ncontent). Finally, we propose new modes of operation that address the resource\nusage problem for the three most representative categories of devices and we\nanalyse their asymptotic behavior.\n", "title": "Asymptotic Analysis of Plausible Tree Hash Modes for SHA-3" }
id: 18933 | status: Default | annotation: null | multi_label: true

{ "abstract": " In this paper we present a neurally plausible model of robot reaching\ninspired by human infant reaching that is based on embodied artificial\nintelligence, which emphasizes the importance of the sensory-motor interaction\nof an agent and the world. This model encompasses both learning sensory-motor\ncorrelations through motor babbling and also arm motion planning using\nspreading activation. This model is organized in three layers of neural maps\nwith parallel structures representing the same sensory-motor space. The motor\nbabbling period shapes the structure of the three neural maps as well as the\nconnections within and between them. We describe an implementation of this\nmodel and an investigation of this implementation using a simple reaching task\non a humanoid robot. The robot has learned successfully to plan reaching\nmotions from a test set with high accuracy and smoothness.\n", "title": "Neurally Plausible Model of Robot Reaching Inspired by Infant Motor Babbling" }
id: 18934 | status: Default | annotation: null | multi_label: true

{ "abstract": " Advances in artificial intelligence (AI) will transform modern life by\nreshaping transportation, health, science, finance, and the military. To adapt\npublic policy, we need to better anticipate these advances. Here we report the\nresults from a large survey of machine learning researchers on their beliefs\nabout progress in AI. Researchers predict AI will outperform humans in many\nactivities in the next ten years, such as translating languages (by 2024),\nwriting high-school essays (by 2026), driving a truck (by 2027), working in\nretail (by 2031), writing a bestselling book (by 2049), and working as a\nsurgeon (by 2053). Researchers believe there is a 50% chance of AI\noutperforming humans in all tasks in 45 years and of automating all human jobs\nin 120 years, with Asian respondents expecting these dates much sooner than\nNorth Americans. These results will inform discussion amongst researchers and\npolicymakers about anticipating and managing trends in AI.\n", "title": "When Will AI Exceed Human Performance? Evidence from AI Experts" }
id: 18935 | status: Default | annotation: null | multi_label: true

{ "abstract": " Recurrent Neural Networks (RNNs) are a key technology for emerging\napplications such as automatic speech recognition, machine translation or image\ndescription. Long Short Term Memory (LSTM) networks are the most successful RNN\nimplementation, as they can learn long term dependencies to achieve high\naccuracy. Unfortunately, the recurrent nature of LSTM networks significantly\nconstrains the amount of parallelism and, hence, multicore CPUs and many-core\nGPUs exhibit poor efficiency for RNN inference. In this paper, we present\nE-PUR, an energy-efficient processing unit tailored to the requirements of LSTM\ncomputation. The main goal of E-PUR is to support large recurrent neural\nnetworks for low-power mobile devices. E-PUR provides an efficient hardware\nimplementation of LSTM networks that is flexible to support diverse\napplications. One of its main novelties is a technique that we call Maximizing\nWeight Locality (MWL), which improves the temporal locality of the memory\naccesses for fetching the synaptic weights, reducing the memory requirements by\na large extent. Our experimental results show that E-PUR achieves real-time\nperformance for different LSTM networks, while reducing energy consumption by\norders of magnitude with respect to general-purpose processors and GPUs, and it\nrequires a very small chip area. Compared to a modern mobile SoC, an NVIDIA\nTegra X1, E-PUR provides an average energy reduction of 92x.\n", "title": "E-PUR: An Energy-Efficient Processing Unit for Recurrent Neural Networks" }
id: 18936 | status: Default | annotation: null | multi_label: true

{ "abstract": " Mathematical modelling has shown that activity of the Geminid meteor shower\nshould rise with time, and that was confirmed by analysis of visual\nobservations 1985--2016. We do not expect any outburst activity of the Geminid\nshower in 2017, even though the asteroid (3200) Phaethon has close approach to\nEarth in December of 2017. A small probability to observe dust ejected at\nperihelia 2009--2016 still exists.\n", "title": "Increasing Geminid meteor shower activity" }
id: 18937 | status: Default | annotation: null | multi_label: true

{ "abstract": " Lederer and van de Geer (2013) introduced a new Orlicz norm, the\nBernstein-Orlicz norm, which is connected to Bernstein type inequalities. Here\nwe introduce another Orlicz norm, the Bennett-Orlicz norm, which is connected\nto Bennett type inequalities. The new Bennett-Orlicz norm yields inequalities\nfor expectations of maxima which are potentially somewhat tighter than those\nresulting from the Bernstein-Orlicz norm when they are both applicable. We\ndiscuss cross connections between these norms, exponential inequalities of the\nBernstein, Bennett, and Prokhorov types, and make comparisons with results of\nTalagrand (1989, 1994), and Boucheron, Lugosi, and Massart (2013).\n", "title": "The Bennett-Orlicz norm" }
id: 18938 | status: Default | annotation: null | multi_label: true

{ "abstract": " Justification logics are modal-like logics with the additional capability of\nrecording the reason, or justification, for modalities in syntactic structures,\ncalled justification terms. Justification logics can be seen as explicit\ncounterparts to modal logics. The behavior and interaction of agents in\ndistributed system is often modeled using logics of knowledge and time. In this\npaper, we sketch some preliminary ideas on how the modal knowledge part of such\nlogics of knowledge and time could be replaced with an appropriate\njustification logic.\n", "title": "Temporal Justification Logic" }
id: 18939 | status: Default | annotation: null | multi_label: true

{ "abstract": " In the Number On the Forehead (NOF) multiparty communication model, $k$\nplayers want to evaluate a function $F : X_1 \\times\\cdots\\times X_k\\rightarrow\nY$ on some input $(x_1,\\dots,x_k)$ by broadcasting bits according to a\npredetermined protocol. The input is distributed in such a way that each player\n$i$ sees all of it except $x_i$. In the simultaneous setting, the players\ncannot speak to each other but instead send information to a referee. The\nreferee does not know the players' input, and cannot give any information back.\nAt the end, the referee must be able to recover $F(x_1,\\dots,x_k)$ from what\nshe obtained.\nA central open question, called the $\\log n$ barrier, is to find a function\nwhich is hard to compute for $polylog(n)$ or more players (where the $x_i$'s\nhave size $poly(n)$) in the simultaneous NOF model. This has important\napplications in circuit complexity, as it could help to separate $ACC^0$ from\nother complexity classes. One of the candidates belongs to the family of\ncomposed functions. The input to these functions is represented by a $k\\times\n(t\\cdot n)$ boolean matrix $M$, whose row $i$ is the input $x_i$ and $t$ is a\nblock-width parameter. A symmetric composed function acting on $M$ is specified\nby two symmetric $n$- and $kt$-variate functions $f$ and $g$, that output\n$f\\circ g(M)=f(g(B_1),\\dots,g(B_n))$ where $B_j$ is the $j$-th block of width\n$t$ of $M$. As the majority function $MAJ$ is conjectured to be outside of\n$ACC^0$, Babai et. al. suggested to study $MAJ\\circ MAJ_t$, with $t$ large\nenough.\nSo far, it was only known that $t=1$ is not enough for $MAJ\\circ MAJ_t$ to\nbreak the $\\log n$ barrier in the simultaneous deterministic NOF model. In this\npaper, we extend this result to any constant block-width $t>1$, by giving a\nprotocol of cost $2^{O(2^t)}\\log^{2^{t+1}}(n)$ for any symmetric composed\nfunction when there are $2^{\\Omega(2^t)}\\log n$ players.\n", "title": "Simultaneous Multiparty Communication Complexity of Composed Functions" }
id: 18940 | status: Default | annotation: null | multi_label: true

{ "abstract": " A key challenge in online learning is that classical algorithms can be slow\nto adapt to changing environments. Recent studies have proposed \"meta\"\nalgorithms that convert any online learning algorithm to one that is adaptive\nto changing environments, where the adaptivity is analyzed in a quantity called\nthe strongly-adaptive regret. This paper describes a new meta algorithm that\nhas a strongly-adaptive regret bound that is a factor of $\\sqrt{\\log(T)}$\nbetter than other algorithms with the same time complexity, where $T$ is the\ntime horizon. We also extend our algorithm to achieve a first-order (i.e.,\ndependent on the observed losses) strongly-adaptive regret bound for the first\ntime, to our knowledge. At its heart is a new parameter-free algorithm for the\nlearning with expert advice (LEA) problem in which experts sometimes do not\noutput advice for consecutive time steps (i.e., \\emph{sleeping} experts). This\nalgorithm is derived by a reduction from optimal algorithms for the so-called\ncoin betting problem. Empirical results show that our algorithm outperforms\nstate-of-the-art methods in both learning with expert advice and metric\nlearning scenarios.\n", "title": "Online Learning for Changing Environments using Coin Betting" }
id: 18941 | status: Default | annotation: null | multi_label: true

{ "abstract": " Let $\\phi$ be a spherical Hecke-Maass cusp form on the non-compact space\n$\\mathrm{PGL}_3(\\mathbb{Z})\\backslash\\mathrm{PGL}_3(\\mathbb{R})$. We establish\nvarious pointwise upper bounds for $\\phi$ in terms of its Laplace eigenvalue\n$\\lambda_\\phi$. These imply, for $\\phi$ arithmetically normalized and tempered\nat the archimedean place, the bound $\\|\\phi\\|_\\infty\\ll_\\epsilon\n\\lambda_{\\phi}^{39/40+\\epsilon}$ for the global sup-norm (without restriction\nto a compact subset). On the way, we derive a new uniform upper bound for the\n$\\mathrm{GL}_3$ Jacquet-Whittaker function.\n", "title": "On the global sup-norm of GL(3) cusp forms" }
id: 18942 | status: Default | annotation: null | multi_label: true

{ "abstract": " Newton's mechanical revolution unifies the motion of planets in the sky and\nfalling of apple on earth. Maxwell's electromagnetic revolution unifies\nelectricity, magnetism, and light. Einstein's relativity revolution unifies\nspace with time, and gravity with space-time distortion. The quantum revolution\nunifies particle with waves, and energy with frequency. Each of those\nrevolution changes our world view. In this article, we will describe a\nrevolution that is happening now: the second quantum revolution which unifies\nmatter/space with information. In other words, the new world view suggests that\nelementary particles (the bosonic force particles and fermionic matter\nparticles) all originated from quantum information (qubits): they are\ncollective excitations of an entangled qubit ocean that corresponds to our\nspace. The beautiful geometric Yang-Mills gauge theory and the strange Fermi\nstatistics of matter particles now have a common algebraic quantum\ninformational origin.\n", "title": "Four revolutions in physics and the second quantum revolution -- a unification of force and matter by quantum information" }
id: 18943 | status: Default | annotation: null | multi_label: true

{ "abstract": " Data-driven predictive analytics are in use today across a number of\nindustrial applications, but further integration is hindered by the requirement\nof similarity among model training and test data distributions. This paper\naddresses the need of learning from possibly nonstationary data streams, or\nunder concept drift, a commonly seen phenomenon in practical applications. A\nsimple dual-learner ensemble strategy, alternating learners framework, is\nproposed. A long-memory model learns stable concepts from a long relevant time\nwindow, while a short-memory model learns transient concepts from a small\nrecent window. The difference in prediction performance of these two models is\nmonitored and induces an alternating policy to select, update and reset the two\nmodels. The method features an online updating mechanism to maintain the\nensemble accuracy, and a concept-dependent trigger to focus on relevant data.\nThrough empirical studies the method demonstrates effective tracking and\nprediction when the steaming data carry abrupt and/or gradual changes.\n", "title": "Concept Drift Learning with Alternating Learners" }
id: 18944 | status: Default | annotation: null | multi_label: true

{ "abstract": " We give an new proof of the well-known competitive exclusion principle in the\nchemostat model with $n$ species competing for a single resource, for any set\nof increasing growth functions. The proof is constructed by induction on the\nnumber of the species, after being ordered. It uses elementary analysis and\ncomparisons of solutions of ordinary differential equations.\n", "title": "A new proof of the competitive exclusion principle in the chemostat" }
id: 18945 | status: Default | annotation: null | multi_label: true

{ "abstract": " Random Differential Equations provide a natural extension of Ordinary\nDifferential Equations to the stochastic setting. We show how, and under which\nconditions, every equilibrium state of a Random Differential Equation (RDE) can\nbe described by a Structural Causal Model (SCM), while pertaining the causal\nsemantics. This provides an SCM that captures the stochastic and causal\nbehavior of the RDE, which can model both cycles and confounders. This enables\nthe study of the equilibrium states of the RDE by applying the theory and\nstatistical tools available for SCMs, for example, marginalizations and Markov\nproperties, as we illustrate by means of an example. Our work thus provides a\ndirect connection between two fields that so far have been developing in\nisolation.\n", "title": "From Random Differential Equations to Structural Causal Models: the stochastic case" }
id: 18946 | status: Default | annotation: null | multi_label: true

{ "abstract": " Given a set of attributed subgraphs known to be from different classes, how\ncan we discover their differences? There are many cases where collections of\nsubgraphs may be contrasted against each other. For example, they may be\nassigned ground truth labels (spam/not-spam), or it may be desired to directly\ncompare the biological networks of different species or compound networks of\ndifferent chemicals.\nIn this work we introduce the problem of characterizing the differences\nbetween attributed subgraphs that belong to different classes. We define this\ncharacterization problem as one of partitioning the attributes into as many\ngroups as the number of classes, while maximizing the total attributed quality\nscore of all the given subgraphs.\nWe show that our attribute-to-class assignment problem is NP-hard and an\noptimal $(1 - 1/e)$-approximation algorithm exists. We also propose two\ndifferent faster heuristics that are linear-time in the number of attributes\nand subgraphs. Unlike previous work where only attributes were taken into\naccount for characterization, here we exploit both attributes and social ties\n(i.e. graph structure).\nThrough extensive experiments, we compare our proposed algorithms, show\nfindings that agree with human intuition on datasets from Amazon co-purchases,\nCongressional bill sponsorships, and DBLP co-authorships. We also show that our\napproach of characterizing subgraphs is better suited for sense-making than\ndiscriminating classification approaches.\n", "title": "Ties That Bind - Characterizing Classes by Attributes and Social Ties" }
id: 18947 | status: Default | annotation: null | multi_label: true

{ "abstract": " We present a new technique to probe the central dark matter (DM) density\nprofile of galaxies that harnesses both the survival and observed properties of\nstar clusters. As a first application, we apply our method to the `ultra-faint'\ndwarf Eridanus II (Eri II) that has a lone star cluster ~45 pc from its centre.\nUsing a grid of collisional $N$-body simulations, incorporating the effects of\nstellar evolution, external tides and dynamical friction, we show that a DM\ncore for Eri II naturally reproduces the size and the projected position of its\nstar cluster. By contrast, a dense cusped galaxy requires the cluster to lie\nimplausibly far from the centre of Eri II (>1 kpc), with a high inclination\norbit that must be observed at a particular orbital phase. Our results,\ntherefore, favour a dark matter core. This implies that either a cold DM cusp\nwas `heated up' at the centre of Eri II by bursty star formation, or we are\nseeing an evidence for physics beyond cold DM.\n", "title": "Probing dark matter with star clusters: a dark matter core in the ultra-faint dwarf Eridanus II" }
id: 18948 | status: Validated | annotation: [ "Physics" ] | multi_label: true

{ "abstract": " We propose the ambiguity problem for the foreground object segmentation task\nand motivate the importance of estimating and accounting for this ambiguity\nwhen designing vision systems. Specifically, we distinguish between images\nwhich lead multiple annotators to segment different foreground objects\n(ambiguous) versus minor inter-annotator differences of the same object. Taking\nimages from eight widely used datasets, we crowdsource labeling the images as\n\"ambiguous\" or \"not ambiguous\" to segment in order to construct a new dataset\nwe call STATIC. Using STATIC, we develop a system that automatically predicts\nwhich images are ambiguous. Experiments demonstrate the advantage of our\nprediction system over existing saliency-based methods on images from vision\nbenchmarks and images taken by blind people who are trying to recognize objects\nin their environment. Finally, we introduce a crowdsourcing system to achieve\ncost savings for collecting the diversity of all valid \"ground truth\"\nforeground object segmentations by collecting extra segmentations only when\nambiguity is expected. Experiments show our system eliminates up to 47% of\nhuman effort compared to existing crowdsourcing methods with no loss in\ncapturing the diversity of ground truths.\n", "title": "Predicting Foreground Object Ambiguity and Efficiently Crowdsourcing the Segmentation(s)" }
null
null
null
null
true
null
18949
null
Default
null
null
null
{ "abstract": " Generalized polyhedral convex sets, generalized polyhedral convex functions\non locally convex Hausdorff topological vector spaces, and the related\nconstructions such as sum of sets, sum of functions, directional derivative,\ninfimal convolution, normal cone, conjugate function, subdifferential, are\nstudied thoroughly in this paper. Among other things, we show how a generalized\npolyhedral convex set can be characterized via the finiteness of the number of\nits faces. In addition, it is proved that the infimal convolution of a\ngeneralized polyhedral convex function and a polyhedral convex function is a\npolyhedral convex function. The obtained results can be applied to scalar\noptimization problems described by generalized polyhedral convex sets and\ngeneralized polyhedral convex functions.\n", "title": "On Some Generalized Polyhedral Convex Constructions" }
null
null
[ "Mathematics" ]
null
true
null
18950
null
Validated
null
null
null
{ "abstract": " Proteins are commonly used by biochemical industry for numerous processes.\nRefining these proteins' properties via mutations causes stability effects as\nwell. Accurate computational method to predict how mutations affect protein\nstability are necessary to facilitate efficient protein design. However,\naccuracy of predictive models is ultimately constrained by the limited\navailability of experimental data. We have developed mGPfusion, a novel\nGaussian process (GP) method for predicting protein's stability changes upon\nsingle and multiple mutations. This method complements the limited experimental\ndata with large amounts of molecular simulation data. We introduce a Bayesian\ndata fusion model that re-calibrates the experimental and in silico data\nsources and then learns a predictive GP model from the combined data. Our\nprotein-specific model requires experimental data only regarding the protein of\ninterest and performs well even with few experimental measurements. The\nmGPfusion models proteins by contact maps and infers the stability effects\ncaused by mutations with a mixture of graph kernels. Our results show that\nmGPfusion outperforms state-of-the-art methods in predicting protein stability\non a dataset of 15 different proteins and that incorporating molecular\nsimulation data improves the model learning and prediction accuracy.\n", "title": "mGPfusion: Predicting protein stability changes with Gaussian process kernel learning and data fusion" }
null
null
null
null
true
null
18951
null
Default
null
null
null
{ "abstract": " In order to handle intense time pressure and survive in dynamic market,\nsoftware startups have to make crucial decisions constantly on whether to\nchange directions or stay on chosen courses, or in the terms of Lean Startup,\nto pivot or to persevere. The existing research and knowledge on software\nstartup pivots are very limited. In this study, we focused on understanding the\npivoting processes of software startups, and identified the triggering factors\nand pivot types. To achieve this, we employed a multiple case study approach,\nand analyzed the data obtained from four software startups. The initial\nfindings show that different software startups make different types of pivots\nrelated to business and technology during their product development life cycle.\nThe pivots are triggered by various factors including negative customer\nfeedback.\n", "title": "How Do Software Startups Pivot? Empirical Results from a Multiple Case Study" }
null
null
null
null
true
null
18952
null
Default
null
null
null
{ "abstract": " Recently, a link between Lorentzian and Finslerian Geometries has been\ncarried out, leading to the notion of wind Riemannian structure (WRS), a\ngeneralization of Finslerian Randers metrics. Here, we further develop this\nnotion and its applications to spacetimes, by introducing some\ncharacterizations and criteria for the completeness of WRS's.\nAs an application, we consider a general class of spacetimes admitting a time\nfunction $t$ generated by the flow of a complete Killing vector field\n(generalized standard stationary spacetimes or, more precisely, SSTK ones) and\nderive simple criteria ensuring that its slices $t=$ constant are Cauchy.\nMoreover, a brief summary on the Finsler/Lorentz link for readers with some\nacquaintance in Lorentzian Geometry, plus some simple examples in Mathematical\nRelativity, are provided.\n", "title": "Some criteria for Wind Riemannian completeness and existence of Cauchy hypersurfaces" }
null
null
null
null
true
null
18953
null
Default
null
null
null
{ "abstract": " Recently, Grynkiewicz et al. [{\\it Israel J. Math.} {\\bf 193} (2013),\n359--398], using tools from additive combinatorics and group theory, proved\nnecessary and sufficient conditions under which the linear congruence\n$a_1x_1+\\cdots +a_kx_k\\equiv b \\pmod{n}$, where $a_1,\\ldots,a_k,b,n$ ($n\\geq\n1$) are arbitrary integers, has a solution $\\langle x_1,\\ldots,x_k \\rangle \\in\n\\Z_{n}^k$ with all $x_i$ distinct modulo $n$. So, it would be an interesting\nproblem to give an explicit formula for the number of such solutions. Quite\nsurprisingly, this problem was first considered, in a special case, by\nSchönemann almost two centuries ago(!) but his result seems to have been\nforgotten. Schönemann [{\\it J. Reine Angew. Math.} {\\bf 1839} (1839),\n231--243] proved an explicit formula for the number of such solutions when\n$b=0$, $n=p$ a prime, and $\\sum_{i=1}^k a_i \\equiv 0 \\pmod{p}$ but $\\sum_{i \\in\nI} a_i \\not\\equiv 0 \\pmod{p}$ for all $I\\varsubsetneq \\lbrace 1, \\ldots,\nk\\rbrace$. In this paper, we generalize Schönemann's theorem using a result\non the number of solutions of linear congruences due to D. N. Lehmer and also a\nresult on graph enumeration recently obtained by Ardila et al. [{\\it Int. Math.\nRes. Not.} {\\bf 2015} (2015), 3830--3877]. This seems to be a rather uncommon\nmethod in the area; besides, our proof technique or its modifications may be\nuseful for dealing with other cases of this problem (or even the general case)\nor other relevant problems.\n", "title": "A generalization of Schönemann's theorem via a graph theoretic method" }
null
null
null
null
true
null
18954
null
Default
null
null
null
{ "abstract": " In order for robots to perform mission-critical tasks, it is essential that\nthey are able to quickly adapt to changes in their environment as well as to\ninjuries and or other bodily changes. Deep reinforcement learning has been\nshown to be successful in training robot control policies for operation in\ncomplex environments. However, existing methods typically employ only a single\npolicy. This can limit the adaptability since a large environmental\nmodification might require a completely different behavior compared to the\nlearning environment. To solve this problem, we propose Map-based Multi-Policy\nReinforcement Learning (MMPRL), which aims to search and store multiple\npolicies that encode different behavioral features while maximizing the\nexpected reward in advance of the environment change. Thanks to these policies,\nwhich are stored into a multi-dimensional discrete map according to its\nbehavioral feature, adaptation can be performed within reasonable time without\nretraining the robot. An appropriate pre-trained policy from the map can be\nrecalled using Bayesian optimization. Our experiments show that MMPRL enables\nrobots to quickly adapt to large changes without requiring any prior knowledge\non the type of injuries that could occur. A highlight of the learned behaviors\ncan be found here: this https URL .\n", "title": "Map-based Multi-Policy Reinforcement Learning: Enhancing Adaptability of Robots by Deep Reinforcement Learning" }
null
null
[ "Computer Science" ]
null
true
null
18955
null
Validated
null
null
null
{ "abstract": " Based on the convex least-squares estimator, we propose two different\nprocedures for testing convexity of a probability mass function supported on N\nwith an unknown finite support. The procedures are shown to be asymptotically\ncalibrated.\n", "title": "Testing convexity of a discrete distribution" }
null
null
null
null
true
null
18956
null
Default
null
null
null
{ "abstract": " There is often a significant trade-off between formulation strength and size\nin mixed integer programming (MIP). When modeling convex disjunctive\nconstraints (e.g. unions of convex sets), adding auxiliary continuous variables\ncan sometimes help resolve this trade-off. However, standard formulations that\nuse such auxiliary continuous variables can have a worse-than-expected\ncomputational effectiveness, which is often attributed precisely to these\nauxiliary continuous variables. For this reason, there has been considerable\ninterest in constructing strong formulations that do not use continuous\nauxiliary variables. We introduce a technique to construct formulations without\nthese detrimental continuous auxiliary variables. To develop this technique we\nintroduce a natural non-polyhedral generalization of the Cayley embedding of a\nfamily of polytopes and show it inherits many geometric properties of the\noriginal embedding. We then show how the associated formulation technique can\nbe used to construct small and strong formulation for a wide range of\ndisjunctive constraints. In particular, we show it can recover and generalize\nall known strong formulations without continuous auxiliary variables.\n", "title": "Small and Strong Formulations for Unions of Convex Sets from the Cayley Embedding" }
null
null
null
null
true
null
18957
null
Default
null
null
null
{ "abstract": " This paper addresses the problem of selecting from a choice of possible\ngrasps, so that impact forces will be minimised if a collision occurs while the\nrobot is moving the grasped object along a post-grasp trajectory. Such\nconsiderations are important for safety in human-robot interaction, where even\na certified \"human-safe\" (e.g. compliant) arm may become hazardous once it\ngrasps and begins moving an object, which may have significant mass, sharp\nedges or other dangers. Additionally, minimising collision forces is critical\nto preserving the longevity of robots which operate in uncertain and hazardous\nenvironments, e.g. robots deployed for nuclear decommissioning, where removing\na damaged robot from a contaminated zone for repairs may be extremely difficult\nand costly. Also, unwanted collisions between a robot and critical\ninfrastructure (e.g. pipework) in such high-consequence environments can be\ndisastrous. In this paper, we investigate how the safety of the post-grasp\nmotion can be considered during the pre-grasp approach phase, so that the\nselected grasp is optimal in terms applying minimum impact forces if a\ncollision occurs during a desired post-grasp manipulation. We build on the\nmethods of augmented robot-object dynamics models and \"effective mass\" and\npropose a method for combining these concepts with modern grasp and trajectory\nplanners, to enable the robot to achieve a grasp which maximises the safety of\nthe post-grasp trajectory, by minimising potential collision forces. We\ndemonstrate the effectiveness of our approach through several experiments with\nboth simulated and real robots.\n", "title": "Safe Robotic Grasping: Minimum Impact-Force Grasp Selection" }
null
null
null
null
true
null
18958
null
Default
null
null
null
{ "abstract": " We derive estimators of the density of the event times of current status\ndata. The estimators are derived for the situations where the distribution of\nthe observation times is known and where this distribution is unknown. The\ndensity estimators are constructed from kernel estimators of the density of\ntransformed current status data, which have a distribution similar to uniform\ndeconvolution data. Expansions of the expectation and variance as well as\nasymptotic normality are derived. A reference density based bandwidth selection\nmethod is proposed. A simulated example is presented.\n", "title": "Nonparametric Kernel Density Estimation for Univariate Curent Status Data" }
null
null
null
null
true
null
18959
null
Default
null
null
null
{ "abstract": " We propose a new approach to the Mirror Symmetry Conjecture in a form\nsuitable to possibly non-Kähler compact complex manifolds whose canonical\nbundle is trivial. We apply our methods by proving that the Iwasawa manifold\n$X$, a well-known non-Kähler compact complex manifold of dimension $3$, is\nits own mirror dual to the extent that its Gauduchon cone, replacing the\nclassical Kähler cone that is empty in this case, corresponds to what we call\nthe local universal family of essential deformations of $X$. These are obtained\nby removing from the Kuranishi family the two \"superfluous\" dimensions of\ncomplex parallelisable deformations that have a similar geometry to that of the\nIwasawa manifold. The remaining four dimensions are shown to have a clear\ngeometric meaning including in terms of the degeneration at $E_2$ of the\nFrölicher spectral sequence. On the local moduli space of \"essential\" complex\nstructures, we obtain a canonical Hodge decomposition of weight $3$ and a\nvariation of Hodge structures, construct coordinates and Yukawa couplings while\nimplicitly proving a local Torelli theorem. On the metric side of the mirror,\nwe construct a variation of Hodge structures parametrised by a subset of the\ncomplexified Gauduchon cone of the Iwasawa manifold using the sGG property of\nall the small deformations of this manifold proved in earlier joint work of the\nauthor with L. Ugarte. Finally, we define a mirror map linking the two\nvariations of Hodge structures and we highlight its properties.\n", "title": "Non-Kähler Mirror Symmetry of the Iwasawa Manifold" }
null
null
null
null
true
null
18960
null
Default
null
null
null
{ "abstract": " Empirical researchers often trim observations with small denominator A when\nthey estimate moments of the form E[B/A]. Large trimming is a common practice\nto mitigate variance, but it incurs large trimming bias. This paper provides a\nnovel method of correcting large trimming bias. If a researcher is willing to\nassume that the joint distribution between A and B is smooth, then a large\ntrimming bias may be estimated well. With the bias correction, we also develop\na valid and robust inference result for E[B/A].\n", "title": "Estimation and Inference for Moments of Ratios with Robustness against Large Trimming Bias" }
null
null
null
null
true
null
18961
null
Default
null
null
null
{ "abstract": " In this paper, a real-time 105-channel data acquisition platform based on\nFPGA for imaging will be implemented for mm-wave imaging systems. PC platform\nis also realized for imaging results monitoring purpose. Mm-wave imaging\nexpands our vision by letting us see things under poor visibility conditions.\nWith this extended vision ability, a wide range of military imaging missions\nwould benefit, such as surveillance, precision targeting, navigation, and\nrescue. Based on the previously designed imager modules, this project would go\non finishing the PCB design (both schematic and layout) of the following signal\nprocessing systems consisting of Programmable Gain Amplifier(PGA) (4 PGA for\neach ADC) and 16-channel Analog to Digital Converter (ADC) (7 ADC in total).\nThen the system verification would be performed on the Artix-7 35T Arty FPGA\nwith the developing of proper controlling code to configure the ADC and realize\nthe communication between the FPGA and the PC (through both UART and Ethernet).\nFor the verification part, a simple test on a breadboard with a simple analog\ninput (generated from a resistor divider) would first be performed. After the\nPCB design is finished, the whole system would be tested again with a precise\nreference and analog input.\n", "title": "FPGA-based real-time 105-channel data acquisition platform for imaging system" }
null
null
null
null
true
null
18962
null
Default
null
null
null
{ "abstract": " Substitution of isovalent non-magnetic defects, such as Zn, in CuO2 plane\nstrongly modifies the magnetic properties of strongly electron correlated hole\ndoped cuprate superconductors. The reason for enhanced uniform magnetic\nsusceptibility, \\c{hi}, in Zn substituted cuprates is debatable. So far, the\nobserved magnetic behavior has been analyzed mainly in terms of two somewhat\ncontrasting scenarios, (a) that due to independent localized moments appearing\nin the vicinity of Zn arising because of the strong electronic/magnetic\ncorrelations present in the host compound and (b) that due to transfer of\nquasiparticle spectral weight and creation of weakly localized low energy\nelectronic states associated with each Zn atom in place of an in-plane Cu. If\nthe second scenario is correct, one should expect a direct correspondence\nbetween Zn induced suppression of superconducting transition temperature, Tc,\nand the extent of the enhanced magnetic susceptibility at low temperature. In\nthis case, the low-T enhancement of \\c{hi} would be due to weakly localized\nquasiparticle states at low energy and these electronic states will be\nprecluded from taking part in Cooper pairing. We explore this second\npossibility by analyzing the \\c{hi}(T) data for La2-xSrxCu1-yZnyO4 with\ndifferent hole contents, p (= x), and Zn concentrations (y) in this paper.\nResults of our analysis support this scenario.\n", "title": "Zn-induced in-gap electronic states in La214 probed by uniform magnetic susceptibility: relevance to the suppression of superconducting Tc" }
null
null
[ "Physics" ]
null
true
null
18963
null
Validated
null
null
null
{ "abstract": " Higher-order logic programming is an interesting extension of traditional\nlogic programming that allows predicates to appear as arguments and variables\nto be used where predicates typically occur. Higher-order characteristics are\nindeed desirable but on the other hand they are also usually more expensive to\nsupport. In this paper we propose a program specialization technique based on\npartial evaluation that can be applied to a modest but useful class of\nhigher-order logic programs and can transform them into first-order programs\nwithout introducing additional data structures. The resulting first-order\nprograms can be executed by conventional logic programming interpreters and\nbenefit from other optimizations that might be available. We provide an\nimplementation and experimental results that suggest the efficiency of the\ntransformation.\n", "title": "Predicate Specialization for Definitional Higher-order Logic Programs" }
null
null
null
null
true
null
18964
null
Default
null
null
null
{ "abstract": " We describe an approach, based on direct numerical solution of the Usadel\nequation, to finding stationary points of the free energy of superconducting\nnanorings. We consider both uniform (equilibrium) solutions and the critical\ndroplets that mediate activated transitions between them. For the uniform\nsolutions, we compute the critical current as a function of the temperature,\nthus obtaining a correction factor to Bardeen's 1962 interpolation formula. For\nthe droplets, we present a metastability chart that shows the activation energy\nas a function of the temperature and current. A comparison of the activation\nenergy for a ring to experimental results for a wire connected to\nsuperconducting leads reveals a discrepancy at large currents. We discuss\npossible reasons for it. We also discuss the nature of the bifurcation point at\nwhich the droplet merges with the uniform solution.\n", "title": "Metastability and bifurcation in superconducting nanorings" }
null
null
null
null
true
null
18965
null
Default
null
null
null
{ "abstract": " While all organisms on Earth descend from a common ancestor, there is no\nconsensus on whether the origin of this ancestral self-replicator was a one-off\nevent or whether it was only the final survivor of multiple origins. Here we\nuse the digital evolution system Avida to study the origin of self-replicating\ncomputer programs. By using a computational system, we avoid many of the\nuncertainties inherent in any biochemical system of self-replicators (while\nrunning the risk of ignoring a fundamental aspect of biochemistry). We\ngenerated the exhaustive set of minimal-genome self-replicators and analyzed\nthe network structure of this fitness landscape. We further examined the\nevolvability of these self-replicators and found that the evolvability of a\nself-replicator is dependent on its genomic architecture. We studied the\ndifferential ability of replicators to take over the population when competed\nagainst each other (akin to a primordial-soup model of biogenesis) and found\nthat the probability of a self-replicator out-competing the others is not\nuniform. Instead, progenitor (most-recent common ancestor) genotypes are\nclustered in a small region of the replicator space. Our results demonstrate\nhow computational systems can be used as test systems for hypotheses concerning\nthe origin of life.\n", "title": "Origin of life in a digital microcosm" }
null
null
null
null
true
null
18966
null
Default
null
null
null
{ "abstract": " We introduce two models of taxation, the latent and natural tax processes,\nwhich have both been used to represent loss-carry-forward taxation on the\ncapital of an insurance company. In the natural tax process, the tax rate is a\nfunction of the current level of capital, whereas in the latent tax process,\nthe tax rate is a function of the capital that would have resulted if no tax\nhad been paid. Whereas up to now these two types of tax processes have been\ntreated separately, we show that, in fact, they are essentially equivalent.\nThis allows a unified treatment, translating results from one model to the\nother. Significantly, we solve the question of existence and uniqueness for the\nnatural tax process, which is defined via an integral equation. Our results\nclarify the existing literature on processes with tax.\n", "title": "The equivalence of two tax processes" }
null
null
null
null
true
null
18967
null
Default
null
null
null
{ "abstract": " We provide a novel approach to model space-time random fields where the\ntemporal argument is decomposed into two parts. The former captures the linear\nargument, which is related, for instance, to the annual evolution of the field.\nThe latter is instead a circular variable describing, for instance, monthly\nobservations. The basic intuition behind this construction is to consider a\nrandom field defined over space (a compact set of the $d$-dimensional Euclidean\nspace) across time, which is considered as the product space $\\mathbb{R} \\times\n\\mathbb{S}^1$, with $\\mathbb{S}^1$ being the unit circle. Under such framework,\nwe derive new parametric families of covariance functions. In particular, we\nfocus on two classes of parametric families. The former being parenthetical to\nthe Gneiting class of covariance functions. The latter is instead obtained by\nproposing a new Lagrangian framework for the space-time domain considered in\nthe manuscript. Our findings are illustrated through a real dataset of surface\nair temperatures. We show that the incorporation of both temporal variables can\nproduce significant improvements in the predictive performances of the model.\nWe also discuss the extension of this approach for fields defined spatially on\na sphere, which allows to model space-time phenomena over large portions of\nplanet Earth.\n", "title": "Space-Time Geostatistical Models with both Linear and Seasonal Structures in the Temporal Components" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
18968
null
Validated
null
null
null
{ "abstract": " We study the stability of coupled impedance passive regular linear systems\nunder power-preserving interconnections. We present new conditions for strong,\nexponential, and non-uniform stability of the closed-loop system. We apply the\nstability results to the construction of passive error feedback controllers for\nrobust output tracking and disturbance rejection for strongly stabilizable\npassive systems. In the case of nonsmooth reference and disturbance signals we\npresent conditions for non-uniform rational and logarithmic rates of\nconvergence of the output. The results are illustrated with examples on\ndesigning controllers for linear wave and heat equations, and on studying the\nstability of a system of coupled partial differential equations.\n", "title": "Stability and Robust Regulation of Passive Linear Systems" }
null
null
[ "Mathematics" ]
null
true
null
18969
null
Validated
null
null
null
{ "abstract": " In this paper, we consider the estimation of a mean vector of a multivariate\nnormal population where the mean vector is suspected to be nearly equal to mean\nvectors of $k-1$ other populations. As an alternative to the preliminary test\nestimator based on the test statistic for testing hypothesis of equal means, we\nderive empirical and hierarchical Bayes estimators which shrink the sample mean\nvector toward a pooled mean estimator given under the hypothesis. The\nminimaxity of those Bayesian estimators are shown, and their performances are\ninvestigated by simulation.\n", "title": "Bayes Minimax Competitors of Preliminary Test Estimators in k Sample Problems" }
null
null
null
null
true
null
18970
null
Default
null
null
null
{ "abstract": " In this paper, we consider a dense vehicular communication network where each\nvehicle broadcasts its safety information to its neighborhood in each\ntransmission period. Such applications require low latency and high\nreliability, and thus, we propose a non-orthogonal multiple access scheme to\nreduce the latency and to improve the packet reception probability. In the\nproposed scheme, the BS performs the semi-persistent scheduling to optimize the\ntime scheduling and allocate frequency resources in a non-orthogonal manner\nwhile the vehicles autonomously perform distributed power control. We formulate\nthe centralized scheduling and resource allocation problem as equivalent to a\nmulti-dimensional stable roommate matching problem, in which the users and\ntime/frequency resources are considered as disjoint sets of players to be\nmatched with each other. We then develop a novel rotation matching algorithm,\nwhich converges to a q-exchange stable matching after a limited number of\niterations. Simulation results show that the proposed scheme outperforms the\ntraditional orthogonal multiple access scheme in terms of the latency and\nreliability.\n", "title": "Non-orthogonal Multiple Access for High-reliable and Low-latency V2X Communications" }
null
null
null
null
true
null
18971
null
Default
null
null
null
{ "abstract": " We relate the old and new cohomology monoids of an arbitrary monoid $M$ with\ncoefficients in semimodules over $M$, introduced in the author's previous\npapers, to monoid and group extensions. More precisely, the old and new second\ncohomology monoids describe Schreier extensions of semimodules by monoids, and\nthe new third cohomology monoid is related to a certain group extension\nproblem.\n", "title": "Cohomology monoids of monoids with coefficients in semimodules II" }
null
null
null
null
true
null
18972
null
Default
null
null
null
{ "abstract": " State-of-the-art methods in convex and non-convex optimization employ\nhigher-order derivative information, either implicitly or explicitly. We\nexplore the limitations of higher-order optimization and prove that even for\nconvex optimization, a polynomial dependence on the approximation guarantee and\nhigher-order smoothness parameters is necessary. As a special case, we show\nNesterov's accelerated cubic regularization method to be nearly tight.\n", "title": "Lower Bounds for Higher-Order Convex Optimization" }
null
null
null
null
true
null
18973
null
Default
null
null
null
{ "abstract": " We report the effects of Ce substitution on structural, electronic, and\nmagnetic properties of layered bismuth-chalcogenide La1-xCexOBiSSe (x = 0-0.9),\nwhich are newly obtained in this study. Metallic conductivity was observed for\nx > 0.1 because of electron carriers induced by mixed valence of Ce ions, as\nrevealed by bond valence sum calculation and magnetization measurements. Zero\nresistivity and clear diamagnetic susceptibility were obtained for x = 0.2-0.6,\nindicating the emergence of bulk superconductivity in these compounds.\nDome-shaped superconductivity phase diagram with the highest transition\ntemperature (Tc) of 3.1 K, which is slightly lower than that of F-doped\nLaOBiSSe (Tc = 3.7 K), was established. The present study clearly shows that\nthe mixed valence of Ce ions can be utilized as an alternative approach for\nelectron-doping in layered bismuth-chalcogenides to induce superconductivity.\n", "title": "Superconductivity in La1-xCexOBiSSe: carrier doping by mixed valence of Ce ions" }
null
null
null
null
true
null
18974
null
Default
null
null
null
{ "abstract": " We propose Gaussian processes for signals over graphs (GPG) using the apriori\nknowledge that the target vectors lie over a graph. We incorporate this\ninformation using a graph- Laplacian based regularization which enforces the\ntarget vectors to have a specific profile in terms of graph Fourier transform\ncoeffcients, for example lowpass or bandpass graph signals. We discuss how the\nregularization affects the mean and the variance in the prediction output. In\nparticular, we prove that the predictive variance of the GPG is strictly\nsmaller than the conventional Gaussian process (GP) for any non-trivial graph.\nWe validate our concepts by application to various real-world graph signals.\nOur experiments show that the performance of the GPG is superior to GP for\nsmall training data sizes and under noisy training.\n", "title": "Gaussian Processes Over Graphs" }
null
null
null
null
true
null
18975
null
Default
null
null
null
{ "abstract": " To derive recommendations on how to analyze longitudinal data, we examined\nType I error rates of Multilevel Linear Models (MLM) and repeated measures\nAnalysis of Variance (rANOVA) using SAS and SPSS.We performed a simulation with\nthe following specifications: To explore the effects of high numbers of\nmeasurement occasions and small sample sizes on Type I error, measurement\noccasions of m = 9 and 12 were investigated as well as sample sizes of n = 15,\n20, 25 and 30. Effects of non-sphericity in the population on Type I error were\nalso inspected: 5,000 random samples were drawn from two populations containing\nneither a within-subject nor a between-group effect. They were analyzed\nincluding the most common options to correct rANOVA and MLM-results: The\nHuynh-Feldt-correction for rANOVA (rANOVA-HF) and the Kenward-Roger-correction\nfor MLM (MLM-KR), which could help to correct progressive bias of MLM with an\nunstructured covariance matrix (MLM-UN). Moreover, uncorrected rANOVA and MLM\nassuming a compound symmetry covariance structure (MLM-CS) were also taken into\naccount. The results showed a progressive bias for MLM-UN for small samples\nwhich was stronger in SPSS than in SAS. Moreover, an appropriate bias\ncorrection for Type I error via rANOVA-HF and an insufficient correction by\nMLM-UN-KR for n < 30 were found. These findings suggest MLM-CS or rANOVA if\nsphericity holds and a correction of a violation via rANOVA-HF. If an analysis\nrequires MLM, SPSS yields more accurate Type I error rates for MLM-CS and SAS\nyields more accurate Type I error rates for MLM-UN.\n", "title": "Differences of Type I error rates for ANOVA and Multilevel-Linear-Models using SAS and SPSS for repeated measures designs" }
null
null
null
null
true
null
18976
null
Default
null
null
null
{ "abstract": " We consider alternate formulations of recently proposed hierarchical Nearest\nNeighbor Gaussian Process (NNGP) models (Datta et al., 2016a) for improved\nconvergence, faster computing time, and more robust and reproducible Bayesian\ninference. Algorithms are defined that improve CPU memory management and\nexploit existing high-performance numerical linear algebra libraries.\nComputational and inferential benefits are assessed for alternate NNGP\nspecifications using simulated datasets and remotely sensed light detection and\nranging (LiDAR) data collected over the US Forest Service Tanana Inventory Unit\n(TIU) in a remote portion of Interior Alaska. The resulting data product is the\nfirst statistically robust map of forest canopy for the TIU.\n", "title": "Efficient algorithms for Bayesian Nearest Neighbor Gaussian Processes" }
null
null
null
null
true
null
18977
null
Default
null
null
null
{ "abstract": " During the flyby in 2010, the OSIRIS camera on-board Rosetta acquired\nhundreds of high-resolution images of asteroid Lutetia's surface through a\nrange of narrow-band filters. While Lutetia appears very bland in the visible\nwavelength range, Magrin et al. (2012) tentatively identified UV color\nvariations in the Baetica cluster, a group of relatively young craters close to\nthe north pole. As Lutetia remains a poorly understood asteroid, such color\nvariations may provide clues to the nature of its surface. We take the color\nanalysis one step further. First we orthorectify the images using a shape model\nand improved camera pointing, then apply a variety of techniques (photometric\ncorrection, principal component analysis) to the resulting color cubes. We\ncharacterize variegation in the Baetica crater cluster at high spatial\nresolution, identifying crater rays and small, fresh impact craters. We argue\nthat at least some of the color variation is due to space weathering, which\nmakes Lutetia's regolith redder and brighter.\n", "title": "Variegation and space weathering on asteroid 21 Lutetia" }
null
null
null
null
true
null
18978
null
Default
null
null
null
{ "abstract": " We show that, for a given compact or discrete quantum group $G$, the class of\nactions of $G$ on C*-algebras is first-order axiomatizable in the logic for\nmetric structures. As an application, we extend the notion of Rokhlin property\nfor $G$-C*-algebra, introduced by Barlak, Szabó, and Voigt in the case when\n$G$ is second countable and coexact, to an arbitrary compact quantum group $G$.\nAll the the preservations and rigidity results for Rokhlin actions of second\ncountable coexact compact quantum groups obtained by Barlak, Szabó, and\nVoigt are shown to hold in this general context. As a further application, we\nextend the notion of equivariant order zero dimension for equivariant\n*-homomorphisms, introduced in the classical setting by the first and third\nauthors, to actions of compact quantum groups. This allows us to define the\nRokhlin dimension of an action of a compact quantum group on a C*-algebra,\nrecovering the Rokhlin property as Rokhlin dimension zero. We conclude by\nestablishing a preservation result for finite nuclear dimension and finite\ndecomposition rank when passing to fixed point algebras and crossed products by\ncompact quantum group actions with finite Rokhlin dimension.\n", "title": "Rokhlin dimension for compact quantum group actions" }
null
null
null
null
true
null
18979
null
Default
null
null
null
{ "abstract": " In identification of dynamical systems, the prediction error method using a\nquadratic cost function provides asymptotically efficient estimates under\nGaussian noise and additional mild assumptions, but in general it requires\nsolving a non-convex optimization problem. An alternative class of methods uses\na non-parametric model as intermediate step to obtain the model of interest.\nWeighted null-space fitting (WNSF) belongs to this class. It is a weighted\nleast-squares method consisting of three steps. In the first step, a high-order\nARX model is estimated. In a second least-squares step, this high-order\nestimate is reduced to a parametric estimate. In the third step, weighted least\nsquares is used to reduce the variance of the estimates. The method is flexible\nin parametrization and suitable for both open- and closed-loop data. In this\npaper, we show that WNSF provides estimates with the same asymptotic properties\nas PEM with a quadratic cost function when the model orders are chosen\naccording to the true system. Also, simulation studies indicate that WNSF may\nbe competitive with state-of-the-art methods.\n", "title": "Parametric Identification Using Weighted Null-Space Fitting" }
null
null
null
null
true
null
18980
null
Default
null
null
null
{ "abstract": " Learning graphical models from data is an important problem with wide\napplications, ranging from genomics to the social sciences. Nowadays datasets\noften have upwards of thousands---sometimes tens or hundreds of thousands---of\nvariables and far fewer samples. To meet this challenge, we have developed a\nnew R package called sparsebn for learning the structure of large, sparse\ngraphical models with a focus on Bayesian networks. While there are many\nexisting software packages for this task, this package focuses on the unique\nsetting of learning large networks from high-dimensional data, possibly with\ninterventions. As such, the methods provided place a premium on scalability and\nconsistency in a high-dimensional setting. Furthermore, in the presence of\ninterventions, the methods implemented here achieve the goal of learning a\ncausal network from data. Additionally, the sparsebn package is fully\ncompatible with existing software packages for network analysis.\n", "title": "Learning Large-Scale Bayesian Networks with the sparsebn Package" }
null
null
null
null
true
null
18981
null
Default
null
null
null
{ "abstract": " We consider restless multi-armed bandit (RMAB) with a finite horizon and\nmultiple pulls per period. Leveraging the Lagrangian relaxation, we approximate\nthe problem with a collection of single arm problems. We then propose an\nindex-based policy that uses optimal solutions of the single arm problems to\nindex individual arms, and offer a proof that it is asymptotically optimal as\nthe number of arms tends to infinity. We also use simulation to show that this\nindex-based policy performs better than the state-of-art heuristics in various\nproblem settings.\n", "title": "An Asymptotically Optimal Index Policy for Finite-Horizon Restless Bandits" }
null
null
null
null
true
null
18982
null
Default
null
null
null
{ "abstract": " We use Gauge/Gravity duality to write down an effective low energy\nholographic theory of charge density waves. We consider a simple gravity model\nwhich breaks translations spontaneously in the dual field theory in a\nhomogeneous manner, capturing the low energy dynamics of phonons coupled to\nconserved currents. We first focus on the leading two-derivative action, which\nleads to excited states with non-zero strain. We show that including subleading\nquartic derivative terms leads to dynamical instabilities of AdS$_2$\ntranslation invariant states and to stable phases breaking translations\nspontaneously. We compute analytically the real part of the electric\nconductivity. The model allows to construct Lifshitz-like hyperscaling\nviolating quantum critical ground states breaking translations spontaneously.\nAt these critical points, the real part of the dc conductivity can be metallic\nor insulating.\n", "title": "Effective holographic theory of charge density waves" }
null
null
[ "Physics" ]
null
true
null
18983
null
Validated
null
null
null
{ "abstract": " We develop and analyze new protocols to verify the correctness of various\ncomputations on matrices over F[x], where F is a field. The properties we\nverify concern an F[x]-module and therefore cannot simply rely on\npreviously-developed linear algebra certificates which work only for vector\nspaces. Our protocols are interactive certificates, often randomized, and\nfeaturing a constant number of rounds of communication between the prover and\nverifier. We seek to minimize the communication cost so that the amount of data\nsent during the protocol is significantly smaller than the size of the result\nbeing verified, which can be useful when combining protocols or in some\nmulti-party settings. The main tools we use are reductions to existing linear\nalgebra certificates and a new protocol to verify that a given vector is in the\nF[x]-linear span of a given matrix.\n", "title": "Interactive Certificates for Polynomial Matrices with Sub-Linear Communication" }
null
null
[ "Computer Science" ]
null
true
null
18984
null
Validated
null
null
null
{ "abstract": " We focus on autonomously generating robot motion for day to day physical\ntasks that is expressive of a certain style or emotion. Because we seek\ngeneralization across task instances and task types, we propose to capture\nstyle via cost functions that the robot can use to augment its nominal task\ncost and task constraints in a trajectory optimization process. We compare two\napproaches to representing such cost functions: a weighted linear combination\nof hand-designed features, and a neural network parameterization operating on\nraw trajectory input. For each cost type, we learn weights for each style from\nuser feedback. We contrast these approaches to a nominal motion across\ndifferent tasks and for different styles in a user study, and find that they\nboth perform on par with each other, and significantly outperform the baseline.\nEach approach has its advantages: featurized costs require learning fewer\nparameters and can perform better on some styles, but neural network\nrepresentations do not require expert knowledge to design features and could\neven learn more complex, nuanced costs than an expert can easily design.\n", "title": "Cost Functions for Robot Motion Style" }
null
null
null
null
true
null
18985
null
Default
null
null
null
{ "abstract": " The planar equilateral restricted four-body problem where two of the\nprimaries have equal masses is used in order to determine the Newton-Raphson\nbasins of convergence associated with the equilibrium points. The parametric\nvariation of the position of the libration points is monitored when the value\nof the mass parameter $m_3$ varies in predefined intervals. The regions on the\nconfiguration $(x,y)$ plane occupied by the basins of attraction are revealed\nusing the multivariate version of the Newton-Raphson iterative scheme. The\ncorrelations between the attracting domains of the equilibrium points and the\ncorresponding number of iterations needed for obtaining the desired accuracy\nare also illustrated. We perform a thorough and systematic numerical\ninvestigation by demonstrating how the dynamical parameter $m_3$ influences the\nshape, the geometry and the degree of fractality of the converging regions. Our\nnumerical outcomes strongly indicate that the mass parameter is indeed one of\nthe most influential factors in this dynamical system.\n", "title": "Revealing the basins of convergence in the planar equilateral restricted four-body problem" }
null
null
null
null
true
null
18986
null
Default
null
null
null
{ "abstract": " The secular approximation of the hierarchical three body systems has been\nproven to be very useful in addressing many astrophysical systems, from\nplanets, stars to black holes. In such a system two objects are on a tight\norbit, and the tertiary is on a much wider orbit. Here we study the dynamics of\na system by taking the tertiary mass to zero and solve the hierarchical three\nbody system up to the octupole level of approximation. We find a rich dynamics\nthat the outer orbit undergoes due to gravitational perturbations from the\ninner binary. The nominal result of the precession of the nodes is mostly\nlimited for the lowest order of approximation, however, when the octupole-level\nof approximation is introduced the system becomes chaotic, as expected, and the\ntertiary oscillates below and above 90deg, similarly to the non-test particle\nflip behavior (e.g., Naoz 2016). We provide the Hamiltonian of the system and\ninvestigate the dynamics of the system from the quadrupole to the octupole\nlevel of approximations. We also analyze the chaotic and quasi-periodic orbital\nevolution by studying the surfaces of sections. Furthermore, including general\nrelativity, we show case the long term evolution of individual debris disk\nparticles under the influence of a far away interior eccentric planet. We show\nthat this dynamics can naturally result in retrograde objects and a puffy disk\nafter a long timescale evolution (few Gyr) for initially aligned configuration.\n", "title": "The Eccentric Kozai-Lidov mechanism for Outer Test Particle" }
null
null
null
null
true
null
18987
null
Default
null
null
null
{ "abstract": " For C*-algebras $A$ and $B$, we generalize the notion of a quasihomomorphism\nfrom $A$ to $B$, due to Cuntz, by considering quasihomomorphisms from some\nC*-algebra $C$ to $B$ such that $C$ surjects onto $A$, and the two maps forming\na quasihomomorphism agree on the kernel of this surjection. Under an additional\nassumption, the group of homotopy classes of such generalized\nquasihomomorphisms coincides with $KK(A,B)$. This makes the definition of\nKasparov's bifunctor slightly more symmetric and gives more flexibility for\nconstructing elements of $KK$-groups. These generalized quasihomomorphisms can\nbe viewed as pairs of maps directly from $A$ (instead of various $C$'s), but\nthese maps need not be $*$-homomorphisms.\n", "title": "A more symmetric picture for Kasparov's KK-bifunctor" }
null
null
null
null
true
null
18988
null
Default
null
null
null
{ "abstract": " We study pattern formation in a 2-D reaction-diffusion (RD) sub-cellular\nmodel characterizing the effect of a spatial gradient of a plant hormone\ndistribution on a family of G-proteins associated with root-hair (RH)\ninitiation in the plant cell Arabidopsis thaliana. The activation of these\nG-proteins, known as the Rho of Plants (ROPs), by the plant hormone auxin, is\nknown to promote certain protuberances on root hair cells, which are crucial\nfor both anchorage and the uptake of nutrients from the soil. Our mathematical\nmodel for the activation of ROPs by the auxin gradient is an extension of the\nmodel of Payne and Grierson [PLoS ONE, 12(4), (2009)], and consists of a\ntwo-component Schnakenberg-type RD system with spatially heterogeneous\ncoefficients on a 2-D domain. The nonlinear kinetics in this RD system model\nthe nonlinear interactions between the active and inactive forms of ROPs. By\nusing a singular perturbation analysis to study 2-D localized spatial patterns\nof active ROPs, it is shown that the spatial variations in the nonlinear\nreaction kinetics, due to the auxin gradient, lead to a slow spatial alignment\nof the localized regions of active ROPs along the longitudinal midline of the\nplant cell. Numerical bifurcation analysis, together with time-dependent\nnumerical simulations of the RD system are used to illustrate both 2-D\nlocalized patterns in the model, and the spatial alignment of localized\nstructures.\n", "title": "Spot dynamics in a reaction-diffusion model of plant root hair initiation" }
null
null
null
null
true
null
18989
null
Default
null
null
null
{ "abstract": " In modern biomedical research, it is ubiquitous to have multiple data sets\nmeasured on the same set of samples from different views (i.e., multi-view\ndata). For example, in genetic studies, multiple genomic data sets at different\nmolecular levels or from different cell types are measured for a common set of\nindividuals to investigate genetic regulation. Integration and reduction of\nmulti-view data have the potential to leverage information in different data\nsets, and to reduce the magnitude and complexity of data for further\nstatistical analysis and interpretation. In this paper, we develop a novel\nstatistical model, called supervised integrated factor analysis (SIFA), for\nintegrative dimension reduction of multi-view data while incorporating\nauxiliary covariates. The model decomposes data into joint and individual\nfactors, capturing the joint variation across multiple data sets and the\nindividual variation specific to each set respectively. Moreover, both joint\nand individual factors are partially informed by auxiliary covariates via\nnonparametric models. We devise a computationally efficient\nExpectation-Maximization (EM) algorithm to fit the model under some\nidentifiability conditions. We apply the method to the Genotype-Tissue\nExpression (GTEx) data, and provide new insights into the variation\ndecomposition of gene expression in multiple tissues. Extensive simulation\nstudies and an additional application to a pediatric growth study demonstrate\nthe advantage of the proposed method over competing methods.\n", "title": "Incorporating Covariates into Integrated Factor Analysis of Multi-View Data" }
null
null
null
null
true
null
18990
null
Default
null
null
null
{ "abstract": " Improving distant speech recognition is a crucial step towards flexible\nhuman-machine interfaces. Current technology, however, still exhibits a lack of\nrobustness, especially when adverse acoustic conditions are met. Despite the\nsignificant progress made in the last years on both speech enhancement and\nspeech recognition, one potential limitation of state-of-the-art technology\nlies in composing modules that are not well matched because they are not\ntrained jointly. To address this concern, a promising approach consists in\nconcatenating a speech enhancement and a speech recognition deep neural network\nand to jointly update their parameters as if they were within a single bigger\nnetwork. Unfortunately, joint training can be difficult because the output\ndistribution of the speech enhancement system may change substantially during\nthe optimization procedure. The speech recognition module would have to deal\nwith an input distribution that is non-stationary and unnormalized. To mitigate\nthis issue, we propose a joint training approach based on a fully\nbatch-normalized architecture. Experiments, conducted using different datasets,\ntasks and acoustic conditions, revealed that the proposed framework\nsignificantly overtakes other competitive solutions, especially in challenging\nenvironments.\n", "title": "Batch-normalized joint training for DNN-based distant speech recognition" }
null
null
null
null
true
null
18991
null
Default
null
null
null
{ "abstract": " This paper describes a preliminary study for producing and distributing a\nlarge-scale database of embeddings from the Portuguese Twitter stream. We start\nby experimenting with a relatively small sample and focusing on three\nchallenges: volume of training data, vocabulary size and intrinsic evaluation\nmetrics. Using a single GPU, we were able to scale up vocabulary size from 2048\nwords embedded and 500K training examples to 32768 words over 10M training\nexamples while keeping a stable validation loss and approximately linear trend\non training time per epoch. We also observed that using less than 50\\% of the\navailable training examples for each vocabulary size might result in\noverfitting. Results on intrinsic evaluation show promising performance for a\nvocabulary size of 32768 words. Nevertheless, intrinsic evaluation metrics\nsuffer from over-sensitivity to their corresponding cosine similarity\nthresholds, indicating that a wider range of metrics need to be developed to\ntrack progress.\n", "title": "Learning Word Embeddings from the Portuguese Twitter Stream: A Study of some Practical Aspects" }
null
null
[ "Computer Science" ]
null
true
null
18992
null
Validated
null
null
null
{ "abstract": " The combination of high-contrast imaging and high-dispersion spectroscopy,\nwhich has successfully been used to detect the atmosphere of a giant planet, is\none of the most promising potential probes of the atmosphere of Earth-size\nworlds. The forthcoming generation of extremely large telescopes (ELTs) may\nobtain sufficient contrast with this technique to detect O$_2$ in the\natmosphere of those worlds that orbit low-mass M dwarfs. This is strong\nmotivation to carry out a census of planets around cool stars for which\nhabitable zones can be resolved by ELTs, i.e. for M dwarfs within $\\sim$5\nparsecs. Our HARPS survey has been a major contributor to that sample of nearby\nplanets. Here we report on our radial velocity observations of Ross 128\n(Proxima Virginis, GJ447, HIP 57548), an M4 dwarf just 3.4 parsec away from our\nSun. This source hosts an exo-Earth with a projected mass $m \\sin i = 1.35\nM_\\oplus$ and an orbital period of 9.9 days. Ross 128 b receives $\\sim$1.38\ntimes as much flux as Earth from the Sun and its equilibrium ranges in\ntemperature between 269 K for an Earth-like albedo and 213 K for a Venus-like\nalbedo. Recent studies place it close to the inner edge of the conventional\nhabitable zone. An 80-day long light curve from K2 campaign C01 demonstrates\nthat Ross~128~b does not transit. Together with the All Sky Automated Survey\n(ASAS) photometry and spectroscopic activity indices, the K2 photometry shows\nthat Ross 128 rotates slowly and has weak magnetic activity. In a habitability\ncontext, this makes survival of its atmosphere against erosion more likely.\nRoss 128 b is the second closest known exo-Earth, after Proxima Centauri b (1.3\nparsec), and the closest temperate planet known around a quiet star. The 15 mas\nplanet-star angular separation at maximum elongation will be resolved by ELTs\n($>$ 3$\\lambda/D$) in the optical bands of O$_2$.\n", "title": "A temperate exo-Earth around a quiet M dwarf at 3.4 parsecs" }
null
null
null
null
true
null
18993
null
Default
null
null
null
{ "abstract": " Low-rank structures play important role in recent advances of many problems\nin image science and data science. As a natural extension of low-rank\nstructures for data with nonlinear structures, the concept of the\nlow-dimensional manifold structure has been considered in many data processing\nproblems. Inspired by this concept, we consider a manifold based low-rank\nregularization as a linear approximation of manifold dimension. This\nregularization is less restricted than the global low-rank regularization, and\nthus enjoy more flexibility to handle data with nonlinear structures. As\napplications, we demonstrate the proposed regularization to classical inverse\nproblems in image sciences and data sciences including image inpainting, image\nsuper-resolution, X-ray computer tomography (CT) image reconstruction and\nsemi-supervised learning. We conduct intensive numerical experiments in several\nimage restoration problems and a semi-supervised learning problem of\nclassifying handwritten digits using the MINST data. Our numerical tests\ndemonstrate the effectiveness of the proposed methods and illustrate that the\nnew regularization methods produce outstanding results by comparing with many\nexisting methods.\n", "title": "Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning" }
null
null
null
null
true
null
18994
null
Default
null
null
null
{ "abstract": " Many scenarios require a robot to be able to explore its 3D environment\nonline without human supervision. This is especially relevant for inspection\ntasks and search and rescue missions. To solve this high-dimensional path\nplanning problem, sampling-based exploration algorithms have proven successful.\nHowever, these do not necessarily scale well to larger environments or spaces\nwith narrow openings. This paper presents a 3D exploration planner based on the\nprinciples of Next-Best Views (NBVs). In this approach, a Micro-Aerial Vehicle\n(MAV) equipped with a limited field-of-view depth sensor randomly samples its\nconfiguration space to find promising future viewpoints. In order to obtain\nhigh sampling efficiency, our planner maintains and uses a history of visited\nplaces, and locally optimizes the robot's orientation with respect to\nunobserved space. We evaluate our method in several simulated scenarios, and\ncompare it against a state-of-the-art exploration algorithm. The experiments\nshow substantial improvements in exploration time ($2\\times$ faster),\ncomputation time, and path length, and advantages in handling difficult\nsituations such as escaping dead-ends (up to $20\\times$ faster). Finally, we\nvalidate the on-line capability of our algorithm on a computational constrained\nreal world MAV.\n", "title": "History-aware Autonomous Exploration in Confined Environments using MAVs" }
null
null
null
null
true
null
18995
null
Default
null
null
null
{ "abstract": " In 2012, JPMorgan accumulated a USD~6.2 billion loss on a credit derivatives\nportfolio, the so-called `London Whale', partly as a consequence of\nde-correlations of non-perfectly correlated positions that were supposed to\nhedge each other. Motivated by this case, we devise a factor model for\ncorrelations that allows for scenario-based stress testing of correlations. We\nderive a number of analytical results related to a portfolio of homogeneous\nassets. Using the concept of Mahalanobis distance, we show how to identify\nadverse scenarios of correlation risk. In addition, we demonstrate how\ncorrelation and volatility stress tests can be combined. As an example, we\napply the factor-model approach to the \"London Whale\" portfolio and determine\nthe value-at-risk impact from correlation changes. Since our findings are\nparticularly relevant for large portfolios, where even small correlation\nchanges can have a large impact, a further application would be to stress test\nportfolios of central counterparties, which are of systemically relevant size.\n", "title": "A factor-model approach for correlation scenarios and correlation stress-testing" }
null
null
null
null
true
null
18996
null
Default
null
null
null
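As an illustration of the Mahalanobis-distance idea mentioned in this abstract (and only that idea, not the paper's factor model), one can score how unusual a stressed correlation scenario is relative to historically observed correlation estimates. All data and names below are synthetic placeholders.

```python
# Illustrative sketch: rank the severity of a correlation stress scenario by its
# Mahalanobis distance from the empirical distribution of historical correlations.
import numpy as np

def corr_vector(R):
    """Stack the upper-triangular (off-diagonal) entries of a correlation matrix."""
    iu = np.triu_indices_from(R, k=1)
    return R[iu]

def mahalanobis(x, samples):
    """Mahalanobis distance of x from the empirical distribution of samples (rows)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    # Rolling-window correlation estimates for 3 hypothetical assets.
    hist = np.array([corr_vector(np.corrcoef(rng.standard_normal((3, 250))))
                     for _ in range(200)])
    stressed = np.array([0.9, -0.8, 0.85])   # hypothetical de-correlation scenario
    print("scenario Mahalanobis distance:", mahalanobis(stressed, hist))
```

A large distance flags the scenario as far outside the historically observed correlation regimes, which is one plausible way to operationalize an "adverse scenario of correlation risk".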
{ "abstract": " We develop a topology data analysis-based method to detect early signs for\ncritical transitions in financial data. From the time-series of multiple stock\nprices, we build time-dependent correlation networks, which exhibit topological\nstructures. We compute the persistent homology associated to these structures\nin order to track the changes in topology when approaching a critical\ntransition. As a case study, we investigate a portfolio of stocks during a\nperiod prior to the US financial crisis of 2007-2008, and show the presence of\nearly signs of the critical transition.\n", "title": "Topology data analysis of critical transitions in financial networks" }
null
null
null
null
true
null
18997
null
Default
null
null
null
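A rough sketch of the pipeline this abstract outlines, from correlation networks to persistence diagrams to an early-warning indicator, assuming the third-party `ripser` package is available. The specific indicator (total H1 persistence per sliding window) is an illustrative choice and not necessarily the statistic used in the paper.

```python
# Sketch only: sliding-window correlation distances -> persistent homology ->
# a per-window scalar indicator that can be tracked ahead of a transition.
import numpy as np
from ripser import ripser  # pip install ripser (assumed dependency)

def corr_to_distance(returns):
    """Map a window of asset returns to the standard correlation distance matrix."""
    rho = np.corrcoef(returns)
    return np.sqrt(np.clip(2.0 * (1.0 - rho), 0.0, None))

def total_h1_persistence(returns):
    """Sum of (death - birth) over the H1 persistence diagram of one window."""
    D = corr_to_distance(returns)
    dgm1 = ripser(D, maxdim=1, distance_matrix=True)["dgms"][1]
    return float(np.sum(dgm1[:, 1] - dgm1[:, 0])) if len(dgm1) else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    prices = np.cumsum(rng.standard_normal((10, 500)), axis=1)  # 10 synthetic assets
    returns = np.diff(prices, axis=1)
    window = 50
    indicator = [total_h1_persistence(returns[:, t:t + window])
                 for t in range(0, returns.shape[1] - window, window)]
    print("per-window H1 persistence:", np.round(indicator, 3))
```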
{ "abstract": " One of the defining features of many-body localization is the presence of\nextensively many quasi-local conserved quantities. These constants of motion\nconstitute a corner-stone to an intuitive understanding of much of the\nphenomenology of many-body localized systems arising from effective\nHamiltonians. They may be seen as local magnetization operators smeared out by\na quasi-local unitary. However, accurately identifying such constants of motion\nremains a challenging problem. Current numerical constructions often capture\nthe conserved operators only approximately restricting a conclusive\nunderstanding of many-body localization. In this work, we use methods from the\ntheory of quantum many-body systems out of equilibrium to establish a new\napproach for finding a complete set of exact constants of motion which are in\naddition guaranteed to represent Pauli-$z$ operators. By this we are able to\nconstruct and investigate the proposed effective Hamiltonian using exact\ndiagonalization. Hence, our work provides an important tool expected to further\nboost inquiries into the breakdown of transport due to quenched disorder.\n", "title": "Construction of exact constants of motion and effective models for many-body localized systems" }
null
null
null
null
true
null
18998
null
Default
null
null
null
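For background only: constructions like the one discussed above are typically benchmarked with exact diagonalization of a small random-field Heisenberg chain. The snippet below builds and diagonalizes such a chain; it does not implement the paper's method for extracting constants of motion, and the model parameters are illustrative.

```python
# Background sketch: exact diagonalization of a small random-field Heisenberg
# chain, the standard setting in which quasi-local constants of motion are studied.
import numpy as np

sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain via Kronecker products."""
    mats = [np.eye(2)] * L
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_random_field(L, W, rng):
    """H = sum_i S_i . S_{i+1} + sum_i h_i S^z_i with h_i uniform in [-W, W]."""
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):
        for op in (sx, sy, sz):
            H += site_op(op, i, L) @ site_op(op, i + 1, L)
    for i, h in enumerate(rng.uniform(-W, W, size=L)):
        H += h * site_op(sz, i, L)
    return H

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    H = heisenberg_random_field(L=8, W=8.0, rng=rng)   # strong disorder: MBL regime
    energies = np.linalg.eigvalsh(H)
    print("ground-state energy:", round(float(energies[0]), 4))
```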
{ "abstract": " We study the action of the dihedral group on the (equivariant) cohomology of\nthe toric manifolds associated with cycle graphs.\n", "title": "Toric manifolds over cyclohedra" }
null
null
null
null
true
null
18999
null
Default
null
null
null
{ "abstract": " As a living information and communications system, the genome encodes\npatterns in single nucleotide polymorphisms (SNPs) reflecting human adaption\nthat optimizes population survival in differing environments. This paper\nmathematically models environmentally induced adaptive forces that quantify\nchanges in the distribution of SNP frequencies between populations. We make\ndirect connections between biophysical methods (e.g. minimizing genomic free\nenergy) and concepts in population genetics. Our unbiased computer program\nscanned a large set of SNPs in the major histocompatibility complex region, and\nflagged an altitude dependency on a SNP associated with response to oxygen\ndeprivation. The statistical power of our double-blind approach is demonstrated\nin the flagging of mathematical functional correlations of SNP\ninformation-based potentials in multiple populations with specific\nenvironmental parameters. Furthermore, our approach provides insights for new\ndiscoveries on the biology of common variants. This paper demonstrates the\npower of biophysical modeling of population diversity for better understanding\ngenome-environment interactions in biological phenomenon.\n", "title": "Use of Genome Information-Based Potentials to Characterize Human Adaptation" }
null
null
null
null
true
null
19000
null
Default
null
null