Columns (name: type):
text: null
inputs: dict (keys: abstract, title)
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1 to 5)
metadata: null
status: string (2 classes: Default, Validated)
event_timestamp: null
metrics: null

Each record below is an inputs object followed by a line with its id, annotation, multi_label, and status; the remaining columns (text, prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics) are null in every record.
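The per-record structure above can also be consumed programmatically. Below is a minimal sketch, assuming the rows have been exported as a JSON Lines file named records.jsonl (a hypothetical filename), with one object per line carrying the columns listed above; it illustrates the record layout and is not the dataset's official loading code.

```python
import json

def load_records(path):
    """Read one JSON object per line into a list of dicts."""
    records = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

def validated_examples(records):
    """Pair each title with its labels, keeping only human-validated rows."""
    return [
        (rec["inputs"]["title"], rec["annotation"])
        for rec in records
        if rec.get("status") == "Validated" and rec.get("annotation")
    ]

if __name__ == "__main__":
    rows = load_records("records.jsonl")  # hypothetical export of the table above
    for title, labels in validated_examples(rows):
        print(labels, "-", title)
```

Run against the rows shown here, this would list the nine Validated records with their topic labels (for example, ["Mathematics"] for id 13902).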
{ "abstract": " Modern society generates an incredible amount of data about individuals, and\nreleasing summary statistics about this data in a manner that provably protects\nindividual privacy would offer a valuable resource for researchers in many\nfields. We present the first algorithm for analysis of variance (ANOVA) that\npreserves differential privacy, allowing this important statistical test to be\nconducted (and the results released) on databases of sensitive information. In\naddition to our private algorithm for the F test statistic, we show a rigorous\nway to compute p-values that accounts for the added noise needed to preserve\nprivacy. Finally, we present experimental results quantifying the statistical\npower of this differentially private version of the test, finding that a sample\nof several thousand observations is frequently enough to detect variation\nbetween groups. The differentially private ANOVA algorithm is a promising\napproach for releasing a common test statistic that is valuable in fields in\nthe sciences and social sciences.\n", "title": "Differentially Private ANOVA Testing" }
id: 13901 | annotation: null | multi_label: true | status: Default

{ "abstract": " We give a new proof of the strong Arnold conjecture for $1$-periodic\nsolutions of Hamiltonian systems on tori, that was first shown by C. Conley and\nE. Zehnder in 1983. Our proof uses other methods and is shorter than the\nprevious one. We first show that the $E$-cohomological Conley index, that was\nintroduced by the first author recently, has a natural module structure. This\nyields a new cup-length and a lower bound for the number of critical points of\nfunctionals. Then an existence result for the $E$-cohomological Conley index,\nwhich applies to the setting of the Arnold conjecture, paves the way to a new\nproof of it on tori.\n", "title": "The $E$-cohomological Conley Index, Cup-Lengths and the Arnold Conjecture on $T^{2n}$" }
id: 13902 | annotation: ["Mathematics"] | multi_label: true | status: Validated

{ "abstract": " Diderot is a parallel domain-specific language for analysis and visualization\nof multidimensional scientific images, such as those produced by CT and MRI\nscanners. In particular, it supports algorithms where tensor fields (i.e.,\nfunctions from 3D points to tensor values) are used to represent the underlying\nphysical objects that were scanned by the imaging device. Diderot supports\nhigher-order programming where tensor fields are first-class values and where\ndifferential operators and lifted linear-algebra operators can be used to\nexpress mathematical reasoning directly in the language. While such lifted\nfield operations are central to the definition and computation of many\nscientific visualization algorithms, to date they have required extensive\nmanual derivations and laborious implementation.\nThe challenge for the Diderot compiler is to effectively translate the\nhigh-level mathematical concepts that are expressible in the surface language\nto a low-level and efficient implementation in C. This paper describes our\napproach to this challenge, which is based around the careful design of an\nintermediate representation (IR), called EIN, and a number of compiler\ntransformations that lower the program from tensor calculus to C while avoiding\ncombinatorial explosion in the size of the IR. We describe the challenges in\ncompiling a language like Diderot, the design of EIN, and the transformation\nused by the compiler. We also present an evaluation of EIN with respect to both\ncompiler efficiency and quality of generated code.\n", "title": "Compiling Diderot: From Tensor Calculus to C" }
id: 13903 | annotation: null | multi_label: true | status: Default

{ "abstract": " We consider the problem of semi-supervised few-shot classification where a\nclassifier needs to adapt to new tasks using a few labeled examples and\n(potentially many) unlabeled examples. We propose a clustering approach to the\nproblem. The features extracted with Prototypical Networks are clustered using\n$K$-means with the few labeled examples guiding the clustering process. We note\nthat in many real-world applications the adaptation performance can be\nsignificantly improved by requesting the few labels through user feedback. We\ndemonstrate good performance of the active adaptation strategy using image\ndata.\n", "title": "Semi-Supervised and Active Few-Shot Learning with Prototypical Networks" }
id: 13904 | annotation: null | multi_label: true | status: Default

{ "abstract": " We introduce and study the inhomogeneous exponential jump model - an\nintegrable stochastic interacting particle system on the continuous half line\nevolving in continuous time. An important feature of the system is the presence\nof arbitrary spatial inhomogeneity on the half line which does not break the\nintegrability. We completely characterize the macroscopic limit shape and\nasymptotic fluctuations of the height function (= integrated current) in the\nmodel. In particular, we explain how the presence of inhomogeneity may lead to\nmacroscopic phase transitions in the limit shape such as shocks or traffic\njams. Away from these singularities the asymptotic fluctuations of the height\nfunction around its macroscopic limit shape are governed by the GUE Tracy-Widom\ndistribution. A surprising result is that while the limit shape is\ndiscontinuous at a traffic jam caused by a macroscopic slowdown in the\ninhomogeneity, fluctuations on both sides of such a traffic jam still have the\nGUE Tracy-Widom distribution (but with different non-universal normalizations).\nThe integrability of the model comes from the fact that it is a degeneration\nof the inhomogeneous stochastic higher spin six vertex models studied earlier\nin arXiv:1601.05770 [math.PR]. Our results on fluctuations are obtained via an\nasymptotic analysis of Fredholm determinantal formulas arising from contour\nintegral expressions for the q-moments in the stochastic higher spin six vertex\nmodel. We also discuss \"product-form\" translation invariant stationary\ndistributions of the exponential jump model which lead to an alternative\nhydrodynamic-type heuristic derivation of the macroscopic limit shape.\n", "title": "Inhomogeneous exponential jump model" }
id: 13905 | annotation: null | multi_label: true | status: Default

{ "abstract": " Evidence accumulation models of simple decision-making have long assumed that\nthe brain estimates a scalar decision variable corresponding to the\nlog-likelihood ratio of the two alternatives. Typical neural implementations of\nthis algorithmic cognitive model assume that large numbers of neurons are each\nnoisy exemplars of the scalar decision variable. Here we propose a neural\nimplementation of the diffusion model in which many neurons construct and\nmaintain the Laplace transform of the distance to each of the decision bounds.\nAs in classic findings from brain regions including LIP, the firing rate of\nneurons coding for the Laplace transform of net accumulated evidence grows to a\nbound during random dot motion tasks. However, rather than noisy exemplars of a\nsingle mean value, this approach makes the novel prediction that firing rates\ngrow to the bound exponentially, across neurons there should be a distribution\nof different rates. A second set of neurons records an approximate inversion of\nthe Laplace transform, these neurons directly estimate net accumulated\nevidence. In analogy to time cells and place cells observed in the hippocampus\nand other brain regions, the neurons in this second set have receptive fields\nalong a \"decision axis.\" This finding is consistent with recent findings from\nrodent recordings. This theoretical approach places simple evidence\naccumulation models in the same mathematical language as recent proposals for\nrepresenting time and space in cognitive models for memory.\n", "title": "Evidence accumulation in a Laplace domain decision space" }
id: 13906 | annotation: null | multi_label: true | status: Default

{ "abstract": " The radiative lifetime of molecules or atoms can be increased by placing them\nwithin a tuned conductive cavity that inhibits spontaneous emission. This was\nexamined as a possible means of enhancing three-body, singlet-based\nupconversion, known as energy pooling. Achieving efficient upconversion of\nlight has potential applications in the fields of photovoltaics, biofuels, and\nmedicine. The affect of the photonically constrained environment on pooling\nefficiency was quantified using a kinetic model populated with data from\nmolecular quantum electrodynamics, perturbation theory, and ab initio\ncalculations. This model was applied to a system with fluorescein donors and a\nhexabenzocoronene acceptor. Placing the molecules within a conducting cavity\nwas found to increase the efficiency of energy pooling by increasing both the\ndonor lifetime and the acceptor emission rate--i.e. a combination of inhibited\nspontaneous emission and the Purcell effect. A model system with a free-space\npooling efficiency of 23% was found to have an efficiency of 47% in a\nrectangular cavity.\n", "title": "Improved Energy Pooling Efficiency Through Inhibited Spontaneous Emission" }
id: 13907 | annotation: null | multi_label: true | status: Default

{ "abstract": " The two most fundamental processes describing change in biology -development\nand evolution- occur over drastically different timescales, difficult to\nreconcile within a unified framework. Development involves a temporal sequence\nof cell states controlled by a hierarchy of regulatory structures. It occurs\nover the lifetime of a single individual, and is associated to the gene\nexpression level change of a given genotype. Evolution, by contrast entails\ngenotypic change through the acquisition or loss of genes, and involves the\nemergence of new, environmentally selected phenotypes over the lifetimes of\nmany individ- uals. Here we present a model of regulatory network evolution\nthat accounts for both timescales. We extend the framework of boolean models of\ngene regulatory network (GRN)-currently only applicable to describing\ndevelopment-to include evolutionary processes. As opposed to one-to-one maps to\nspecific attractors, we identify the phenotypes of the cells as the relevant\nmacrostates of the GRN. A pheno- type may now correspond to multiple\nattractors, and its formal definition no longer require a fixed size for the\ngenotype. This opens the possibility for a quantitative study of the phenotypic\nchange of a genotype, which is itself changing over evolutionary timescales. We\nshow how the realization of specific phenotypes can be controlled by gene\nduplication events, and how successive events of gene duplication lead to new\nregulatory structures via selection. It is these structures that enable control\nof macroscale patterning, as in development. The proposed framework therefore\nprovides a mechanistic explanation for the emergence of regulatory structures\ncontrolling development over evolutionary time.\n", "title": "A unified, mechanistic framework for developmental and evolutionary change" }
id: 13908 | annotation: ["Quantitative Biology"] | multi_label: true | status: Validated

{ "abstract": " A system modeling bacteriophage treatments with coinfections in a noisy\ncontext is analyzed. We prove that in a small noise regime, the system\nconverges in the long term to a bacteria free equilibrium. Moreover, we compare\nthe treatment with coinfection with the treatment without coinfection, showing\nhow the coinfection affects the dose of bacteriophages that is needed to\neliminate the bacteria and the velocity of convergence to the free bacteria\nequilibrium.\n", "title": "Coinfection in a stochastic model for bacteriophage systems" }
id: 13909 | annotation: null | multi_label: true | status: Default

{ "abstract": " Spin waves in chiral magnetic materials are strongly influenced by the\nDzyaloshinskii-Moriya interaction resulting in intriguing phenomena like\nnon-reciprocal magnon propagation and magnetochiral dichroism. Here, we study\nthe non-reciprocal magnon spectrum of the archetypical chiral magnet MnSi and\nits evolution as a function of magnetic field covering the field-polarized and\nconical helix phase. Using inelastic neutron scattering, the magnon energies\nand their spectral weights are determined quantitatively after deconvolution\nwith the instrumental resolution. In the field-polarized phase the imaginary\npart of the dynamical susceptibility $\\chi''(\\varepsilon, {\\bf q})$ is shown to\nbe asymmetric with respect to wavevectors ${\\bf q}$ longitudinal to the applied\nmagnetic field ${\\bf H}$, which is a hallmark of chiral magnetism. In the\nhelimagnetic phase, $\\chi''(\\varepsilon, {\\bf q})$ becomes increasingly\nsymmetric with decreasing ${\\bf H}$ due to the formation of helimagnon bands\nand the activation of additional spinflip and non-spinflip scattering channels.\nThe neutron spectra are in excellent quantitative agreement with the low-energy\ntheory of cubic chiral magnets with a single fitting parameter being the\ndamping rate of spin waves.\n", "title": "Field dependence of non-reciprocal magnons in chiral MnSi" }
id: 13910 | annotation: null | multi_label: true | status: Default

{ "abstract": " Moran or Wright-Fisher processes are probably the most well known model to\nstudy the evolution of a population under various effects. Our object of study\nwill be the Simpson index which measures the level of diversity of the\npopulation, one of the key parameter for ecologists who study for example\nforest dynamics. Following ecological motivations, we will consider here the\ncase where there are various species with fitness and immigration parameters\nbeing random processes (and thus time evolving). To measure biodiversity,\necologists generally use the Simpson index, who has no closed formula, except\nin the neutral (no selection) case via a backward approach, and which is\ndifficult to evaluate even numerically when the population size is large. Our\napproach relies on the large population limit in the \"weak\" selection case, and\nthus to give a procedure which enable us to approximate, with controlled rate,\nthe expectation of the Simpson index at fixed time. Our approach will be\nforward and valid for all time, which is the main difference with the\nhistorical approach of Kingman, or Krone-Neuhauser. We will also study the long\ntime behaviour of the Wright-Fisher process in a simplified setting, allowing\nus to get a full picture for the approximation of the expectation of the\nSimpson index.\n", "title": "On the Simpson index for the Moran process with random selection and immigration" }
id: 13911 | annotation: null | multi_label: true | status: Default

{ "abstract": " Preclinical magnetic resonance imaging often requires the entire body of an\nanimal to be imaged with sufficient quality. This is usually performed by\ncombining regions scanned with small coils with high sensitivity or long scans\nusing large coils with low sensitivity. Here, a metamaterial-inspired design\nemploying of a parallel array of wires operating on the principle of eigenmode\nhybridization is used to produce a small animal whole-body imaging coil. The\ncoil field distribution responsible for the coil field of view and sensitivity\nis simulated in an electromagnetic simulation package and the coil geometrical\nparameters are optimized for the chosen application. A prototype coil is then\nmanufactured and assembled using brass telescopic tubes and copper plates as\ndistributed capacitance, its field distribution is measured experimentally\nusing B1+ mapping technique and found to be in close correspondence with\nsimulated results. The coil field distribution is found to be suitable for\nwhole-body small animal imaging and coil image quality is compared with a\nnumber of commercially available coils by whole-body living mice scanning.\nSignal to noise measurements in living mice show outstanding coil performance\ncompared to commercially available coils with large receptive fields, and\nrivaling performance compared to small receptive field and high-sensitivity\ncoils. The coil is deemed suitable for whole-body small animal preclinical\napplications.\n", "title": "Small animal whole body imaging with metamaterial-inspired RF coil" }
id: 13912 | annotation: null | multi_label: true | status: Default

{ "abstract": " The scalable calculation of matrix determinants has been a bottleneck to the\nwidespread application of many machine learning methods such as determinantal\npoint processes, Gaussian processes, generalised Markov random fields, graph\nmodels and many others. In this work, we estimate log determinants under the\nframework of maximum entropy, given information in the form of moment\nconstraints from stochastic trace estimation. The estimates demonstrate a\nsignificant improvement on state-of-the-art alternative methods, as shown on a\nwide variety of UFL sparse matrices. By taking the example of a general Markov\nrandom field, we also demonstrate how this approach can significantly\naccelerate inference in large-scale learning methods involving the log\ndeterminant.\n", "title": "Entropic Trace Estimates for Log Determinants" }
id: 13913 | annotation: null | multi_label: true | status: Default

{ "abstract": " Over the years data has become increasingly higher dimensional, which has\nprompted an increased need for dimension reduction techniques. This is perhaps\nespecially true for clustering (unsupervised classification) as well as\nsemi-supervised and supervised classification. Although dimension reduction in\nthe area of clustering for multivariate data has been quite thoroughly\ndiscussed within the literature, there is relatively little work in the area of\nthree-way, or matrix variate, data. Herein, we develop a mixture of matrix\nvariate bilinear factor analyzers (MMVBFA) model for use in clustering\nhigh-dimensional matrix variate data. This work can be considered both the\nfirst matrix variate bilinear factor analysis model as well as the first MMVBFA\nmodel. Parameter estimation is discussed, and the MMVBFA model is illustrated\nusing simulated and real data.\n", "title": "A Mixture of Matrix Variate Bilinear Factor Analyzers" }
id: 13914 | annotation: null | multi_label: true | status: Default

{ "abstract": " This review paper fits in the context of the adequate matching of training to\nemployment, which is one of the main challenges that universities around the\nworld strive to meet. In higher education, the revision of curricula\nnecessitates a return to the skills required by the labor market to train\nskilled labors.\nIn this research, we started with the presentation of the conceptual\nframework. Then we quoted different currents that discussed the problematic of\nthe job training match from various perspectives. We proceeded to choose some\nstudies that have attempted to remedy this problem by adopting the\ncompetency-based approach that involves the referential line. This approach has\nas a main characteristic the attainment of the match between training and\nemployment. Therefore, it is a relevant solution for this problem. We\nscrutinized the selected studies, presenting their objectives, methodologies\nand results, and we provided our own analysis. Then, we focused on the Moroccan\ncontext through observations and studies already conducted. And finally, we\nintroduced the problematic of our future project.\n", "title": "The application of the competency-based approach to assess the training and employment adequacy problem" }
id: 13915 | annotation: ["Computer Science"] | multi_label: true | status: Validated

{ "abstract": " Understanding the generative mechanism of a natural system is a vital\ncomponent of the scientific method. Here, we investigate one of the fundamental\nsteps toward this goal by presenting the minimal generator of an arbitrary\nbinary Markov process. This is a class of processes whose predictive model is\nwell known. Surprisingly, the generative model requires three distinct\ntopologies for different regions of parameter space. We show that a previously\nproposed generator for a particular set of binary Markov processes is, in fact,\nnot minimal. Our results shed the first quantitative light on the relative\n(minimal) costs of prediction and generation. We find, for instance, that the\ndifference between prediction and generation is maximized when the process is\napproximately independently, identically distributed.\n", "title": "Prediction and Generation of Binary Markov Processes: Can a Finite-State Fox Catch a Markov Mouse?" }
id: 13916 | annotation: null | multi_label: true | status: Default

{ "abstract": " We investigate the self-organization of strongly interacting particles\nconfined in 1D and 2D. We consider hardcore bosons in spinless Hubbard lattice\nmodels with short range interactions. We show that, many-body orders with\ntopological characteristics emerge, at different energy bands separated by\nlarge gaps. These topological orders manifest in the way the particles organize\nin real space to form states with different energy. Each of these states\ncontains topological defects/condensations whose Euler characteristic can be\nused as a topological number to categorize states belonging to the same energy\nband. We provide analytical formulas for this topological number and the full\nenergy spectrum of the system for both sparsely and densely filled systems.\nFurthermore, we discuss the connection with the Gauss-Bonnet theorem of\ndifferential geometry, by using the curvature generated in real space by the\nparticle structures. Our result is a demonstration of how topological orders\ncan arise in strongly interacting many-body systems with simple underlying\nrules, without considering the spin, long-range microscopic interactions, or\nexternal fields.\n", "title": "Topological orders of strongly interacting particles" }
id: 13917 | annotation: null | multi_label: true | status: Default

{ "abstract": " The virtual unknotting number of a virtual knot is the minimal number of\ncrossing changes that makes the virtual knot to be the unknot, which is defined\nonly for virtual knots virtually homotopic to the unknot. We focus on the\nvirtual knot obtained from the standard (p,q)-torus knot diagram by replacing\nall crossings on one overstrand into virtual crossings and prove that its\nvirtual unknotting number is equal to the unknotting number of the\n$(p,q)$-torus knot, i.e. it is (p-1)(q-1)/2.\n", "title": "Virtual unknotting numbers of certain virtual torus knots" }
id: 13918 | annotation: null | multi_label: true | status: Default

{ "abstract": " We describe an algorithm to evaluate all the complex branches of the Lambert\nW function with rigorous error bounds in interval arithmetic, which has been\nimplemented in the Arb library. The classic 1996 paper on the Lambert W\nfunction by Corless et al. provides a thorough but partly heuristic numerical\nanalysis which needs to be complemented with some explicit inequalities and\npractical observations about managing precision and branch cuts.\n", "title": "Computing the Lambert W function in arbitrary-precision complex interval arithmetic" }
id: 13919 | annotation: null | multi_label: true | status: Default

{ "abstract": " The rise in life expectancy is one of the great achievements of the twentieth\ncentury. This phenomenon originates a still increasing interest in Ambient\nAssisted Living (AAL) technological solutions that may support people in their\ndaily routines allowing an independent and safe lifestyle as long as possible.\nAAL systems generally acquire data from the field and reason on them and the\ncontext to accomplish their tasks. Very often, AAL systems are vertical\nsolutions, thus making hard their reuse and adaptation to different domains\nwith respect to the ones for which they have been developed. In this paper we\npropose an architectural solution that allows the acquisition level of an ALL\nsystem to be easily built, configured, and extended without affecting the\nreasoning level of the system. We experienced our proposal in a fall detection\nsystem.\n", "title": "An Architecture for Embedded Systems Supporting Assisted Living" }
id: 13920 | annotation: null | multi_label: true | status: Default

{ "abstract": " We explore the Hunters and Rabbits game on the hypercube. In the process, we\nfind the solution for all classes of graphs with an isoperimetric nesting\nproperty and find the exact hunter number of $Q^n$ to be\n$1+\\sum\\limits_{i=0}^{n-2} \\binom{i}{\\lfloor i/2 \\rfloor}$. In addition, we\nextend results to the situation where we allow the rabbit to not move between\nshots.\n", "title": "Hunting Rabbits on the Hypercube" }
id: 13921 | annotation: null | multi_label: true | status: Default

{ "abstract": " Quantifying the relation between gut microbiome and body weight can provide\ninsights into personalized strategies for improving digestive health. In this\npaper, we present an algorithm that predicts weight fluctuations using gut\nmicrobiome in a healthy cohort of newborns from a previously published dataset.\nMicrobial data has been known to present unique statistical challenges that\ndefy most conventional models. We propose a mixed effect Dirichlet-tree\nmultinomial (DTM) model to untangle these difficulties as well as incorporate\ncovariate information and account for species relatedness. The DTM setup allows\none to easily invoke empirical Bayes shrinkage on each node for enhanced\ninference of microbial proportions. Using these estimates, we subsequently\napply random forest for weight prediction and obtain a microbiome-inferred\nweight metric. Our result demonstrates that microbiome-inferred weight is\nsignificantly associated with weight changes in the future and its non-trivial\neffect size makes it a viable candidate to forecast weight progression.\n", "title": "Mixed Effect Dirichlet-Tree Multinomial for Longitudinal Microbiome Data and Weight Prediction" }
id: 13922 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper, we study a wireless packet broadcast system that uses linear\nnetwork coding (LNC) to help receivers recover data packets that are missing\ndue to packet erasures. We study two intertwined performance metrics, namely\nthroughput and average packet decoding delay (APDD) and establish strong/weak\napproximation relations based on whether the approximation holds for the\nperformance of every receiver (strong) or for the average performance across\nall receivers (weak). We prove an equivalence between strong throughput\napproximation and strong APDD approximation. We prove that throughput-optimal\nLNC techniques can strongly approximate APDD, and partition-based LNC\ntechniques may weakly approximate throughput. We also prove that memoryless LNC\ntechniques, including instantly decodable network coding techniques, are not\nstrong throughput and APDD approximation nor weak throughput approximation\ntechniques.\n", "title": "Approximating Throughput and Packet Decoding Delay in Linear Network Coded Wireless Broadcast" }
id: 13923 | annotation: null | multi_label: true | status: Default

{ "abstract": " Any considerations on propagation of particles through the Universe must\ninvolve particle interactions: processes leading to production of particle\ncascades. While one expects existence of such cascades, the state of the art\ncosmic-ray research is oriented purely on a detection of single particles,\ngamma rays or associated extensive air showers. The natural extension of the\ncosmic-ray research with the studies on ensembles of particles and air showers\nis being proposed by the CREDO Collaboration. Within the CREDO strategy the\nfocus is put on generalized super-preshowers (SPS): spatially and/or temporally\nextended cascades of particles originated above the Earth atmosphere, possibly\neven at astrophysical distances. With CREDO we want to find out whether SPS can\nbe at least partially observed by a network of terrestrial and/or satellite\ndetectors receiving primary or secondary cosmic-ray signal. This paper\naddresses electromagnetic SPS, e.g. initiated by VHE photons interacting with\nthe cosmic microwave background, and the SPS signatures that can be seen by\ngamma-ray telescopes, exploring the exampleof Cherenkov Telescope Array. The\nenergy spectrum of secondary electrons and photons in an electromagnetic\nsuper-preshower might be extended over awide range of energy, down to TeV or\neven lower, as it is evident from the simulation results. This means that\nelectromagnetic showers induced by such particles in the Earth atmosphere could\nbe observed by imaging atmospheric Cherenkov telescopes. We present preliminary\nresults from the study of response of the Cherenkov Telescope Array to SPS\nevents, including the analysis of the simulated shower images on the camera\nfocal plane and implementedgeneric reconstruction chains based on the Hillas\nparameters.\n", "title": "Search for electromagnetic super-preshowers using gamma-ray telescopes" }
id: 13924 | annotation: null | multi_label: true | status: Default

{ "abstract": " We consider a point cloud $X_n := \\{ x_1, \\dots, x_n \\}$ uniformly\ndistributed on the flat torus $\\mathbb{T}^d : = \\mathbb{R}^d / \\mathbb{Z}^d $,\nand construct a geometric graph on the cloud by connecting points that are\nwithin distance $\\epsilon$ of each other. We let $\\mathcal{P}(X_n)$ be the\nspace of probability measures on $X_n$ and endow it with a discrete Wasserstein\ndistance $W_n$ as introduced independently by Maas and Zhou et al. for general\nfinite Markov chains. We show that as long as $\\epsilon= \\epsilon_n$ decays\ntowards zero slower than an explicit rate depending on the level of uniformity\nof $X_n$, then the space $(\\mathcal{P}(X_n), W_n)$ converges in the\nGromov-Hausdorff sense towards the space of probability measures on\n$\\mathbb{T}^d$ endowed with the Wasserstein distance.\n", "title": "Gromov-Hausdorff limit of Wasserstein spaces on point clouds" }
id: 13925 | annotation: ["Mathematics", "Statistics"] | multi_label: true | status: Validated

{ "abstract": " We modify the definable ultrapower construction of Kanovei and Shelah (2004)\nto develop a ZF-definable extension of the continuum with transfer provable\nusing countable choice only, with an additional mild hypothesis on\nwell-ordering implying properness. Under the same assumptions, we also prove\nthe existence of a definable, proper elementary extension of the standard\nsuperstructure over the reals.\nKeywords: definability; hyperreal; superstructure; elementary embedding.\n", "title": "Minimal axiomatic frameworks for definable hyperreals with transfer" }
id: 13926 | annotation: null | multi_label: true | status: Default

{ "abstract": " We present a novel notion of complexity that interpolates between and\ngeneralizes some classic existing complexity notions in learning theory: for\nestimators like empirical risk minimization (ERM) with arbitrary bounded\nlosses, it is upper bounded in terms of data-independent Rademacher complexity;\nfor generalized Bayesian estimators, it is upper bounded by the data-dependent\ninformation complexity (also known as stochastic or PAC-Bayesian,\n$\\mathrm{KL}(\\text{posterior} \\operatorname{\\|} \\text{prior})$ complexity. For\n(penalized) ERM, the new complexity reduces to (generalized) normalized maximum\nlikelihood (NML) complexity, i.e. a minimax log-loss individual-sequence\nregret. Our first main result bounds excess risk in terms of the new\ncomplexity. Our second main result links the new complexity via Rademacher\ncomplexity to $L_2(P)$ entropy, thereby generalizing earlier results of Opper,\nHaussler, Lugosi, and Cesa-Bianchi who did the log-loss case with $L_\\infty$.\nTogether, these results recover optimal bounds for VC- and large (polynomial\nentropy) classes, replacing localized Rademacher complexity by a simpler\nanalysis which almost completely separates the two aspects that determine the\nachievable rates: 'easiness' (Bernstein) conditions and model complexity.\n", "title": "A Tight Excess Risk Bound via a Unified PAC-Bayesian-Rademacher-Shtarkov-MDL Complexity" }
id: 13927 | annotation: null | multi_label: true | status: Default

{ "abstract": " Given a graph on $n$ vertices and an integer $k$, the feedback vertex set\nproblem asks for the deletion of at most $k$ vertices to make the graph\nacyclic. We show that a greedy branching algorithm, which always branches on an\nundecided vertex with the largest degree, runs in single-exponential time,\ni.e., $O(c^k\\cdot n^2)$ for some constant $c$.\n", "title": "A Naive Algorithm for Feedback Vertex Set" }
id: 13928 | annotation: null | multi_label: true | status: Default

{ "abstract": " Suppose, we are given a set of $n$ elements to be clustered into $k$\n(unknown) clusters, and an oracle/expert labeler that can interactively answer\npair-wise queries of the form, \"do two elements $u$ and $v$ belong to the same\ncluster?\". The goal is to recover the optimum clustering by asking the minimum\nnumber of queries. In this paper, we initiate a rigorous theoretical study of\nthis basic problem of query complexity of interactive clustering, and provide\nstrong information theoretic lower bounds, as well as nearly matching upper\nbounds. Most clustering problems come with a similarity matrix, which is used\nby an automated process to cluster similar points together. Our main\ncontribution in this paper is to show the dramatic power of side information\naka similarity matrix on reducing the query complexity of clustering. A\nsimilarity matrix represents noisy pair-wise relationships such as one computed\nby some function on attributes of the elements. A natural noisy model is where\nsimilarity values are drawn independently from some arbitrary probability\ndistribution $f_+$ when the underlying pair of elements belong to the same\ncluster, and from some $f_-$ otherwise. We show that given such a similarity\nmatrix, the query complexity reduces drastically from $\\Theta(nk)$ (no\nsimilarity matrix) to $O(\\frac{k^2\\log{n}}{\\cH^2(f_+\\|f_-)})$ where $\\cH^2$\ndenotes the squared Hellinger divergence. Moreover, this is also\ninformation-theoretic optimal within an $O(\\log{n})$ factor. Our algorithms are\nall efficient, and parameter free, i.e., they work without any knowledge of $k,\nf_+$ and $f_-$, and only depend logarithmically with $n$. Along the way, our\nwork also reveals intriguing connection to popular community detection models\nsuch as the {\\em stochastic block model}, significantly generalizes them, and\nopens up many venues for interesting future research.\n", "title": "Query Complexity of Clustering with Side Information" }
id: 13929 | annotation: null | multi_label: true | status: Default

{ "abstract": " Kriging based on Gaussian random fields is widely used in reconstructing\nunknown functions. The kriging method has pointwise predictive distributions\nwhich are computationally simple. However, in many applications one would like\nto predict for a range of untried points simultaneously. In this work we obtain\nsome error bounds for the (simple) kriging predictor under the uniform metric.\nIt works for a scattered set of input points in an arbitrary dimension, and\nalso covers the case where the covariance function of the Gaussian process is\nmisspecified. These results lead to a better understanding of the rate of\nconvergence of kriging under the Gaussian or the Matérn correlation\nfunctions, the relationship between space-filling designs and kriging models,\nand the robustness of the Matérn correlation functions.\n", "title": "On Prediction Properties of Kriging: Uniform Error Bounds and Robustness" }
id: 13930 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study the problem of maximizing a monotone submodular function subject to\na cardinality constraint $k$, with the added twist that a number of items\n$\\tau$ from the returned set may be removed. We focus on the worst-case setting\nconsidered in (Orlin et al., 2016), in which a constant-factor approximation\nguarantee was given for $\\tau = o(\\sqrt{k})$. In this paper, we solve a key\nopen problem raised therein, presenting a new Partitioned Robust (PRo)\nsubmodular maximization algorithm that achieves the same guarantee for more\ngeneral $\\tau = o(k)$. Our algorithm constructs partitions consisting of\nbuckets with exponentially increasing sizes, and applies standard submodular\noptimization subroutines on the buckets in order to construct the robust\nsolution. We numerically demonstrate the performance of PRo in data\nsummarization and influence maximization, demonstrating gains over both the\ngreedy algorithm and the algorithm of (Orlin et al., 2016).\n", "title": "Robust Submodular Maximization: A Non-Uniform Partitioning Approach" }
id: 13931 | annotation: null | multi_label: true | status: Default

{ "abstract": " Multi-armed bandits are a quintessential machine learning problem requiring\nthe balancing of exploration and exploitation. While there has been progress in\ndeveloping algorithms with strong theoretical guarantees, there has been less\nfocus on practical near-optimal finite-time performance. In this paper, we\npropose an algorithm for Bayesian multi-armed bandits that utilizes\nvalue-function-driven online planning techniques. Building on previous work on\nUCB and Gittins index, we introduce linearly-separable value functions that\ntake both the expected return and the benefit of exploration into consideration\nto perform n-step lookahead. The algorithm enjoys a sub-linear performance\nguarantee and we present simulation results that confirm its strength in\nproblems with structured priors. The simplicity and generality of our approach\nmakes it a strong candidate for analyzing more complex multi-armed bandit\nproblems.\n", "title": "Value Directed Exploration in Multi-Armed Bandits with Structured Priors" }
id: 13932 | annotation: null | multi_label: true | status: Default

{ "abstract": " Mantel's test (MT) for association is conducted by testing the linear\nrelationship of similarity of all pairs of subjects between two observational\ndomains. Motivated by applications to neuroimaging and genetics data, and\nfollowing the succes of shrinkage and kernel methods for prediction with\nhigh-dimensional data, we here introduce the adaptive Mantel test as an\nextension of the MT. By utilizing kernels and penalized similarity measures,\nthe adaptive Mantel test is able to achieve higher statistical power relative\nto the classical MT in many settings. Furthermore, the adaptive Mantel test is\ndesigned to simultaneously test over multiple similarity measures such that the\ncorrect type I error rate under the null hypothesis is maintained without the\nneed to directly adjust the significance threshold for multiple testing. The\nperformance of the adaptive Mantel test is evaluated on simulated data, and is\nused to investigate associations between genetics markers related to\nAlzheimer's Disease and heatlhy brain physiology with data from a working\nmemory study of 350 college students from Beijing Normal University.\n", "title": "Adaptive Mantel Test for AssociationTesting in Imaging Genetics Data" }
id: 13933 | annotation: ["Statistics"] | multi_label: true | status: Validated

{ "abstract": " We present an algorithm for approximating a function defined over a\n$d$-dimensional manifold utilizing only noisy function values at locations\nsampled from the manifold with noise. To produce the approximation we do not\nrequire any knowledge regarding the manifold other than its dimension $d$. The\napproximation scheme is based upon the Manifold Moving Least-Squares (MMLS).\nThe proposed algorithm is resistant to noise in both the domain and function\nvalues. Furthermore, the approximant is shown to be smooth and of approximation\norder of $\\mathcal{O}(h^{m+1})$ for non-noisy data, where $h$ is the mesh size\nwith respect to the manifold domain, and $m$ is the degree of a local\npolynomial approximation utilized in our algorithm. In addition, the proposed\nalgorithm is linear in time with respect to the ambient-space's dimension.\nThus, in case of extremely large ambient space dimension, we are able to avoid\nthe curse of dimensionality without having to perform non-linear dimension\nreduction, which introduces distortions to the manifold data. Using numerical\nexperiments, we compare the presented method to state-of-the-art algorithms for\nregression over manifolds and show its potential.\n", "title": "Approximation of Functions over Manifolds: A Moving Least-Squares Approach" }
id: 13934 | annotation: null | multi_label: true | status: Default

{ "abstract": " Time delay in general leads to instability in some systems, while a specific\nfeedback with delay can control fluctuated motion in nonlinear deterministic\nsystems to a stable state. In this paper, we consider a non-stationary\nstochastic process, i.e., a random walk and observe its diffusion phenomenon\nwith time delayed feedback. Surprisingly, the diffusion coefficient decreases\nwith increasing the delay time. We analytically illustrate this suppression of\ndiffusion by using stochastic delay differential equations and justify the\nfeasibility of this suppression by applying the time-delay feedback to a\nmolecular dynamics model.\n", "title": "Delay sober up drunkers: Control of diffusion in random walkers" }
id: 13935 | annotation: null | multi_label: true | status: Default

{ "abstract": " Primordial Black Holes (PBH) could be the cold dark matter of the universe.\nThey could have arisen from large (order one) curvature fluctuations produced\nduring inflation that reentered the horizon in the radiation era. At reentry,\nthese fluctuations source gravitational waves (GW) via second order anisotropic\nstresses. These GW, together with those (possibly) sourced during inflation by\nthe same mechanism responsible for the large curvature fluctuations, constitute\na primordial stochastic GW background (SGWB) that unavoidably accompanies the\nPBH formation. We study how the amplitude and the range of frequencies of this\nsignal depend on the statistics (Gaussian versus $\\chi^2$) of the primordial\ncurvature fluctuations, and on the evolution of the PBH mass function due to\naccretion and merging. We then compare this signal with the sensitivity of\npresent and future detectors, at PTA and LISA scales. We find that this SGWB\nwill help to probe, or strongly constrain, the early universe mechanism of PBH\nproduction. The comparison between the peak mass of the PBH distribution and\nthe peak frequency of this SGWB will provide important information on the\nmerging and accretion evolution of the PBH mass distribution from their\nformation to the present era. Different assumptions on the statistics and on\nthe PBH evolution also result in different amounts of CMB $\\mu$-distortions.\nTherefore the above results can be complemented by the detection (or the\nabsence) of $\\mu$-distortions with an experiment such as PIXIE.\n", "title": "Gravitational Wave signatures of inflationary models from Primordial Black Hole Dark Matter" }
id: 13936 | annotation: ["Physics"] | multi_label: true | status: Validated

{ "abstract": " PEGs were formalized by Ford in 2004, and have several pragmatic operators\n(such as ordered choice and unlimited lookahead) for better expressing modern\nprogramming language syntax. Since these operators are not explicitly defined\nin the classic formal language theory, it is significant and still challenging\nto argue PEGs' expressiveness in the context of formal language theory.Since\nPEGs are relatively new, there are several unsolved problems.One of the\nproblems is revealing a subclass of PEGs that is equivalent to DFAs. This\nallows application of some techniques from the theory of regular grammar to\nPEGs. In this paper, we define Linear PEGs (LPEGs), a subclass of PEGs that is\nequivalent to DFAs. Surprisingly, LPEGs are formalized by only excluding some\npatterns of recursive nonterminal in PEGs, and include the full set of ordered\nchoice, unlimited lookahead, and greedy repetition, which are characteristic of\nPEGs. Although the conversion judgement of parsing expressions into DFAs is\nundecidable in general, the formalism of LPEGs allows for a syntactical\njudgement of parsing expressions.\n", "title": "Linear Parsing Expression Grammars" }
id: 13937 | annotation: null | multi_label: true | status: Default

{ "abstract": " A central question in neuroscience is how to develop realistic models that\npredict output firing behavior based on provided external stimulus. Given a set\nof external inputs and a set of output spike trains, the objective is to\ndiscover a network structure which can accomplish the transformation as\naccurately as possible. Due to the difficulty of this problem in its most\ngeneral form, approximations have been made in previous work. Past\napproximations have sacrificed network size, recurrence, allowed spiked count,\nor have imposed layered network structure. Here we present a learning rule\nwithout these sacrifices, which produces a weight matrix of a leaky\nintegrate-and-fire (LIF) network to match the output activity of both\ndeterministic LIF networks as well as probabilistic integrate-and-fire (PIF)\nnetworks. Inspired by synaptic scaling, our pre-synaptic pool modification\n(PSPM) algorithm outputs deterministic, fully recurrent spiking neural networks\nthat can provide a novel generative model for given spike trains. Similarity in\noutput spike trains is evaluated with a variety of metrics including a\nvan-Rossum like measure and a numerical comparison of inter-spike interval\ndistributions. Application of our algorithm to randomly generated networks\nimproves similarity to the reference spike trains on both of these stated\nmeasures. In addition, we generated LIF networks that operate near criticality\nwhen trained on critical PIF outputs. Our results establish that learning rules\nbased on synaptic homeostasis can be used to represent input-output\nrelationships in fully recurrent spiking neural networks.\n", "title": "Pre-Synaptic Pool Modification (PSPM): A Supervised Learning Procedure for Spiking Neural Networks" }
id: 13938 | annotation: null | multi_label: true | status: Default

{ "abstract": " We present rest-frame optical spectra from the FMOS-COSMOS survey of twelve\n$z \\sim 1.6$ \\textit{Herschel} starburst galaxies, with Star Formation Rate\n(SFR) elevated by $\\times$8, on average, above the star-forming Main Sequence\n(MS). Comparing the H$\\alpha$ to IR luminosity ratio and the Balmer Decrement\nwe find that the optically-thin regions of the sources contain on average only\n$\\sim 10$ percent of the total SFR whereas $\\sim90$ percent comes from an\nextremely obscured component which is revealed only by far-IR observations and\nis optically-thick even in H$\\alpha$. We measure the [NII]$_{6583}$/H$\\alpha$\nratio, suggesting that the less obscured regions have a metal content similar\nto that of the MS population at the same stellar masses and redshifts. However,\nour objects appear to be metal-rich outliers from the metallicity-SFR\nanticorrelation observed at fixed stellar mass for the MS population. The\n[SII]$_{6732}$/[SII]$_{6717}$ ratio from the average spectrum indicates an\nelectron density $n_{\\rm e} \\sim 1,100\\ \\mathrm{cm}^{-3}$, larger than what\nestimated for MS galaxies but only at the 1.5$\\sigma$ level. Our results\nprovide supporting evidence that high-$z$ MS outliers are the analogous of\nlocal ULIRGs, and are consistent with a major merger origin for the starburst\nevent.\n", "title": "The Bright and Dark Sides of High-Redshift starburst galaxies from {\\it Herschel} and {\\it Subaru} observations" }
id: 13939 | annotation: ["Physics"] | multi_label: true | status: Validated

{ "abstract": " We study the task of estimating the number of edges in a graph with access to\nonly an independent set oracle. Independent set queries draw motivation from\ngroup testing and have applications to the complexity of decision versus\ncounting problems. We give two algorithms to estimate the number of edges in an\n$n$-vertex graph, using (i) $\\mathrm{polylog}(n)$ bipartite independent set\nqueries, or (ii) ${n}^{2/3} \\cdot\\mathrm{polylog}(n)$ independent set queries.\n", "title": "Edge Estimation with Independent Set Oracles" }
id: 13940 | annotation: null | multi_label: true | status: Default

{ "abstract": " We explore Random Scale-Free networks of populations, modelled by chaotic\nRicker maps, connected by transport that is triggered when population density\nin a patch is in excess of a critical threshold level. Our central result is\nthat threshold-activated dispersal leads to stable fixed populations, for a\nwide range of threshold levels. Further, suppression of chaos is facilitated\nwhen the threshold-activated migration is more rapid than the intrinsic\npopulation dynamics of a patch. Additionally, networks with large number of\nnodes open to the environment, readily yield stable steady states. Lastly we\ndemonstrate that in networks with very few open nodes, the degree and\nbetweeness centrality of the node open to the environment has a pronounced\ninfluence on control. All qualitative trends are corroborated by quantitative\nmeasures, reflecting the efficiency of control, and the width of the steady\nstate window.\n", "title": "Threshold-activated transport stabilizes chaotic populations to steady states" }
id: 13941 | annotation: null | multi_label: true | status: Default

{ "abstract": " We investigate the complexity of deep neural networks (DNN) that represent\npiecewise linear (PWL) functions. In particular, we study the number of linear\nregions, i.e. pieces, that a PWL function represented by a DNN can attain, both\ntheoretically and empirically. We present (i) tighter upper and lower bounds\nfor the maximum number of linear regions on rectifier networks, which are exact\nfor inputs of dimension one; (ii) a first upper bound for multi-layer maxout\nnetworks; and (iii) a first method to perform exact enumeration or counting of\nthe number of regions by modeling the DNN with a mixed-integer linear\nformulation. These bounds come from leveraging the dimension of the space\ndefining each linear region. The results also indicate that a deep rectifier\nnetwork can only have more linear regions than every shallow counterpart with\nsame number of neurons if that number exceeds the dimension of the input.\n", "title": "Bounding and Counting Linear Regions of Deep Neural Networks" }
id: 13942 | annotation: ["Computer Science", "Statistics"] | multi_label: true | status: Validated

{ "abstract": " We study the long-range, long-time behavior of the reactive-telegraph\nequation and a related reactive-kinetic model. The two problems are equivalent\nin one spatial dimension. We point out that the reactive-telegraph equation,\nmeant to model a population density, does not preserve positivity in higher\ndimensions. In view of this, in dimensions larger than one, we consider a\nreactive-kinetic model and investigate the long-range, long-time limit of the\nsolutions. We provide a general characterization of the speed of propagation\nand we compute it explicitly in one and two dimensions. We show that a phase\ntransition between parabolic and hyperbolic behavior takes place only in one\ndimension. Finally, we investigate the hydrodynamic limit of the limiting\nproblem.\n", "title": "The reactive-telegraph equation and a related kinetic model" }
id: 13943 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study abelian varieties and K3 surfaces with complex multiplication\ndefined over number fields of fixed degree. We show that these varieties fall\ninto finitely many isomorphism classes over an algebraic closure of the field\nof rational numbers. As an application we confirm finiteness conjectures of\nShafarevich and Coleman in the CM case. In addition we prove the uniform\nboundedness of the Galois invariant subgroup of the geometric Brauer group for\nforms of a smooth projective variety satisfying the integral Mumford--Tate\nconjecture. When applied to K3 surfaces, this affirms a conjecture of\nVárilly-Alvarado in the CM case.\n", "title": "Finiteness theorems for K3 surfaces and abelian varieties of CM type" }
id: 13944 | annotation: null | multi_label: true | status: Default

{ "abstract": " Considering a granular fluid of inelastic smooth hard spheres we discuss the\nconditions delineating the rheological regimes comprising Newtonian,\nBagnoldian, shear thinning, and shear thickening behavior. Developing a kinetic\ntheory, valid at finite shear rates and densities around the glass transition\ndensity, we predict the viscosity and Bagnold coefficient at practically\nrelevant values of the control parameters. The determination of full flow\ncurves relating the shear stress $\\sigma$ to the shear rate $\\dot\\gamma$, and\npredictions of the yield stress complete our discussion of granular rheology\nderived from first principles.\n", "title": "Rheology of inelastic hard spheres at finite density and shear rate" }
id: 13945 | annotation: null | multi_label: true | status: Default

{ "abstract": " To better understand the energy response of the Antineutrino Detector (AD),\nthe Daya Bay Reactor Neutrino Experiment installed a full Flash ADC readout\nsystem on one AD that allowed for simultaneous data taking with the current\nreadout system. This paper presents the design, data acquisition, and\nsimulation of the Flash ADC system, and focuses on the PMT waveform\nreconstruction algorithms. For liquid scintillator calorimetry, the most\ncritical requirement to waveform reconstruction is linearity. Several common\nreconstruction methods were tested but the linearity performance was not\nsatisfactory. A new method based on the deconvolution technique was developed\nwith 1% residual non-linearity, which fulfills the requirement. The performance\nwas validated with both data and Monte Carlo (MC) simulations, and 1%\nconsistency between them has been achieved.\n", "title": "The Flash ADC system and PMT waveform reconstruction for the Daya Bay Experiment" }
id: 13946 | annotation: null | multi_label: true | status: Default

{ "abstract": " The present paper generalises the results of Ray and Buchstaber-Ray,\nBuchstaber-Panov-Ray in unitary cobordism theory. I prove that any class $x\\in\n\\Omega^{*}_{U}$ of the unitary cobordism ring contains a quasitoric totally\nnormally and tangentially split manifold.\n", "title": "Quasitoric totally normally split representatives in unitary cobordism ring" }
id: 13947 | annotation: null | multi_label: true | status: Default

{ "abstract": " This paper presents an automated approach for interpretable feature\nrecommendation for solving signal data analytics problems. The method has been\ntested by performing experiments on datasets in the domain of prognostics where\ninterpretation of features is considered very important. The proposed approach\nis based on Wide Learning architecture and provides means for interpretation of\nthe recommended features. It is to be noted that such an interpretation is not\navailable with feature learning approaches like Deep Learning (such as\nConvolutional Neural Network) or feature transformation approaches like\nPrincipal Component Analysis. Results show that the feature recommendation and\ninterpretation techniques are quite effective for the problems at hand in terms\nof performance and drastic reduction in time to develop a solution. It is\nfurther shown by an example, how this human-in-loop interpretation system can\nbe used as a prescriptive system.\n", "title": "Interpretable Feature Recommendation for Signal Analytics" }
id: 13948 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study the problem of learning one-hidden-layer neural networks with\nRectified Linear Unit (ReLU) activation function, where the inputs are sampled\nfrom standard Gaussian distribution and the outputs are generated from a noisy\nteacher network. We analyze the performance of gradient descent for training\nsuch kind of neural networks based on empirical risk minimization, and provide\nalgorithm-dependent guarantees. In particular, we prove that tensor\ninitialization followed by gradient descent can converge to the ground-truth\nparameters at a linear rate up to some statistical error. To the best of our\nknowledge, this is the first work characterizing the recovery guarantee for\npractical learning of one-hidden-layer ReLU networks with multiple neurons.\nNumerical experiments verify our theoretical findings.\n", "title": "Learning One-hidden-layer ReLU Networks via Gradient Descent" }
null
null
null
null
true
null
13949
null
Default
null
null
null
{ "abstract": " Absolute positioning is an essential factor for the arrival of autonomous\ndriving. Global Navigation Satellites System (GNSS) receiver provides absolute\nlocalization for it. GNSS solution can provide satisfactory positioning in open\nor sub-urban areas, however, its performance suffered in super-urbanized area\ndue to the phenomenon which are well-known as multipath effects and NLOS\nreceptions. The effects dominate GNSS positioning performance in the area. The\nrecent proposed 3D map aided (3DMA) GNSS can mitigate most of the multipath\neffects and NLOS receptions caused by buildings based on 3D city models.\nHowever, the same phenomenon caused by moving objects in urban area is\ncurrently not modelled in the 3D geographic information system (GIS). Moving\nobjects with tall height, such as the double-decker bus, can also cause NLOS\nreceptions because of the blockage of GNSS signals by surface of objects.\nTherefore, we present a novel method to exclude the NLOS receptions caused by\ndouble-decker bus in highly urbanized area, Hong Kong. To estimate the geometry\ndimension and orientation relative to GPS receiver, a Euclidean cluster\nalgorithm and a classification method are used to detect the double-decker\nbuses and calculate their relative locations. To increase the accuracy and\nreliability of the proposed NLOS exclusion method, an NLOS exclusion criterion\nis proposed to exclude the blocked satellites considering the elevation, signal\nnoise ratio (SNR) and horizontal dilution of precision (HDOP). Finally, GNSS\npositioning is estimated by weighted least square (WLS) method using the\nremaining satellites after the NLOS exclusion. A static experiment was\nperformed near a double-decker bus stop in Hong Kong, which verified the\neffectiveness of the proposed method.\n", "title": "Exclusion of GNSS NLOS Receptions Caused by Dynamic Objects in Heavy Traffic Urban Scenarios Using Real-Time 3D Point Cloud: An Approach without 3D Maps" }
null
null
[ "Computer Science" ]
null
true
null
13950
null
Validated
null
null
null
{ "abstract": " Intrinsically nonlinear coupled systems present different oscillating\ncomponents that exchange energy among themselves. We present a new approach to\ndeal with such energy exchanges and to investigate how it depends on the system\ncontrol parameters. The method consists in writing the total energy of the\nsystem, and properly identifying the energy terms for each component and,\nespecially, their coupling. To illustrate the proposed approach, we work with\nthe bi-dimensional spring pendulum, which is a paradigm to study nonlinear\ncoupled systems, and is used as a model for several systems. For the spring\npendulum, we identify three energy components, resembling the spring and\npendulum like motions, and the coupling between them. With these analytical\nexpressions, we analyze the energy exchange for individual trajectories, and we\nalso obtain global characteristics of the spring pendulum energy distribution\nby calculating spatial and time average energy components for a great number of\ntrajectories (periodic, quasi-periodic and chaotic) throughout the phase space.\nConsidering an energy term due to the nonlinear coupling, we identify regions\nin the parameter space that correspond to strong and weak coupling. The\npresented procedure can be applied to nonlinear coupled systems to reveal how\nthe coupling mediates internal energy exchanges, and how the energy\ndistribution varies according to the system parameters.\n", "title": "Energy Distribution in Intrinsically Coupled Systems: The Spring Pendulum Paradigm" }
null
null
null
null
true
null
13951
null
Default
null
null
null
{ "abstract": " In this article, we consider Cayley deformations of a compact complex surface\nin a Calabi--Yau four-fold. We will study complex deformations of compact\ncomplex submanifolds of Calabi--Yau manifolds with a view to explaining why\ncomplex and Cayley deformations of a compact complex surface are the same. We\nin fact prove that the moduli space of complex deformations of any compact\ncomplex embedded submanifold of a Calabi--Yau manifold is a smooth manifold.\n", "title": "Cayley deformations of compact complex surfaces" }
null
null
[ "Mathematics" ]
null
true
null
13952
null
Validated
null
null
null
{ "abstract": " The pairwise maximum entropy model, also known as the Ising model, has been\nwidely used to analyze the collective activity of neurons. However, controversy\npersists in the literature about seemingly inconsistent findings, whose\nsignificance is unclear due to lack of reliable error estimates. We therefore\ndevelop a method for accurately estimating parameter uncertainty based on\nrandom walks in parameter space using adaptive Markov Chain Monte Carlo after\nthe convergence of the main optimization algorithm. We apply our method to the\nspiking patterns of excitatory and inhibitory neurons recorded with\nmultielectrode arrays in the human temporal cortex during the wake-sleep cycle.\nOur analysis shows that the Ising model captures neuronal collective behavior\nmuch better than the independent model during wakefulness, light sleep, and\ndeep sleep when both excitatory (E) and inhibitory (I) neurons are modeled;\nignoring the inhibitory effects of I-neurons dramatically overestimates\nsynchrony among E-neurons. Furthermore, information-theoretic measures reveal\nthat the Ising model explains about 80%-95% of the correlations, depending on\nsleep state and neuron type. Thermodynamic measures show signatures of\ncriticality, although we take this with a grain of salt as it may be merely a\nreflection of long-range neural correlations.\n", "title": "Ensemble Inhibition and Excitation in the Human Cortex: an Ising Model Analysis with Uncertainties" }
null
null
[ "Quantitative Biology" ]
null
true
null
13953
null
Validated
null
null
null
{ "abstract": " We study the smooth structure of convex functions by generalizing a powerful\nconcept so-called self-concordance introduced by Nesterov and Nemirovskii in\nthe early 1990s to a broader class of convex functions, which we call\ngeneralized self-concordant functions. This notion allows us to develop a\nunified framework for designing Newton-type methods to solve convex optimiza-\ntion problems. The proposed theory provides a mathematical tool to analyze both\nlocal and global convergence of Newton-type methods without imposing\nunverifiable assumptions as long as the un- derlying functionals fall into our\ngeneralized self-concordant function class. First, we introduce the class of\ngeneralized self-concordant functions, which covers standard self-concordant\nfunctions as a special case. Next, we establish several properties and key\nestimates of this function class, which can be used to design numerical\nmethods. Then, we apply this theory to develop several Newton-type methods for\nsolving a class of smooth convex optimization problems involving the\ngeneralized self- concordant functions. We provide an explicit step-size for\nthe damped-step Newton-type scheme which can guarantee a global convergence\nwithout performing any globalization strategy. We also prove a local quadratic\nconvergence of this method and its full-step variant without requiring the\nLipschitz continuity of the objective Hessian. Then, we extend our result to\ndevelop proximal Newton-type methods for a class of composite convex\nminimization problems involving generalized self-concordant functions. We also\nachieve both global and local convergence without additional assumption.\nFinally, we verify our theoretical results via several numerical examples, and\ncompare them with existing methods.\n", "title": "Generalized Self-Concordant Functions: A Recipe for Newton-Type Methods" }
null
null
null
null
true
null
13954
null
Default
null
null
null
{ "abstract": " In this paper, we study joint functional calculus for commuting $n$-tuple of\nRitt operators. We provide an equivalent characterisation of boundedness for\njoint functional calculus for Ritt operators on $L^p$-spaces, $1< p<\\infty$. We\nalso investigate joint similarity problem and joint bounded functional calculus\non non-commutative $L^p$-spaces for $n$-tuple of Ritt operators. We get our\nresults by proving a suitable multivariable transfer principle between\nsectorial and Ritt operators as well as an appropriate joint dilation result in\na general setting.\n", "title": "On Joint Functional Calculus For Ritt Operators" }
null
null
null
null
true
null
13955
null
Default
null
null
null
{ "abstract": " Full ranges of both hybrid plasmon-mode dispersions and their damping are\nstudied systematically by our recently developed mean-field theory in open\nsystems involving a conducting substrate and a two-dimensional (2D) material\nwith a buckled honeycomb lattice, such as silicene, germanene, and a group\n\\rom{4} dichalcogenide as well. In this hybrid system, the single plasmon mode\nfor a free-standing 2D layer is split into one acoustic-like and one\noptical-like mode, leading to a dramatic change in the damping of plasmon\nmodes. In comparison with gapped graphene, critical features associated with\nplasmon modes and damping in silicene and molybdenum disulfide are found with\nvarious spin-orbit and lattice asymmetry energy bandgaps, doping types and\nlevels, and coupling strengths between 2D materials and the conducting\nsubstrate. The obtained damping dependence on both spin and valley degrees of\nfreedom is expected to facilitate measuring the open-system dielectric property\nand the spin-orbit coupling strength of individual 2D materials. The unique\nlinear dispersion of the acoustic-like plasmon mode introduces additional\ndamping from the intraband particle-hole modes which is absent for a\nfree-standing 2D material layer, and the use of molybdenum disulfide with a\nlarge bandgap simultaneously suppresses the strong damping from the interband\nparticle-hole modes.\n", "title": "Controlling plasmon modes and damping in buckled two-dimensional material open systems" }
null
null
null
null
true
null
13956
null
Default
null
null
null
{ "abstract": " It is well known, thanks to Lax-Wendroff theorem, that the local conservation\nof a numerical scheme for a conservative hyperbolic system is a simple and\nsystematic way to guarantee that, if stable, a scheme will provide a sequence\nof solutions that will converge to a weak solution of the continuous problem.\nIn [1], it is shown that a nonconservative scheme will not provide a good\nsolution. The question of using, nevertheless, a nonconservative formulation of\nthe system and getting the correct solution has been a long-standing debate. In\nthis paper, we show how get a relevant weak solution from a pressure-based\nformulation of the Euler equations of fluid mechanics. This is useful when\ndealing with nonlinear equations of state because it is easier to compute the\ninternal energy from the pressure than the opposite. This makes it possible to\nget oscillation free solutions, contrarily to classical conservative methods.\nAn extension to multiphase flows is also discussed, as well as a\nmultidimensional extension.\n", "title": "A high-order nonconservative approach for hyperbolic equations in fluid dynamics" }
null
null
null
null
true
null
13957
null
Default
null
null
null
{ "abstract": " Environmental changes, failures, collisions or even terrorist attacks can\ncause serious malfunctions of the delivery systems. We have presented a novel\napproach improving resilience of Autonomous Moving Platforms AMPs. The approach\nis based on multi-level state diagrams describing environmental trigger\nspecifications, movement actions and synchronization primitives. The upper\nlevel diagrams allowed us to model advanced interactions between autonomous\nAMPs and detect irregularities such as deadlocks live-locks etc. The techniques\nwere presented to verify and analyze combined AMPs' behaviors using model\nchecking technique. The described system, Dedan verifier, is still under\ndevelopment. In the near future, a graphical form of verified system\nrepresentation is planned.\n", "title": "Improving Resilience of Autonomous Moving Platforms by Real Time Analysis of Their Cooperation" }
null
null
[ "Computer Science" ]
null
true
null
13958
null
Validated
null
null
null
{ "abstract": " We compute the free energy of the planar monomer-dimer model. Unlike the\nclassical planar dimer model, an exact solution is not known in this case. Even\nthe computation of the low-density power series expansion requires heavy and\nnontrivial computations. Despite of the exponential computational complexity,\nwe compute almost three times more terms than were previously known. Such an\nexpansion provides both lower and upper bound for the free energy, and allows\nto obtain more accurate numerical values than previously possible. We expect\nthat our methods can be applied to other similar problems.\n", "title": "Power series expansions for the planar monomer-dimer problem" }
null
null
[ "Computer Science" ]
null
true
null
13959
null
Validated
null
null
null
{ "abstract": " In this paper, we consider several compression techniques for the language\nmodeling problem based on recurrent neural networks (RNNs). It is known that\nconventional RNNs, e.g, LSTM-based networks in language modeling, are\ncharacterized with either high space complexity or substantial inference time.\nThis problem is especially crucial for mobile applications, in which the\nconstant interaction with the remote server is inappropriate. By using the Penn\nTreebank (PTB) dataset we compare pruning, quantization, low-rank\nfactorization, tensor train decomposition for LSTM networks in terms of model\nsize and suitability for fast inference.\n", "title": "Neural Networks Compression for Language Modeling" }
null
null
null
null
true
null
13960
null
Default
null
null
null
{ "abstract": " Relativistic effects in the non-resonant two-photon K-shell ionization of\nneutral atoms are studied theoretically within the framework of second-order\nperturbation theory. The non-relativistic results are compared with the\nrelativistic calculations in the dipole and no-pair approximations as well as\nwith the complete relativistic approach. The calculations are performed in both\nvelocity and length gauges. Our results show a significant decrease of the\ntotal cross section for heavy atoms as compared to the non-relativistic\ntreatment, which is mainly due to the relativistic wavefunction contraction.\nThe effects of higher multipoles and negative continuum energy states\ncounteract the relativistic contraction contribution, but are generally much\nweaker. While the effects beyond the dipole approximation are equally important\nin both gauges, the inclusion of negative continuum energy states visibly\ncontributes to the total cross section only in the velocity gauge.\n", "title": "Relativistic effects in the non-resonant two-photon K-shell ionization of neutral atoms" }
null
null
null
null
true
null
13961
null
Default
null
null
null
{ "abstract": " This article is concerned with the asymptotic behavior of certain sequences\nof ideals in rings of prime characteristic. These sequences, which we call\n$p$-families of ideals, are ubiquitous in prime characteristic commutative\nalgebra (e.g., they occur naturally in the theories of tight closure,\nHilbert-Kunz multiplicity, and $F$-signature). We associate to each $p$-family\nof ideals an object in Euclidean space that is analogous to the Newton-Okounkov\nbody of a graded family of ideals, which we call a $p$-body. Generalizing the\nmethods used to establish volume formulas for the Hilbert-Kunz multiplicity and\n$F$-signature of semigroup rings, we relate the volume of a $p$-body to a\ncertain asymptotic invariant determined by the corresponding $p$-family of\nideals. We apply these methods to obtain new existence results for limits in\npositive characteristic, an analogue of the Brunn-Minkowski theorem for\nHilbert-Kunz multiplicity, and a uniformity result concerning the positivity of\na $p$-family.\n", "title": "Local Okounkov bodies and limits in prime characteristic" }
null
null
[ "Mathematics" ]
null
true
null
13962
null
Validated
null
null
null
{ "abstract": " We show that 2-dimensional systolic complexes are quasi-isometric to quadric\ncomplexes with flat intervals. We use this fact along with the weight function\nof Brodzki, Campbell, Guentner, Niblo and Wright to prove that 2-dimensional\nsystolic complexes satisfy Property A.\n", "title": "Two-Dimensional Systolic Complexes Satisfy Property A" }
null
null
null
null
true
null
13963
null
Default
null
null
null
{ "abstract": " The notion of computer capacity was proposed in 2012, and this quantity has\nbeen estimated for computers of different kinds.\nIn this paper we show that, when designing new processors, the manufacturers\nchange the parameters that affect the computer capacity. This allows us to\npredict the values of parameters of future processors. As the main example we\nuse Intel processors, due to the accessibility of detailed description of all\ntheir technical characteristics.\n", "title": "Application of the Computer Capacity to the Analysis of Processors Evolution" }
null
null
null
null
true
null
13964
null
Default
null
null
null
{ "abstract": " We construct the base $2$ expansion of an absolutely normal real number $x$\nso that, for every integer $b$ greater than or equal to $2$, the discrepancy\nmodulo $1$ of the sequence $(b^0 x, b^1 x, b^2 x , \\ldots)$ is essentially the\nsame as that realized by almost all real numbers.\n", "title": "On absolutely normal numbers and their discrepancy estimate" }
null
null
null
null
true
null
13965
null
Default
null
null
null
{ "abstract": " Computational ghost imaging is a robust and compact system that has drawn\nwide attentions over the last two decades. Multispectral imaging possesses\nspatial and spectral resolving abilities, is very useful for surveying scenes\nand extracting detailed information. Existing multispectral imagers mostly\nutilize narrow band filters or dispersive optical devices to separate lights of\ndifferent wavelengths, and then use multiple bucket detectors or an array\ndetector to record them separately. Here, we propose a novel multispectral\nghost imaging method that uses one single bucket detector with multiplexed\nillumination to produce colored image. The multiplexed illumination patterns\nare produced by three binary encoded matrices (corresponding to red, green,\nblue colored information, respectively) and random patterns. The results of\nsimulation and experiment have verified that our method can be effective to\nrecover the colored object. Our method has two major advantages: one is that\nthe binary encoded matrices as cipher keys can protect the security of private\ncontents; the other is that multispectral images are produced simultaneously by\none single-pixel detector, which significantly reduces the amount of the data\nacquisition.\n", "title": "Multispectral computational ghost imaging with multiplexed illumination" }
null
null
null
null
true
null
13966
null
Default
null
null
null
{ "abstract": " Estimated connectomes by the means of neuroimaging techniques have enriched\nour knowledge of the organizational properties of the brain leading to the\ndevelopment of network-based clinical diagnostics. Unfortunately, to date, many\nof those network-based clinical diagnostics tools, based on the mere\ndescription of isolated instances of observed connectomes are noisy estimates\nof the true connectivity network. Modeling brain connectivity networks is\ntherefore important to better explain the functional organization of the brain\nand allow inference of specific brain properties. In this report, we present\npilot results on the modeling of combined MEG and fMRI neuroimaging data\nacquired during an n-back memory task experiment. We adopted a pooled\nExponential Random Graph Model (ERGM) as a network statistical model to capture\nthe underlying process in functional brain networks of 9 subjects MEG and fMRI\ndata out of 32 during a 0-back vs 2-back memory task experiment. Our results\nsuggested strong evidence that all the functional connectomes of the 9 subjects\nhave small world properties. A group level comparison using comparing the\nconditions pairwise showed no significant difference in the functional\nconnectomes across the subjects. Our pooled ERGMs successfully reproduced\nimportant brain properties such as functional segregation and functional\nintegration. However, the ERGMs reproducing the functional segregation of the\nbrain networks discriminated between the 0-back and 2-back conditions while the\nmodels reproducing both properties failed to successfully discriminate between\nboth conditions. Our results are promising and would improve in robustness with\na larger sample size. Nevertheless, our pilot results tend to support previous\nfindings that functional segregation and integration are sufficient to\nstatistically reproduce the main properties of brain network.\n", "title": "Combined MEG and fMRI Exponential Random Graph Modeling for inferring functional Brain Connectivity" }
null
null
null
null
true
null
13967
null
Default
null
null
null
{ "abstract": " We present photometry and spectroscopy of nine Type II-P/L supernovae (SNe)\nwith redshifts in the 0.045 < z < 0.335 range, with a view to re-examining\ntheir utility as distance indicators. Specifically, we apply the expanding\nphotosphere method (EPM) and the standardized candle method (SCM) to each\ntarget, and find that both methods yield distances that are in reasonable\nagreement with each other. The current record-holder for the highest-redshift\nspectroscopically confirmed SN II-P is PS1-13bni (z = 0.335 +0.009 -0.012), and\nillustrates the promise of Type II SNe as cosmological tools. We updated\nexisting EPM and SCM Hubble diagrams by adding our sample to those previously\npublished. Within the context of Type II SN distance measuring techniques, we\ninvestigated two related questions. First, we explored the possibility of\nutilising spectral lines other than the traditionally used Fe II 5169 to infer\nthe photospheric velocity of SN ejecta. Using local well-observed objects, we\nderive an epoch-dependent relation between the strong Balmer line and Fe II\n5169 velocities that is applicable 30 to 40 days post-explosion. Motivated in\npart by the continuum of key observables such as rise time and decline rates\nexhibited from II-P to II-L SNe, we assessed the possibility of using\nHubble-flow Type II-L SNe as distance indicators. These yield similar distances\nas the Type II-P SNe. Although these initial results are encouraging, a\nsignificantly larger sample of SNe II-L would be required to draw definitive\nconclusions.\n", "title": "An updated Type II supernova Hubble diagram" }
null
null
null
null
true
null
13968
null
Default
null
null
null
{ "abstract": " The Sharing Economy (SE) is a growing ecosystem focusing on peer-to-peer\nenterprise. In the SE the information available to assist individuals (users)\nin making decisions focuses predominantly on community generated trust and\nreputation information. However, how such information impacts user judgement is\nstill being understood. To explore such effects, we constructed an artificial\nSE accommodation platform where we varied the elements related to hosts'\ndigital identity, measuring users' perceptions and decisions to interact.\nAcross three studies, we find that trust and reputation information increases\nnot only the users' perceived trustworthiness, credibility, and sociability of\nhosts, but also the propensity to rent a private room in their home. This\neffect is seen when providing users both with complete profiles and profiles\nwith partial user-selected information. Closer investigations reveal that three\nelements relating to the host's digital identity are sufficient to produce such\npositive perceptions and increased rental decisions, regardless of which three\nelements are presented. Our findings have relevant implications for human\njudgment and privacy in the SE, and question its current culture of ever\nincreasing information-sharing.\n", "title": "Digital Identity: The Effect of Trust and Reputation Information on User Judgement in the Sharing Economy" }
null
null
null
null
true
null
13969
null
Default
null
null
null
{ "abstract": " The hypothesis that computational models can be reliable enough to be adopted\nin prognosis and patient care is revolutionizing healthcare. Deep learning, in\nparticular, has been a game changer in building predictive models, thereby\nleading to community-wide data curation efforts. However, due to the inherent\nvariabilities in population characteristics and biological systems, these\nmodels are often biased to the training datasets. This can be limiting when\nmodels are deployed in new environments, particularly when there are systematic\ndomain shifts not known a priori. In this paper, we formalize these challenges\nby emulating a large class of domain shifts that can occur in clinical\nsettings, and argue that evaluating the behavior of predictive models in light\nof those shifts is an effective way of quantifying the reliability of clinical\nmodels. More specifically, we develop an approach for building challenging\nscenarios, based on analysis of \\textit{disease landscapes}, and utilize\nunsupervised domain adaptation to compensate for the domain shifts. Using the\nopenly available MIMIC-III EHR dataset for phenotyping, we generate a large\nclass of scenarios and evaluate the ability of deep clinical models in those\ncases. For the first time, our work sheds light into data regimes where deep\nclinical models can fail to generalize, due to significant changes in the\ndisease landscapes between the source and target landscapes. This study\nemphasizes the need for sophisticated evaluation mechanisms driven by\nreal-world domain shifts to build effective AI solutions for healthcare.\n", "title": "Can Deep Clinical Models Handle Real-World Domain Shifts?" }
null
null
null
null
true
null
13970
null
Default
null
null
null
{ "abstract": " We report on the growth of NdFeAs(O,F) thin films on [001]-tilt MgO bicrystal\nsubstrates with misorientation angle theta_GB=6°, 12°, 24° and\n45°, and their inter- and intra-grain transport properties. X-ray\ndiffraction study confirmed that all our NdFeAs(O,F) films are epitaxially\ngrown on the MgO bicrystals. The theta_GB dependence of the inter-grain\ncritical current density Jc shows that, unlike Co-doped BaFe2As2 and Fe(Se,Te),\nits decay with theta_GB is rather significant. As a possible reason of this\nresult, fluorine may have diffused preferentially to the grain boundary region\nand eroded the crystal structure.\n", "title": "Fabrication of grain boundary junctions using NdFeAs(O,F) superconducting thin films" }
null
null
null
null
true
null
13971
null
Default
null
null
null
{ "abstract": " We measure the mass function for a sample of 840 young star clusters with\nages between 10-300 Myr observed by the Panchromatic Hubble Andromeda Treasury\n(PHAT) survey in M31. The data show clear evidence of a high-mass truncation:\nonly 15 clusters more massive than $10^4$ $M_{\\odot}$ are observed, compared to\n$\\sim$100 expected for a canonical $M^{-2}$ pure power-law mass function with\nthe same total number of clusters above the catalog completeness limit.\nAdopting a Schechter function parameterization, we fit a characteristic\ntruncation mass of $M_c = 8.5^{+2.8}_{-1.8} \\times 10^3$ $M_{\\odot}$. While\nprevious studies have measured cluster mass function truncations, the\ncharacteristic truncation mass we measure is the lowest ever reported.\nCombining this M31 measurement with previous results, we find that the cluster\nmass function truncation correlates strongly with the characteristic star\nformation rate surface density of the host galaxy, where $M_c \\propto$ $\\langle\n\\Sigma_{\\mathrm{SFR}} \\rangle^{\\sim1.1}$. We also find evidence that suggests\nthe observed $M_c$-$\\Sigma_{\\mathrm{SFR}}$ relation also applies to globular\nclusters, linking the two populations via a common formation pathway. If so,\nglobular cluster mass functions could be useful tools for constraining the star\nformation properties of their progenitor host galaxies in the early Universe.\n", "title": "Panchromatic Hubble Andromeda Treasury XVIII. The High-mass Truncation of the Star Cluster Mass Function" }
null
null
null
null
true
null
13972
null
Default
null
null
null
{ "abstract": " We show that the Galois cohomology groups of $p$-adic representations of a\ndirect power of $\\operatorname{Gal}(\\overline{\\mathbb{Q}_p}/\\mathbb{Q}_p)$ can\nbe computed via the generalization of Herr's complex to multivariable\n$(\\varphi,\\Gamma)$-modules. Using Tate duality and a pairing for multivariable\n$(\\varphi,\\Gamma)$-modules we extend this to analogues of the Iwasawa\ncohomology. We show that all $p$-adic representations of a direct power of\n$\\operatorname{Gal}(\\overline{\\mathbb{Q}_p}/\\mathbb{Q}_p)$ are overconvergent\nand, moreover, passing to overconvergent multivariable\n$(\\varphi,\\Gamma)$-modules is an equivalence of categories. Finally, we prove\nthat the overconvergent Herr complex also computes the Galois cohomology\ngroups.\n", "title": "Cohomology and overconvergence for representations of powers of Galois groups" }
null
null
null
null
true
null
13973
null
Default
null
null
null
{ "abstract": " This paper is a continuation of \\ct{cmf16} where an efficient algorithm for\ncomputing the maximal eigenpair was introduced first for tridiagonal matrices\nand then extended to the irreducible matrices with nonnegative off-diagonal\nelements. This paper introduces two global algorithms for computing the maximal\neigenpair in a rather general setup, including even a class of real (with some\nnegative off-diagonal elements) or complex matrices.\n", "title": "Global algorithms for maximal eigenpair" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
13974
null
Validated
null
null
null
{ "abstract": " We investigate the time evolution of the entanglement entropy of coupled\nsingle-mode Bose-Einstein condensates in a double well potential at $T=0$\ntemperature, by combining numerical results with analytical approximations. We\nfind that the coherent oscillations of the condensates result in entropy\noscillations on the top of a linear entropy generation at short time scales.\nDue to dephasing, the entropy eventually saturates to a stationary value, in\nspite of the lack of equilibration. We show that this long time limit of the\nentropy reflects the semiclassical dynamics of the system, revealing the\nself-trapping phase transition of the condensates at large interaction strength\nby a sudden entropy jump. We compare the stationary limit of the entropy to the\nprediction of a classical microcanonical ensemble, and find surprisingly good\nagreement in spite of the non-equilibrium state of the system. Our predictions\nshould be experimentally observable on a Bose-Einstein condensate in a double\nwell potential or on a two-component condensate with inter-state coupling.\n", "title": "Entanglement and entropy production in coupled single-mode Bose-Einstein condensates" }
null
null
null
null
true
null
13975
null
Default
null
null
null
{ "abstract": " We proposed a new penalized method in this paper to solve sparse Poisson\nRegression problems. Being different from $\\ell_1$ penalized log-likelihood\nestimation, our new method can be viewed as penalized weighted score function\nmethod. We show that under mild conditions, our estimator is $\\ell_1$\nconsistent and the tuning parameter can be pre-specified, which shares the same\ngood property of the square-root Lasso.\n", "title": "Sparse Poisson Regression with Penalized Weighted Score Function" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
13976
null
Validated
null
null
null
{ "abstract": " The upcoming SKA1-Low radio interferometer will be sensitive enough to\nproduce tomographic imaging data of the redshifted 21-cm signal from the Epoch\nof Reionization. Due to the non-Gaussian distribution of the signal, a power\nspectrum analysis alone will not provide a complete description of its\nproperties. Here, we consider an additional metric which could be derived from\ntomographic imaging data, namely the bubble size distribution of ionized\nregions. We study three methods that have previously been used to characterize\nbubble size distributions in simulation data for the hydrogen ionization\nfraction - the spherical-average, mean-free-path and friends-of-friends methods\n- and apply them to simulated 21-cm data cubes. Our simulated data cubes have\nthe (sensitivity-dictated) resolution expected for the SKA1-Low reionization\nexperiment and we study the impact of both the light-cone and redshift space\ndistortion effects. To identify ionized regions in the 21-cm data we introduce\na new, self-adjusting thresholding approach based on the K-Means algorithm. We\nfind that the fraction of ionized cells identified in this way consistently\nfalls below the mean volume-averaged ionized fraction. From a comparison of the\nthree bubble size methods, we conclude that all three methods are useful, but\nthat the mean-free-path method performs best in terms of tracking the progress\nof reionization and separating different reionization scenarios. The light-cone\neffect is found to affect data spanning more than about 10~MHz in frequency\n($\\Delta z\\sim0.5$). We find that redshift space distortions only marginally\naffect the bubble size distributions.\n", "title": "Bubble size statistics during reionization from 21-cm tomography" }
null
null
null
null
true
null
13977
null
Default
null
null
null
{ "abstract": " Neural networks have been widely used as predictive models to fit data\ndistribution, and they could be implemented through learning a collection of\nsamples. In many applications, however, the given dataset may contain noisy\nsamples or outliers which may result in a poor learner model in terms of\ngeneralization. This paper contributes to a development of robust stochastic\nconfiguration networks (RSCNs) for resolving uncertain data regression\nproblems. RSCNs are built on original stochastic configuration networks with\nweighted least squares method for evaluating the output weights, and the input\nweights and biases are incrementally and randomly generated by satisfying with\na set of inequality constrains. The kernel density estimation (KDE) method is\nemployed to set the penalty weights for each training samples, so that some\nnegative impacts, caused by noisy data or outliers, on the resulting learner\nmodel can be reduced. The alternating optimization technique is applied for\nupdating a RSCN model with improved penalty weights computed from the kernel\ndensity estimation function. Performance evaluation is carried out by a\nfunction approximation, four benchmark datasets and a case study on engineering\napplication. Comparisons to other robust randomised neural modelling\ntechniques, including the probabilistic robust learning algorithm for neural\nnetworks with random weights and improved RVFL networks, indicate that the\nproposed RSCNs with KDE perform favourably and demonstrate good potential for\nreal-world applications.\n", "title": "Robust Stochastic Configuration Networks with Kernel Density Estimation" }
null
null
null
null
true
null
13978
null
Default
null
null
null
{ "abstract": " We consider Jacobi matrices $J$ whose parameters have the power asymptotics\n$\\rho_n=n^{\\beta_1} \\left( x_0 + \\frac{x_1}{n} + {\\rm\nO}(n^{-1-\\epsilon})\\right)$ and $q_n=n^{\\beta_2} \\left( y_0 + \\frac{y_1}{n} +\n{\\rm O}(n^{-1-\\epsilon})\\right)$ for the off-diagonal and diagonal,\nrespectively. We show that for $\\beta_1 > \\beta_2$, or $\\beta_1=\\beta_2$ and\n$2x_0 > |y_0|$, the matrix $J$ is in the limit circle case and the convergence\nexponent of its spectrum is $1/\\beta_1$. Moreover, we obtain upper and lower\nbounds for the upper density of the spectrum. When the parameters of the matrix\n$J$ have a power asymptotic with one more term, we characterise the occurrence\nof the limit circle case completely (including the exceptional case $\\lim_{n\\to\n\\infty} |q_n|\\big/ \\rho_n = 2$) and determine the convergence exponent in\nalmost all cases.\n", "title": "Density of the spectrum of Jacobi matrices with power asymptotics" }
null
null
null
null
true
null
13979
null
Default
null
null
null
{ "abstract": " In the present work we analyze some necessary conditions for ignition of\nsolid energetic materials by low velocity impact ignition mechanism. Basing on\nreported results of {\\it ab initio} computations we assume that the energetic\nactivation barriers for the primary endothermic dissociation in some energetic\nmaterials may be locally lowered due to the effect of shear strain caused by\nthe impact. We show that the ignition may be initiated in regions with the\nreduced activation barriers, even at moderately low exothermicity of the\nsubsequent exothermic reactions thus suggesting that the above regions may\nserve as \"hot spots\" for the ignition. We apply our results to analyze initial\nsteps of ignition in DADNE and TATB molecular crystals.\n", "title": "Modeling of a self-sustaining ignition in a solid energetic material" }
null
null
[ "Physics" ]
null
true
null
13980
null
Validated
null
null
null
{ "abstract": " While the polls have been the most trusted source for election predictions\nfor decades, in the recent presidential election they were called inaccurate\nand biased. How inaccurate were the polls in this election and can social media\nbeat the polls as an accurate election predictor? Polls from several news\noutlets and sentiment analysis on Twitter data were used, in conjunction with\nthe results of the election, to answer this question and outline further\nresearch on the best method for predicting the outcome of future elections.\n", "title": "Election Bias: Comparing Polls and Twitter in the 2016 U.S. Election" }
null
null
null
null
true
null
13981
null
Default
null
null
null
{ "abstract": " Seidel introduced the notion of a Fukaya category `relative to an ample\ndivisor', explained that it is a deformation of the Fukaya category of the\naffine variety that is the complement of the divisor, and showed how the\nrelevant deformation theory is controlled by the symplectic cohomology of the\ncomplement. We elaborate on Seidel's definition of the relative Fukaya\ncategory, and give a criterion under which the deformation is versal.\n", "title": "Versality of the relative Fukaya category" }
null
null
[ "Mathematics" ]
null
true
null
13982
null
Validated
null
null
null
{ "abstract": " The thermalization of hot carriers and phonons gives direct insight into the\nscattering processes that mediate electrical and thermal transport. Obtaining\nthe scattering rates for both hot carriers and phonons currently requires\nmultiple measurements with incommensurate timescales. Here, transient\nextreme-ultraviolet (XUV) spectroscopy on the silicon 2p core level at 100 eV\nis used to measure hot carrier and phonon thermalization in Si(100) from tens\nof femtoseconds to 200 ps following photoexcitation of the indirect transition\nto the {\\Delta} valley at 800 nm. The ground state XUV spectrum is first\ntheoretically predicted using a combination of a single plasmon pole model and\nthe Bethe-Salpeter equation (BSE) with density functional theory (DFT). The\nexcited state spectrum is predicted by incorporating the electronic effects of\nphoto-induced state-filling, broadening, and band-gap renormalization into the\nground state XUV spectrum. A time-dependent lattice deformation and expansion\nis also required to describe the excited state spectrum. The kinetics of these\nstructural components match the kinetics of phonons excited from the\nelectron-phonon and phonon-phonon scattering processes following\nphotoexcitation. Separating the contributions of electronic and structural\neffects on the transient XUV spectra allows the carrier population, the\npopulation of phonons involved in inter- and intra-valley electron-phonon\nscattering, and the population of phonons involved in phonon-phonon scattering\nto be quantified as a function of delay time.\n", "title": "Hot Phonon and Carrier Relaxation in Si(100) Determined by Transient Extreme Ultraviolet Spectroscopy" }
null
null
null
null
true
null
13983
null
Default
null
null
null
{ "abstract": " Given any polynomial $p$ in $C[X]$, we show that the set of irreducible\nmatrices satisfying $p(A)=0$ is finite. In the specific case $p(X)=X^2-nX$, we\ncount the number of irreducible matrices in this set and analyze the arising\nsequences and their asymptotics. Such matrices turn out to be related to\ngeneralized compositions and generalized partitions.\n", "title": "Counting Quasi-Idempotent Irreducible Integral Matrices" }
null
null
null
null
true
null
13984
null
Default
null
null
null
{ "abstract": " Significant training is required to visually interpret neonatal EEG signals.\nThis study explores alternative sound-based methods for EEG interpretation\nwhich are designed to allow for intuitive and quick differentiation between\nhealthy background activity and abnormal activity such as seizures. A novel\nmethod based on frequency and amplitude modulation (FM/AM) is presented. The\nalgorithm is tuned to facilitate the audio domain perception of rhythmic\nactivity which is specific to neonatal seizures. The method is compared with\nthe previously developed phase vocoder algorithm for different time compressing\nfactors. A survey is conducted amongst a cohort of non-EEG experts to\nquantitatively and qualitatively examine the performance of sound-based methods\nin comparison with the visual interpretation. It is shown that both\nsonification methods perform similarly well, with a smaller inter-observer\nvariability in comparison with visual. A post-survey analysis of results is\nperformed by examining the sensitivity of the ear to frequency evolution in\naudio.\n", "title": "On sound-based interpretation of neonatal EEG" }
null
null
[ "Statistics", "Quantitative Biology" ]
null
true
null
13985
null
Validated
null
null
null
{ "abstract": " Millions of users routinely use Google to log in to websites supporting OAuth\n2.0 or OpenID Connect; the security of OAuth 2.0 and OpenID Connect is\ntherefore of critical importance. As revealed in previous studies, in practice\nRPs often implement OAuth 2.0 incorrectly, and so many real-world OAuth 2.0 and\nOpenID Connect systems are vulnerable to attack. However, users of such flawed\nsystems are typically unaware of these issues, and so are at risk of attacks\nwhich could result in unauthorised access to the victim user's account at an\nRP. In order to address this threat, we have developed OAuthGuard, an OAuth 2.0\nand OpenID Connect vulnerability scanner and protector, that works with RPs\nusing Google OAuth 2.0 and OpenID Connect services. It protects user security\nand privacy even when RPs do not implement OAuth 2.0 or OpenID Connect\ncorrectly. We used OAuthGuard to survey the 1000 top-ranked websites supporting\nGoogle sign-in for the possible presence of five OAuth 2.0 or OpenID Connect\nsecurity and privacy vulnerabilities, of which one has not previously been\ndescribed in the literature. Of the 137 sites in our study that employ Google\nSign-in, 69 were found to suffer from at least one serious vulnerability.\nOAuthGuard was able to protect user security and privacy for 56 of these 69\nRPs, and for the other 13 was able to warn users that they were using an\ninsecure implementation.\n", "title": "OAuthGuard: Protecting User Security and Privacy with OAuth 2.0 and OpenID Connect" }
null
null
null
null
true
null
13986
null
Default
null
null
null
{ "abstract": " Evolutionary game dynamics in structured populations are strongly affected by\nupdating rules. Previous studies usually focus on imitation-based rules, which\nrely on payoff information of social peers. Recent behavioral experiments\nsuggest that whether individuals use such social information for strategy\nupdating may be crucial to the outcomes of social interactions. This hints at\nthe importance of considering updating rules without dependence on social\npeers' payoff information, which, however, is rarely investigated. Here, we\nstudy aspiration-based self-evaluation rules, with which individuals\nself-assess the performance of strategies by comparing own payoffs with an\nimaginary value they aspire, called the aspiration level. We explore the fate\nof strategies on population structures represented by graphs or networks. Under\nweak selection, we analytically derive the condition for strategy dominance,\nwhich is found to coincide with the classical condition of risk-dominance. This\ncondition holds for all networks and all distributions of aspiration levels,\nand for individualized ways of self-evaluation. Our condition can be\nintuitively interpreted: one strategy prevails over the other if the strategy\nbrings more satisfaction to individuals than the other does. Our work thus\nsheds light on the intrinsic difference between evolutionary dynamics induced\nby aspiration-based and imitation-based rules.\n", "title": "Aspiration dynamics generate robust predictions in structured populations" }
null
null
[ "Quantitative Biology" ]
null
true
null
13987
null
Validated
null
null
null
{ "abstract": " In supervised machine learning, an agent is typically trained once and then\ndeployed. While this works well for static settings, robots often operate in\nchanging environments and must quickly learn new things from data streams. In\nthis paradigm, known as streaming learning, a learner is trained online, in a\nsingle pass, from a data stream that cannot be assumed to be independent and\nidentically distributed (iid). Streaming learning will cause conventional deep\nneural networks (DNNs) to fail for two reasons: 1) they need multiple passes\nthrough the entire dataset; and 2) non-iid data will cause catastrophic\nforgetting. An old fix to both of these issues is rehearsal. To learn a new\nexample, rehearsal mixes it with previous examples, and then this mixture is\nused to update the DNN. Full rehearsal is slow and memory intensive because it\nstores all previously observed examples, and its effectiveness for preventing\ncatastrophic forgetting has not been studied in modern DNNs. Here, we describe\nthe ExStream algorithm for memory efficient rehearsal and compare it to\nalternatives. We find that full rehearsal can eliminate catastrophic forgetting\nin a variety of streaming learning settings, with ExStream performing well\nusing far less memory and computation.\n", "title": "Memory Efficient Experience Replay for Streaming Learning" }
null
null
[ "Statistics" ]
null
true
null
13988
null
Validated
null
null
null
{ "abstract": " We have undertaken an algorithmic search for new integrable\nsemi-discretizations of physically relevant nonlinear partial differential\nequations. The search is performed by using a compatibility condition for the\ndiscrete Lax operators and symbolic computations. We have discovered a new\nintegrable system of coupled nonlinear Schrodinger equations which combines\nelements of the Ablowitz-Ladik lattice and the triangular-lattice ribbon\nstudied by Vakhnenko. We show that the continuum limit of the new integrable\nsystem is given by uncoupled complex modified Korteweg-de Vries equations and\nuncoupled nonlinear Schrodinger equations.\n", "title": "New integrable semi-discretizations of the coupled nonlinear Schrodinger equations" }
null
null
null
null
true
null
13989
null
Default
null
null
null
{ "abstract": " Non-negative matrix factorization is a basic tool for decomposing data into\nthe feature and weight matrices under non-negativity constraints, and in\npractice is often solved in the alternating minimization framework. However, it\nis unclear whether such algorithms can recover the ground-truth feature matrix\nwhen the weights for different features are highly correlated, which is common\nin applications. This paper proposes a simple and natural alternating gradient\ndescent based algorithm, and shows that with a mild initialization it provably\nrecovers the ground-truth in the presence of strong correlations. In most\ninteresting cases, the correlation can be in the same order as the highest\npossible. Our analysis also reveals its several favorable features including\nrobustness to noise. We complement our theoretical results with empirical\nstudies on semi-synthetic datasets, demonstrating its advantage over several\npopular methods in recovering the ground-truth.\n", "title": "Provable Alternating Gradient Descent for Non-negative Matrix Factorization with Strong Correlations" }
null
null
null
null
true
null
13990
null
Default
null
null
null
{ "abstract": " The paper solves the problem of optimal portfolio choice when the parameters\nof the asset returns distribution, like the mean vector and the covariance\nmatrix are unknown and have to be estimated by using historical data of the\nasset returns. The new approach employs the Bayesian posterior predictive\ndistribution which is the distribution of the future realization of the asset\nreturns given the observable sample. The parameters of the posterior predictive\ndistributions are functions of the observed data values and, consequently, the\nsolution of the optimization problem is expressed in terms of data only and\ndoes not depend on unknown quantities. In contrast, the optimization problem of\nthe traditional approach is based on unknown quantities which are estimated in\nthe second step leading to a suboptimal solution. We also derive a very useful\nstochastic representation of the posterior predictive distribution whose\napplication leads not only to the solution of the considered optimization\nproblem, but provides the posterior predictive distribution of the optimal\nportfolio return used to construct a prediction interval. A Bayesian efficient\nfrontier, a set of optimal portfolios obtained by employing the posterior\npredictive distribution, is constructed as well. Theoretically and using real\ndata we show that the Bayesian efficient frontier outperforms the sample\nefficient frontier, a common estimator of the set of optimal portfolios known\nto be overoptimistic.\n", "title": "Bayesian mean-variance analysis: Optimal portfolio selection under parameter uncertainty" }
null
null
null
null
true
null
13991
null
Default
null
null
null
{ "abstract": " Statistical inference after model selection requires an inference framework\nthat takes the selection into account in order to be valid. Following recent\nwork on selective inference, we derive analytical expressions for inference\nafter likelihood- or test-based model selection for linear models.\n", "title": "Selective inference after likelihood- or test-based model selection in linear models" }
null
null
null
null
true
null
13992
null
Default
null
null
null
{ "abstract": " We study the problem of sampling a bandlimited graph signal in the presence\nof noise, where the objective is to select a node subset of prescribed\ncardinality that minimizes the signal reconstruction mean squared error (MSE).\nTo that end, we formulate the task at hand as the minimization of MSE subject\nto binary constraints, and approximate the resulting NP-hard problem via\nsemidefinite programming (SDP) relaxation. Moreover, we provide an alternative\nformulation based on maximizing a monotone weak submodular function and propose\na randomized-greedy algorithm to find a sub-optimal subset. We then derive a\nworst-case performance guarantee on the MSE returned by the randomized greedy\nalgorithm for general non-stationary graph signals. The efficacy of the\nproposed methods is illustrated through numerical simulations on synthetic and\nreal-world graphs. Notably, the randomized greedy algorithm yields an\norder-of-magnitude speedup over state-of-the-art greedy sampling schemes, while\nincurring only a marginal MSE performance loss.\n", "title": "Sampling and Reconstruction of Graph Signals via Weak Submodularity and Semidefinite Relaxation" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
13993
null
Validated
null
null
null
{ "abstract": " We present DLTK, a toolkit providing baseline implementations for efficient\nexperimentation with deep learning methods on biomedical images. It builds on\ntop of TensorFlow and its high modularity and easy-to-use examples allow for a\nlow-threshold access to state-of-the-art implementations for typical medical\nimaging problems. A comparison of DLTK's reference implementations of popular\nnetwork architectures for image segmentation demonstrates new top performance\non the publicly available challenge data \"Multi-Atlas Labeling Beyond the\nCranial Vault\". The average test Dice similarity coefficient of $81.5$ exceeds\nthe previously best performing CNN ($75.7$) and the accuracy of the challenge\nwinning method ($79.0$).\n", "title": "DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images" }
null
null
null
null
true
null
13994
null
Default
null
null
null
{ "abstract": " We show that the Verdier quotients can be realized as subfactors by the\nhomotopy theory of additive categories with suspensions developed in\n\\cite{ZWLi2, ZWLi3}. As applications, we develop the homotopy theory of Nakaoka\ntwin cotorsion pairs of triangulated categories and prove that Iyama-Yoshino\ntriangulated subfactors are Verdier quotients under suitable conditions.\n", "title": "A homotopy theory of Nakaoka twin cotorsion pairs" }
null
null
null
null
true
null
13995
null
Default
null
null
null
{ "abstract": " We prove that the Grothendieck rings of category $\\mathcal{C}^{(t)}_Q$ over\nquantum affine algebras $U_q'(\\g^{(t)})$ $(t=1,2)$ associated to each Dynkin\nquiver $Q$ of finite type $A_{2n-1}$ (resp. $D_{n+1}$) is isomorphic to one of\ncategory $\\mathcal{C}_{\\mQ}$ over the Langlands dual $U_q'({^L}\\g^{(2)})$ of\n$U_q'(\\g^{(2)})$ associated to any twisted adapted class $[\\mQ]$ of $A_{2n-1}$\n(resp. $D_{n+1}$). This results provide partial answers of conjectures of\nFrenkel-Hernandez on Langlands duality for finite-dimensional representation of\nquantum affine algebras.\n", "title": "Categorical relations between Langlands dual quantum affine algebras: Doubly laced types" }
null
null
null
null
true
null
13996
null
Default
null
null
null
{ "abstract": " We study the optimal pricing strategy of a monopolist selling homogeneous\ngoods to customers over multiple periods. The customers choose their time of\npurchase to maximize their payoff that depends on their valuation of the\nproduct, the purchase price, and the utility they derive from past purchases of\nothers, termed the network effect. We first show that the optimal price\nsequence is non-decreasing. Therefore, by postponing purchase to future rounds,\ncustomers trade-off a higher utility from the network effects with a higher\nprice. We then show that a customer's equilibrium strategy can be characterized\nby a threshold rule in which at each round a customer purchases the product if\nand only if her valuation exceeds a certain threshold. This implies that\ncustomers face an inference problem regarding the valuations of others, i.e.,\nobserving that a customer has not yet purchased the product, signals that her\nvaluation is below a threshold. We consider a block model of network\ninteractions, where there are blocks of buyers subject to the same network\neffect. A natural benchmark, this model allows us to provide an explicit\ncharacterization of the optimal price sequence asymptotically as the number of\nagents goes to infinity, which notably is linearly increasing in time with a\nslope that depends on the network effect through a scalar given by the sum of\nentries of the inverse of the network weight matrix. Our characterization shows\nthat increasing the \"imbalance\" in the network defined as the difference\nbetween the in and out degree of the nodes increases the revenue of the\nmonopolist. We further study the effects of price discrimination and show that\nin earlier periods monopolist offers lower prices to blocks with higher\nBonacich centrality to encourage them to purchase, which in turn further\nincentivizes other customers to buy in subsequent periods.\n", "title": "Strategic Dynamic Pricing with Network Effects" }
null
null
null
null
true
null
13997
null
Default
null
null
null
{ "abstract": " The present paper is devoted to the description of finite-dimensional\nsemisimple Leibniz algebras over complex numbers, their derivations and\nautomorphisms.\n", "title": "Semisimple Leibniz algebras and their derivations and automorphisms" }
null
null
null
null
true
null
13998
null
Default
null
null
null
{ "abstract": " We derive bounds on the extremal singular values and the condition number of\nNxK, with N>=K, Vandermonde matrices with nodes in the unit disk. The\nmathematical techniques we develop to prove our main results are inspired by a\nlink---first established by by Selberg [1] and later extended by Moitra\n[2]---between the extremal singular values of Vandermonde matrices with nodes\non the unit circle and large sieve inequalities. Our main conceptual\ncontribution lies in establishing a connection between the extremal singular\nvalues of Vandermonde matrices with nodes in the unit disk and a novel large\nsieve inequality involving polynomials in z \\in C with |z|<=1. Compared to\nBazán's upper bound on the condition number [3], which, to the best of our\nknowledge, constitutes the only analytical result---available in the\nliterature---on the condition number of Vandermonde matrices with nodes in the\nunit disk, our bound not only takes a much simpler form, but is also sharper\nfor certain node configurations. Moreover, the bound we obtain can be evaluated\nconsistently in a numerically stable fashion, whereas the evaluation of\nBazán's bound requires the solution of a linear system of equations which has\nthe same condition number as the Vandermonde matrix under consideration and can\ntherefore lead to numerical instability in practice. As a byproduct, our\nresult---when particularized to the case of nodes on the unit circle---slightly\nimproves upon the Selberg-Moitra bound.\n", "title": "Vandermonde Matrices with Nodes in the Unit Disk and the Large Sieve" }
null
null
null
null
true
null
13999
null
Default
null
null
null
{ "abstract": " Stanene has been predicted to be a two-dimensional topological insulator\n(2DTI). Its low-buckled atomic geometry and the enhanced spin-orbit coupling\nare expected to cause a prominent quantum spin hall (QSH) effect. However, most\nof the experimentally grown stanene to date displays a metallic state without a\nreal gap, possibly due to the chemical coupling with the substrate and the\nstress applied by the substrate. Here,we demonstrate an efficient way of tuning\nthe atomic buckling in stanene to open a topologically nontrivial energy gap.\nVia tuning the growth kinetics, we obtain not only the low-buckled 1x1 stanene\nbut also an unexpected high-buckled R3xR3 stanene on the Bi(111) substrate.\nScanning tunneling microscopy (STM) study combined with density functional\ntheory (DFT) calculation confirms that the R3xR3 stanene is a distorted 1x1\nstructure with a high-buckled Sn in every three 1x1 unit cells. The\nhigh-buckled R3xR3 stanene favors a large band inversion at the {\\Gamma} point,\nand the spin orbital coupling open a topologically nontrivial energy gap. The\nexistence of edge states as verified in both STM measurement and DFT\ncalculation further confirms the topology of the R3xR3 stanene. This study\nprovides an alternate way to tune the topology of monolayer 2DTI materials.\n", "title": "High-buckled R3 stanene with topologically nontrivial energy gap" }
null
null
null
null
true
null
14000
null
Default
null
null