text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics
---|---|---|---|---|---|---|---|---|---|---|---|---
null | {
"abstract": " Distributional approximations of (bi--) linear functions of sample\nvariance-covariance matrices play a critical role to analyze vector time\nseries, as they are needed for various purposes, especially to draw inference\non the dependence structure in terms of second moments and to analyze\nprojections onto lower dimensional spaces as those generated by principal\ncomponents. This particularly applies to the high-dimensional case, where the\ndimension $d$ is allowed to grow with the sample size $n$ and may even be\nlarger than $n$. We establish large-sample approximations for such bilinear\nforms related to the sample variance-covariance matrix of a high-dimensional\nvector time series in terms of strong approximations by Brownian motions. The\nresults cover weakly dependent as well as many long-range dependent linear\nprocesses and are valid for uniformly $ \\ell_1 $-bounded projection vectors,\nwhich arise, either naturally or by construction, in many statistical problems\nextensively studied for high-dimensional series. Among those problems are\nsparse financial portfolio selection, sparse principal components, the LASSO,\nshrinkage estimation and change-point analysis for high--dimensional time\nseries, which matter for the analysis of big data and are discussed in greater\ndetail.\n",
"title": "Large-sample approximations for variance-covariance matrices of high-dimensional time series"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 2101 | null | Validated | null | null |
null | {
"abstract": " We introduce a criterion, resilience, which allows properties of a dataset\n(such as its mean or best low rank approximation) to be robustly computed, even\nin the presence of a large fraction of arbitrary additional data. Resilience is\na weaker condition than most other properties considered so far in the\nliterature, and yet enables robust estimation in a broader variety of settings.\nWe provide new information-theoretic results on robust distribution learning,\nrobust estimation of stochastic block models, and robust mean estimation under\nbounded $k$th moments. We also provide new algorithmic results on robust\ndistribution learning, as well as robust mean estimation in $\\ell_p$-norms.\nAmong our proof techniques is a method for pruning a high-dimensional\ndistribution with bounded $1$st moments to a stable \"core\" with bounded $2$nd\nmoments, which may be of independent interest.\n",
"title": "Resilience: A Criterion for Learning in the Presence of Arbitrary Outliers"
} | null | null | null | null | true | null | 2102 | null | Default | null | null |
null | {
"abstract": " Space out of a topological defect of the Abrikosov-Nielsen-Olesen vortex type\nis locally flat but non-Euclidean. If a spinor field is quantized in such a\nspace, then a variety of quantum effects is induced in the vacuum. Basing on\nthe continuum model for long-wavelength electronic excitations, originating in\nthe tight-binding approximation for the nearest neighbor interaction of atoms\nin the crystal lattice, we consider quantum ground state effects in monolayer\nstructures warped into nanocones by a disclination; the nonzero size of the\ndisclination is taken into account, and a boundary condition at the edge of the\ndisclination is chosen to ensure self-adjointness of the Dirac-Weyl Hamiltonian\noperator. In the case of carbon nanocones, we find circumstances when the\nquantum ground state effects are independent of the boundary parameter and the\ndisclination size.\n",
"title": "Non-Euclidean geometry, nontrivial topology and quantum vacuum effects"
} | null | null | null | null | true | null | 2103 | null | Default | null | null |
null | {
"abstract": " An accurate assessment of the risk of extreme environmental events is of\ngreat importance for populations, authorities and the banking/insurance\nindustry. Koch (2017) introduced a notion of spatial risk measure and a\ncorresponding set of axioms which are well suited to analyze the risk due to\nevents having a spatial extent, precisely such as environmental phenomena. The\naxiom of asymptotic spatial homogeneity is of particular interest since it\nallows one to quantify the rate of spatial diversification when the region\nunder consideration becomes large. In this paper, we first investigate the\ngeneral concepts of spatial risk measures and corresponding axioms further. We\nalso explain the usefulness of this theory for the actuarial practice. Second,\nin the case of a general cost field, we especially give sufficient conditions\nsuch that spatial risk measures associated with expectation, variance,\nValue-at-Risk as well as expected shortfall and induced by this cost field\nsatisfy the axioms of asymptotic spatial homogeneity of order 0, -2, -1 and -1,\nrespectively. Last but not least, in the case where the cost field is a\nfunction of a max-stable random field, we mainly provide conditions on both the\nfunction and the max-stable field ensuring the latter properties. Max-stable\nrandom fields are relevant when assessing the risk of extreme events since they\nappear as a natural extension of multivariate extreme-value theory to the level\nof random fields. Overall, this paper improves our understanding of spatial\nrisk measures as well as of their properties with respect to the space variable\nand generalizes many results obtained in Koch (2017).\n",
"title": "Spatial risk measures and rate of spatial diversification"
} | null | null | null | null | true | null | 2104 | null | Default | null | null |
null | {
"abstract": " We answer the question to what extent homotopy (co)limits in categories with\nweak equivalences allow for a Fubini-type interchange law. The main obstacle is\nthat we do not assume our categories with weak equivalences to come equipped\nwith a calculus for homotopy (co)limits, such as a derivator.\n",
"title": "Double Homotopy (Co)Limits for Relative Categories"
} | null | null | null | null | true | null | 2105 | null | Default | null | null |
null | {
"abstract": " We formulate part I of a rigorous theory of ground states for classical,\nfinite, Heisenberg spin systems. The main result is that all ground states can\nbe constructed from the eigenvectors of a real, symmetric matrix with entries\ncomprising the coupling constants of the spin system as well as certain\nLagrange parameters. The eigenvectors correspond to the unique maximum of the\nminimal eigenvalue considered as a function of the Lagrange parameters.\nHowever, there are rare cases where all ground states obtained in this way have\nunphysical dimensions $M>3$ and the theory would have to be extended. Further\nresults concern the degree of additional degeneracy, additional to the trivial\ndegeneracy of ground states due to rotations or reflections. The theory is\nillustrated by a couple of elementary examples.\n",
"title": "Theory of ground states for classical Heisenberg spin systems I"
} | null | null | null | null | true | null | 2106 | null | Default | null | null |
null | {
"abstract": " A promising research area that has recently emerged, is on how to use index\ncoding to improve the communication efficiency in distributed computing\nsystems, especially for data shuffling in iterative computations. In this\npaper, we posit that pliable index coding can offer a more efficient framework\nfor data shuffling, as it can better leverage the many possible shuffling\nchoices to reduce the number of transmissions. We theoretically analyze pliable\nindex coding under data shuffling constraints, and design a hierarchical\ndata-shuffling scheme that uses pliable coding as a component. We find benefits\nup to $O(ns/m)$ over index coding, where $ns/m$ is the average number of\nworkers caching a message, and $m$, $n$, and $s$ are the numbers of messages,\nworkers, and cache size, respectively.\n",
"title": "A Pliable Index Coding Approach to Data Shuffling"
} | null | null | null | null | true | null | 2107 | null | Default | null | null |
null | {
"abstract": " We use Monte Carlo simulations to explore the statistical challenges of\nconstraining the characteristic mass ($m_c$) and width ($\\sigma$) of a\nlognormal sub-solar initial mass function (IMF) in Local Group dwarf galaxies\nusing direct star counts. For a typical Milky Way (MW) satellite ($M_{V} =\n-8$), jointly constraining $m_c$ and $\\sigma$ to a precision of $\\lesssim 20\\%$\nrequires that observations be complete to $\\lesssim 0.2 M_{\\odot}$, if the IMF\nis similar to the MW IMF. A similar statistical precision can be obtained if\nobservations are only complete down to $0.4M_{\\odot}$, but this requires\nmeasurement of nearly 100$\\times$ more stars, and thus, a significantly more\nmassive satellite ($M_{V} \\sim -12$). In the absence of sufficiently deep data\nto constrain the low-mass turnover, it is common practice to fit a\nsingle-sloped power law to the low-mass IMF, or to fit $m_c$ for a lognormal\nwhile holding $\\sigma$ fixed. We show that the former approximation leads to\nbest-fit power law slopes that vary with the mass range observed and can\nlargely explain existing claims of low-mass IMF variations in MW satellites,\neven if satellite galaxies have the same IMF as the MW. In addition, fixing\n$\\sigma$ during fitting leads to substantially underestimated uncertainties in\nthe recovered value of $m_c$ (by a factor of $\\sim 4$ for typical\nobservations). If the IMFs of nearby dwarf galaxies are lognormal and do vary,\nobservations must reach down to $\\sim m_c$ in order to robustly detect these\nvariations. The high-sensitivity, near-infrared capabilities of JWST and WFIRST\nhave the potential to dramatically improve constraints on the low-mass IMF. We\npresent an efficient observational strategy for using these facilities to\nmeasure the IMFs of Local Group dwarf galaxies.\n",
"title": "The statistical challenge of constraining the low-mass IMF in Local Group dwarf galaxies"
} | null | null | null | null | true | null | 2108 | null | Default | null | null |
null | {
"abstract": " With recent developments in remote sensing technologies, plot-level forest\nresources can be predicted utilizing airborne laser scanning (ALS). The\nprediction is often assisted by mostly vertical summaries of the ALS point\nclouds. We present a spatial analysis of the point cloud by studying the\nhorizontal distribution of the pulse returns through canopy height models\nthresholded at different height levels. The resulting patterns of patches of\nvegetation and gabs on each layer are summarized to spatial ALS features. We\npropose new features based on the Euler number, which is the number of patches\nminus the number of gaps, and the empty-space function, which is a spatial\nsummary function of the gab space. The empty-space function is also used to\ndescribe differences in the gab structure between two different layers. We\nillustrate usefulness of the proposed spatial features for predicting different\nforest variables that summarize the spatial structure of forests or their\nbreast height diameter distribution. We employ the proposed spatial features,\nin addition to commonly used features from literature, in the well-known k-nn\nestimation method to predict the forest variables. We present the methodology\non the example of a study site in Central Finland.\n",
"title": "Spatial analysis of airborne laser scanning point clouds for predicting forest variables"
} | null | null | null | null | true | null | 2109 | null | Default | null | null |
null | {
"abstract": " We consider the reproducing kernel function of the theta Bargmann-Fock\nHilbert space associated to given full-rank lattice and pseudo-character, and\nwe deal with some of its analytical and arithmetical properties. Specially, the\ndistribution and discreteness of its zeros are examined and analytic sets\ninside a product of fundamental cells is characterized and shown to be finite\nand of cardinal less or equal to the dimension of the theta Bargmann-Fock\nHilbert space. Moreover, we obtain some remarkable lattice sums by evaluating\nthe so-called complex Hermite-Taylor coefficients. Some of them generalize some\nof the arithmetic identities established by Perelomov in the framework of\ncoherent states for the specific case of von Neumann lattice. Such complex\nHermite-Taylor coefficients are nontrivial examples of the so-called lattice's\nfunctions according the Serre terminology. The perfect use of the basic\nproperties of the complex Hermite polynomials is crucial in this framework.\n",
"title": "Analytic and arithmetic properties of the $(Γ,χ)$-automorphic reproducing kernel function"
} | null | null | null | null | true | null | 2110 | null | Default | null | null |
null | {
"abstract": " In this paper, we consider a concentration of measure problem on Riemannian\nmanifolds with boundary. We study concentration phenomena of non-negative\n$1$-Lipschitz functions with Dirichlet boundary condition around zero, which is\ncalled boundary concentration phenomena. We first examine relation between\nboundary concentration phenomena and large spectral gap phenomena of Dirichlet\neigenvalues of Laplacian. We will obtain analogue of the Gromov-V. D. Milman\ntheorem and the Funano-Shioya theorem for closed manifolds. Furthermore, to\ncapture boundary concentration phenomena, we introduce a new invariant called\nthe observable inscribed radius. We will formulate comparison theorems for such\ninvariant under a lower Ricci curvature bound, and a lower mean curvature bound\nfor the boundary. Based on such comparison theorems, we investigate various\nboundary concentration phenomena of sequences of manifolds with boundary.\n",
"title": "Concentration of $1$-Lipschitz functions on manifolds with boundary with Dirichlet boundary condition"
} | null | null | [
"Mathematics"
]
| null | true | null | 2111 | null | Validated | null | null |
null | {
"abstract": " JPEG is one of the most widely used image formats, but in some ways remains\nsurprisingly unoptimized, perhaps because some natural optimizations would go\noutside the standard that defines JPEG. We show how to improve JPEG compression\nin a standard-compliant, backward-compatible manner, by finding improved\ndefault quantization tables. We describe a simulated annealing technique that\nhas allowed us to find several quantization tables that perform better than the\nindustry standard, in terms of both compressed size and image fidelity.\nSpecifically, we derive tables that reduce the FSIM error by over 10% while\nimproving compression by over 20% at quality level 95 in our tests; we also\nprovide similar results for other quality levels. While we acknowledge our\napproach can in some images lead to visible artifacts under large\nmagnification, we believe use of these quantization tables, or additional\ntables that could be found using our methodology, would significantly reduce\nJPEG file sizes with improved overall image quality.\n",
"title": "Simulated Annealing for JPEG Quantization"
} | null | null | [
"Computer Science"
]
| null | true | null | 2112 | null | Validated | null | null |
null | {
"abstract": " We study the problems of clustering with outliers in high dimension. Though a\nnumber of methods have been developed in the past decades, it is still quite\nchallenging to design quality guaranteed algorithms with low complexities for\nthe problems. Our idea is inspired by the greedy method, Gonzalez's algorithm,\nfor solving the problem of ordinary $k$-center clustering. Based on some novel\nobservations, we show that this greedy strategy actually can handle\n$k$-center/median/means clustering with outliers efficiently, in terms of\nqualities and complexities. We further show that the greedy approach yields\nsmall coreset for the problem in doubling metrics, so as to reduce the time\ncomplexity significantly. Moreover, a by-product is that the coreset\nconstruction can be applied to speedup the popular density-based clustering\napproach DBSCAN.\n",
"title": "Greedy Strategy Works for Clustering with Outliers and Coresets Construction"
} | null | null | null | null | true | null | 2113 | null | Default | null | null |
null | {
"abstract": " Real-valued word representations have transformed NLP applications; popular\nexamples are word2vec and GloVe, recognized for their ability to capture\nlinguistic regularities. In this paper, we demonstrate a {\\em very simple}, and\nyet counter-intuitive, postprocessing technique -- eliminate the common mean\nvector and a few top dominating directions from the word vectors -- that\nrenders off-the-shelf representations {\\em even stronger}. The postprocessing\nis empirically validated on a variety of lexical-level intrinsic tasks (word\nsimilarity, concept categorization, word analogy) and sentence-level tasks\n(semantic textural similarity and { text classification}) on multiple datasets\nand with a variety of representation methods and hyperparameter choices in\nmultiple languages; in each case, the processed representations are\nconsistently better than the original ones.\n",
"title": "All-but-the-Top: Simple and Effective Postprocessing for Word Representations"
} | null | null | null | null | true | null | 2114 | null | Default | null | null |
null | {
"abstract": " Contemporary software documentation is as complicated as the software itself.\nDuring its lifecycle, the documentation accumulates a lot of near duplicate\nfragments, i.e. chunks of text that were copied from a single source and were\nlater modified in different ways. Such near duplicates decrease documentation\nquality and thus hamper its further utilization. At the same time, they are\nhard to detect manually due to their fuzzy nature. In this paper we give a\nformal definition of near duplicates and present an algorithm for their\ndetection in software documents. This algorithm is based on the exact software\nclone detection approach: the software clone detection tool Clone Miner was\nadapted to detect exact duplicates in documents. Then, our algorithm uses these\nexact duplicates to construct near ones. We evaluate the proposed algorithm\nusing the documentation of 19 open source and commercial projects. Our\nevaluation is very comprehensive - it covers various documentation types:\ndesign and requirement specifications, programming guides and API\ndocumentation, user manuals. Overall, the evaluation shows that all kinds of\nsoftware documentation contain a significant number of both exact and near\nduplicates. Next, we report on the performed manual analysis of the detected\nnear duplicates for the Linux Kernel Documentation. We present both quantative\nand qualitative results of this analysis, demonstrate algorithm strengths and\nweaknesses, and discuss the benefits of duplicate management in software\ndocuments.\n",
"title": "Detecting Near Duplicates in Software Documentation"
} | null | null | [
"Computer Science"
]
| null | true | null | 2115 | null | Validated | null | null |
null | {
"abstract": " In an $\\mathsf{L}$-embedding of a graph, each vertex is represented by an\n$\\mathsf{L}$-segment, and two segments intersect each other if and only if the\ncorresponding vertices are adjacent in the graph. If the corner of each\n$\\mathsf{L}$-segment in an $\\mathsf{L}$-embedding lies on a straight line, we\ncall it a monotone $\\mathsf{L}$-embedding. In this paper we give a full\ncharacterization of monotone $\\mathsf{L}$-embeddings by introducing a new class\nof graphs which we call \"non-jumping\" graphs. We show that a graph admits a\nmonotone $\\mathsf{L}$-embedding if and only if the graph is a non-jumping\ngraph. Further, we show that outerplanar graphs, convex bipartite graphs,\ninterval graphs, 3-leaf power graphs, and complete graphs are subclasses of\nnon-jumping graphs. Finally, we show that distance-hereditary graphs and\n$k$-leaf power graphs ($k\\le 4$) admit $\\mathsf{L}$-embeddings.\n",
"title": "L-Graphs and Monotone L-Graphs"
} | null | null | null | null | true | null | 2116 | null | Default | null | null |
null | {
"abstract": " We develop a novel family of algorithms for the online learning setting with\nregret against any data sequence bounded by the empirical Rademacher complexity\nof that sequence. To develop a general theory of when this type of adaptive\nregret bound is achievable we establish a connection to the theory of\ndecoupling inequalities for martingales in Banach spaces. When the hypothesis\nclass is a set of linear functions bounded in some norm, such a regret bound is\nachievable if and only if the norm satisfies certain decoupling inequalities\nfor martingales. Donald Burkholder's celebrated geometric characterization of\ndecoupling inequalities (1984) states that such an inequality holds if and only\nif there exists a special function called a Burkholder function satisfying\ncertain restricted concavity properties. Our online learning algorithms are\nefficient in terms of queries to this function.\nWe realize our general theory by giving novel efficient algorithms for\nclasses including lp norms, Schatten p-norms, group norms, and reproducing\nkernel Hilbert spaces. The empirical Rademacher complexity regret bound implies\n--- when used in the i.i.d. setting --- a data-dependent complexity bound for\nexcess risk after online-to-batch conversion. To showcase the power of the\nempirical Rademacher complexity regret bound, we derive improved rates for a\nsupervised learning generalization of the online learning with low rank experts\ntask and for the online matrix prediction task.\nIn addition to obtaining tight data-dependent regret bounds, our algorithms\nenjoy improved efficiency over previous techniques based on Rademacher\ncomplexity, automatically work in the infinite horizon setting, and are\nscale-free. To obtain such adaptive methods, we introduce novel machinery, and\nthe resulting algorithms are not based on the standard tools of online convex\noptimization.\n",
"title": "ZigZag: A new approach to adaptive online learning"
} | null | null | [
"Computer Science",
"Mathematics",
"Statistics"
]
| null | true | null | 2117 | null | Validated | null | null |
null | {
"abstract": " Objective: We investigate whether deep learning techniques for natural\nlanguage processing (NLP) can be used efficiently for patient phenotyping.\nPatient phenotyping is a classification task for determining whether a patient\nhas a medical condition, and is a crucial part of secondary analysis of\nhealthcare data. We assess the performance of deep learning algorithms and\ncompare them with classical NLP approaches.\nMaterials and Methods: We compare convolutional neural networks (CNNs),\nn-gram models, and approaches based on cTAKES that extract pre-defined medical\nconcepts from clinical notes and use them to predict patient phenotypes. The\nperformance is tested on 10 different phenotyping tasks using 1,610 discharge\nsummaries extracted from the MIMIC-III database.\nResults: CNNs outperform other phenotyping algorithms in all 10 tasks. The\naverage F1-score of our model is 76 (PPV of 83, and sensitivity of 71) with our\nmodel having an F1-score up to 37 points higher than alternative approaches. We\nadditionally assess the interpretability of our model by presenting a method\nthat extracts the most salient phrases for a particular prediction.\nConclusion: We show that NLP methods based on deep learning improve the\nperformance of patient phenotyping. Our CNN-based algorithm automatically\nlearns the phrases associated with each patient phenotype. As such, it reduces\nthe annotation complexity for clinical domain experts, who are normally\nrequired to develop task-specific annotation rules and identify relevant\nphrases. Our method performs well in terms of both performance and\ninterpretability, which indicates that deep learning is an effective approach\nto patient phenotyping based on clinicians' notes.\n",
"title": "Comparing Rule-Based and Deep Learning Models for Patient Phenotyping"
} | null | null | null | null | true | null | 2118 | null | Default | null | null |
null | {
"abstract": " This paper is concerned with finite sample approximations to the supremum of\na non-degenerate $U$-process of a general order indexed by a function class. We\nare primarily interested in situations where the function class as well as the\nunderlying distribution change with the sample size, and the $U$-process itself\nis not weakly convergent as a process. Such situations arise in a variety of\nmodern statistical problems. We first consider Gaussian approximations, namely,\napproximate the $U$-process supremum by the supremum of a Gaussian process, and\nderive coupling and Kolmogorov distance bounds. Such Gaussian approximations\nare, however, not often directly applicable in statistical problems since the\ncovariance function of the approximating Gaussian process is unknown. This\nmotivates us to study bootstrap-type approximations to the $U$-process\nsupremum. We propose a novel jackknife multiplier bootstrap (JMB) tailored to\nthe $U$-process, and derive coupling and Kolmogorov distance bounds for the\nproposed JMB method. All these results are non-asymptotic, and established\nunder fairly general conditions on function classes and underlying\ndistributions. Key technical tools in the proofs are new local maximal\ninequalities for $U$-processes, which may be useful in other problems. We also\ndiscuss applications of the general approximation results to testing for\nqualitative features of nonparametric functions based on generalized local\n$U$-processes.\n",
"title": "Jackknife multiplier bootstrap: finite sample approximations to the $U$-process supremum with applications"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 2119 | null | Validated | null | null |
null | {
"abstract": " All previous experiments in open turbulent flows (e.g. downstream of grids,\njet and atmospheric boundary layer) have produced quantitatively consistent\nvalues for the scaling exponents of velocity structure functions. The only\nmeasurement in closed turbulent flow (von Kármán swirling flow) using\nTaylor-hypothesis, however, produced scaling exponents that are significantly\nsmaller, suggesting that the universality of these exponents are broken with\nrespect to change of large scale geometry of the flow. Here, we report\nmeasurements of longitudinal structure functions of velocity in a von\nKármán setup without the use of Taylor-hypothesis. The measurements are\nmade using Stereo Particle Image Velocimetry at 4 different ranges of spatial\nscales, in order to observe a combined inertial subrange spanning roughly one\nand a half order of magnitude. We found scaling exponents (up to 9th order)\nthat are consistent with values from open turbulent flows, suggesting that they\nmight be in fact universal.\n",
"title": "On the universality of anomalous scaling exponents of structure functions in turbulent flows"
} | null | null | null | null | true | null | 2120 | null | Default | null | null |
null | {
"abstract": " We propose a method inspired from discrete light cone quantization (DLCQ) to\ndetermine the heat kernel for a Schrödinger field theory (Galilean boost\ninvariant with $z=2$ anisotropic scaling symmetry) living in $d+1$ dimensions,\ncoupled to a curved Newton-Cartan background starting from a heat kernel of a\nrelativistic conformal field theory ($z=1$) living in $d+2$ dimensions. We use\nthis method to show the Schrödinger field theory of a complex scalar field\ncannot have any Weyl anomalies. To be precise, we show that the Weyl anomaly\n$\\mathcal{A}^{G}_{d+1}$ for Schrödinger theory is related to the Weyl anomaly\nof a free relativistic scalar CFT $\\mathcal{A}^{R}_{d+2}$ via\n$\\mathcal{A}^{G}_{d+1}= 2\\pi \\delta (m) \\mathcal{A}^{R}_{d+2}$ where $m$ is the\ncharge of the scalar field under particle number symmetry. We provide further\nevidence of vanishing anomaly by evaluating Feynman diagrams in all orders of\nperturbation theory. We present an explicit calculation of the anomaly using a\nregulated Schrödinger operator, without using the null cone reduction\ntechnique. We generalise our method to show that a similar result holds for one\ntime derivative theories with even $z>2$.\n",
"title": "On the Heat Kernel and Weyl Anomaly of Schrödinger invariant theory"
} | null | null | null | null | true | null | 2121 | null | Default | null | null |
null | {
"abstract": " Phylogenetic networks generalise phylogenetic trees and allow for the\naccurate representation of the evolutionary history of a set of present-day\nspecies whose past includes reticulate events such as hybridisation and lateral\ngene transfer. One way to obtain such a network is by starting with a (rooted)\nphylogenetic tree $T$, called a base tree, and adding arcs between arcs of $T$.\nThe class of phylogenetic networks that can be obtained in this way is called\ntree-based networks and includes the prominent classes of tree-child and\nreticulation-visible networks. Initially defined for binary phylogenetic\nnetworks, tree-based networks naturally extend to arbitrary phylogenetic\nnetworks. In this paper, we generalise recent tree-based characterisations and\nassociated proximity measures for binary phylogenetic networks to arbitrary\nphylogenetic networks. These characterisations are in terms of matchings in\nbipartite graphs, path partitions, and antichains. Some of the generalisations\nare straightforward to establish using the original approach, while others\nrequire a very different approach. Furthermore, for an arbitrary tree-based\nnetwork $N$, we characterise the support trees of $N$, that is, the tree-based\nembeddings of $N$. We use this characterisation to give an explicit formula for\nthe number of support trees of $N$ when $N$ is binary. This formula is written\nin terms of the components of a bipartite graph.\n",
"title": "Tree-based networks: characterisations, metrics, and support trees"
} | null | null | null | null | true | null | 2122 | null | Default | null | null |
null | {
"abstract": " Bibliometric indicators, citation counts and/or download counts are\nincreasingly being used to inform personnel decisions such as hiring or\npromotions. These statistics are very often misused. Here we provide a guide to\nthe factors which should be considered when using these so-called quantitative\nmeasures to evaluate people. Rules of thumb are given for when begin to use\nbibliometric measures when comparing otherwise similar candidates.\n",
"title": "Comparing People with Bibliometrics"
} | null | null | null | null | true | null | 2123 | null | Default | null | null |
null | {
"abstract": " Unprecedented human mobility has driven the rapid urbanization around the\nworld. In China, the fraction of population dwelling in cities increased from\n17.9% to 52.6% between 1978 and 2012. Such large-scale migration poses\nchallenges for policymakers and important questions for researchers. To\ninvestigate the process of migrant integration, we employ a one-month complete\ndataset of telecommunication metadata in Shanghai with 54 million users and 698\nmillion call logs. We find systematic differences between locals and migrants\nin their mobile communication networks and geographical locations. For\ninstance, migrants have more diverse contacts and move around the city with a\nlarger radius than locals after they settle down. By distinguishing new\nmigrants (who recently moved to Shanghai) from settled migrants (who have been\nin Shanghai for a while), we demonstrate the integration process of new\nmigrants in their first three weeks. Moreover, we formulate classification\nproblems to predict whether a person is a migrant. Our classifier is able to\nachieve an F1-score of 0.82 when distinguishing settled migrants from locals,\nbut it remains challenging to identify new migrants because of class imbalance.\nThis classification setup holds promise for identifying new migrants who will\nsuccessfully integrate into locals (new migrants that misclassified as locals).\n",
"title": "Urban Dreams of Migrants: A Case Study of Migrant Integration in Shanghai"
} | null | null | null | null | true | null | 2124 | null | Default | null | null |
null | {
"abstract": " Flexibility in shape and scale of the Burr XII distribution can make close\napproximation of numerous well-known probability density functions. Due to\nthese capabilities, the Burr XII distribution is applied in risk\nanalysis, lifetime data analysis and process capability estimation. In this\npaper the Cross-Entropy (CE) method is further developed in terms of Maximum\nLikelihood Estimation (MLE) to estimate the parameters of the Burr XII distribution\nfor the complete data or in the presence of multiple censoring. A simulation\nstudy is conducted to evaluate the performance of the MLE by means of the CE method\nfor different parameter settings and sample sizes. The results are compared to\nother existing methods in both uncensored and censored situations.\n",
"title": "A computational method for estimating Burr XII parameters with complete and multiple censored data"
} | null | null | null | null | true | null | 2125 | null | Default | null | null |
null | {
"abstract": " Argo floats measure seawater temperature and salinity in the upper 2,000 m of\nthe global ocean. Statistical analysis of the resulting spatio-temporal dataset\nis challenging due to its nonstationary structure and large size. We propose\nmapping these data using locally stationary Gaussian process regression where\ncovariance parameter estimation and spatio-temporal prediction are carried out\nin a moving-window fashion. This yields computationally tractable nonstationary\nanomaly fields without the need to explicitly model the nonstationary\ncovariance structure. We also investigate Student-$t$ distributed fine-scale\nvariation as a means to account for non-Gaussian heavy tails in ocean\ntemperature data. Cross-validation studies comparing the proposed approach with\nthe existing state-of-the-art demonstrate clear improvements in point\npredictions and show that accounting for the nonstationarity and\nnon-Gaussianity is crucial for obtaining well-calibrated uncertainties. This\napproach also provides data-driven local estimates of the spatial and temporal\ndependence scales for the global ocean which are of scientific interest in\ntheir own right.\n",
"title": "Locally stationary spatio-temporal interpolation of Argo profiling float data"
} | null | null | null | null | true | null | 2126 | null | Default | null | null |
null | {
"abstract": " Theories of knowledge reuse posit two distinct processes: reuse for\nreplication and reuse for innovation. We identify another distinct process,\nreuse for customization. Reuse for customization is a process in which\ndesigners manipulate the parameters of metamodels to produce models that\nfulfill their personal needs. We test hypotheses about reuse for customization\nin Thingiverse, a community of designers that shares files for\nthree-dimensional printing. 3D metamodels are reused more often than the 3D\nmodels they generate. The reuse of metamodels is amplified when the metamodels\nare created by designers with greater community experience. Metamodels make the\ncommunity's design knowledge available for reuse for customization, or further\nextension of the metamodels, a kind of reuse for innovation.\n",
"title": "Knowledge Reuse for Customization: Metamodels in an Open Design Community for 3d Printing"
} | null | null | [
"Computer Science"
]
| null | true | null | 2127 | null | Validated | null | null |
null | {
"abstract": " We consider the problem of matching applicants to posts where applicants have\npreferences over posts. Thus the input to our problem is a bipartite graph G =\n(A U P,E), where A denotes a set of applicants, P is a set of posts, and there\nare ranks on edges which denote the preferences of applicants over posts. A\nmatching M in G is called rank-maximal if it matches the maximum number of\napplicants to their rank 1 posts, subject to this the maximum number of\napplicants to their rank 2 posts, and so on.\nWe consider this problem in a dynamic setting, where vertices and edges can\nbe added and deleted at any point. Let n and m be the number of vertices and\nedges in an instance G, and r be the maximum rank used by any rank-maximal\nmatching in G. We give a simple O(r(m+n))-time algorithm to update an existing\nrank-maximal matching under each of these changes. When r = o(n), this is\nfaster than recomputing a rank-maximal matching completely using a known\nalgorithm like that of Irving et al., which takes time O(min(r + n,\nr*sqrt(n)) * m).\n",
"title": "Dynamic Rank Maximal Matchings"
} | null | null | null | null | true | null | 2128 | null | Default | null | null |
null | {
"abstract": " One of the most challenging problems in technological forecasting is to\nidentify as early as possible those technologies that have the potential to\nlead to radical changes in our society. In this paper, we use the US patent\ncitation network (1926-2010) to test our ability to early identify a list of\nhistorically significant patents through citation network analysis. We show\nthat in order to effectively uncover these patents shortly after they are\nissued, we need to go beyond raw citation counts and take into account both the\ncitation network topology and temporal information. In particular, an\nage-normalized measure of patent centrality, called rescaled PageRank, allows\nus to identify the significant patents earlier than citation count and PageRank\nscore. In addition, we find that while high-impact patents tend to rely on\nother high-impact patents in a similar way as scientific papers, the patents'\ncitation dynamics is significantly slower than that of papers, which makes the\nearly identification of significant patents more challenging than that of\nsignificant papers.\n",
"title": "Early identification of important patents through network centrality"
} | null | null | [
"Computer Science"
]
| null | true | null | 2129 | null | Validated | null | null |
null | {
"abstract": " In this paper we study the ideal variable bandwidth kernel density estimator\nintroduced by McKay (1993) and Jones, McKay and Hu (1994) and the plug-in\npractical version of the variable bandwidth kernel estimator with two sequences\nof bandwidths as in Giné and Sang (2013). Based on the bias and variance\nanalysis of the ideal and true variable bandwidth kernel density estimators, we\nstudy the central limit theorems for each of them.\n",
"title": "Central limit theorem for the variable bandwidth kernel density estimators"
} | null | null | null | null | true | null | 2130 | null | Default | null | null |
null | {
"abstract": " The increase of vehicles on highways, as well as on normal roadways, may cause\ntraffic congestion. Predicting the traffic flow on highways in particular is\nneeded to solve this congestion problem. Predictions on time-series\nmultivariate data, such as in the traffic flow dataset, have largely been\naccomplished through various approaches. The approach with conventional\nprediction algorithms, such as the Support Vector Machine (SVM), is only\ncapable of accommodating predictions that are independent in each time unit.\nHence, the sequential relationships in this time series data are hardly\nexplored. Continuous Conditional Random Field (CCRF) is one of the Probabilistic\nGraphical Model (PGM) algorithms which can accommodate this problem. The\nneighboring aspects of sequential data, such as in time series data, can be\nexpressed by CCRF so that its predictions are more reliable. In this article, a\nnovel approach called DM-CCRF is adopted by modifying the CCRF prediction\nalgorithm to strengthen the probability of the predictions made by the baseline\nregressor. The result shows that DM-CCRF is superior in performance compared to\nCCRF. This is validated by a significant decrease of the baseline error by up to\n9%, twice that of standard CCRF, which can only decrease the baseline error by\n4.582% at most.\n",
"title": "Distance-to-Mean Continuous Conditional Random Fields to Enhance Prediction Problem in Traffic Flow Data"
} | null | null | null | null | true | null | 2131 | null | Default | null | null |
null | {
"abstract": " For VSLAM (Visual Simultaneous Localization and Mapping), localization is a\nchallenging task, especially in difficult situations: textureless\nframes, motion blur, etc. To build a robust exploration and localization\nsystem in a given space or environment, a submap-based VSLAM system is proposed\nin this paper. Our system uses a submap back-end and a visual front-end. The\nmain advantage of our system is its robustness with respect to tracking\nfailure, a common problem in current VSLAM algorithms. The robustness of our\nsystem is compared with the state-of-the-art in terms of average tracking\npercentage. The precision of our system is also evaluated in terms of the RMSE\n(root mean square error) of the ATE (absolute trajectory error), compared to the\nstate-of-the-art. The ability of our system to solve the `kidnapped' problem\nis demonstrated. Our system can improve the robustness of visual localization\nin challenging situations.\n",
"title": "Submap-based Pose-graph Visual SLAM: A Robust Visual Exploration and Localization System"
} | null | null | null | null | true | null | 2132 | null | Default | null | null |
null | {
"abstract": " Dielectronic recombination (DR) is the dominant mode of recombination in\nmagnetically confined fusion plasmas for intermediate to low-charged ions of W.\nComplete, final-state resolved partial isonuclear W DR rate coefficient data is\nrequired for detailed collisional-radiative modelling for such plasmas in\npreparation for the upcoming fusion experiment ITER. To realize this\nrequirement, we continue {\\it The Tungsten Project} by presenting our\ncalculations for tungsten ions W$^{55+}$ to W$^{38+}$. As per our prior\ncalculations for W$^{73+}$ to W$^{56+}$, we use the collision package {\\sc\nautostructure} to calculate partial and total DR rate coefficients for all\nrelevant core-excitations in intermediate coupling (IC) and configuration\naverage (CA) using $\\kappa$-averaged relativistic wavefunctions. Radiative\nrecombination (RR) rate coefficients are also calculated for the purpose of\nevaluating ionization fractions. Comparison of our DR rate coefficients for\nW$^{46+}$ with other authors yields agreement to within 7-19\\% at peak\nabundance, verifying the reliability of our method. Comparison of partial DR\nrate coefficients calculated in IC and CA yields differences of a factor\n$\\sim{2}$ at peak abundance temperature, highlighting the importance of\nrelativistic configuration mixing. Large differences are observed between\nionization fractions calculated using our recombination rate coefficient data\nand that of Pütterich et al. [Plasma Phys. and Control. Fusion 50, 085016\n(2008)]. These differences are attributed to deficiencies in the average-atom\nmethod used by the latter to calculate their data.\n",
"title": "Partial and Total Dielectronic Recombination Rate Coefficients for W$^{55+}$ to W$^{38+}$"
} | null | null | [
"Physics"
]
| null | true | null | 2133 | null | Validated | null | null |
null | {
"abstract": " Structural nested mean models (SNMMs) are among the fundamental tools for\ninferring causal effects of time-dependent exposures from longitudinal studies.\nWith binary outcomes, however, current methods for estimating multiplicative\nand additive SNMM parameters suffer from variation dependence between the\ncausal SNMM parameters and the non-causal nuisance parameters. Estimation\nmethods for logistic SNMMs do not suffer from this dependence. Unfortunately,\nin contrast with the multiplicative and additive models, unbiased estimation of\nthe causal parameters of a logistic SNMM relies on additional modeling\nassumptions even when the treatment probabilities are known. These difficulties\nhave hindered the uptake of SNMMs in epidemiological practice, where binary\noutcomes are common. We solve the variation dependence problem for the binary\nmultiplicative SNMM by a reparametrization of the non-causal nuisance\nparameters. Our novel nuisance parameters are variation independent of the\ncausal parameters, and hence allow the fitting of a multiplicative SNMM by\nunconstrained maximum likelihood. It also allows one to construct true (i.e.\ncongenial) doubly robust estimators of the causal parameters. Along the way, we\nprove that an additive SNMM with binary outcomes does not admit a variation\nindependent parametrization, thus explaining why we restrict ourselves to the\nmultiplicative SNMM.\n",
"title": "Congenial Causal Inference with Binary Structural Nested Mean Models"
} | null | null | null | null | true | null | 2134 | null | Default | null | null |
null | {
"abstract": " We develop a reinforcement learning based search assistant which can assist\nusers through a set of actions and sequence of interactions to enable them\nrealize their intent. Our approach caters to subjective search where the user\nis seeking digital assets such as images which is fundamentally different from\nthe tasks which have objective and limited search modalities. Labeled\nconversational data is generally not available in such search tasks and\ntraining the agent through human interactions can be time consuming. We propose\na stochastic virtual user which impersonates a real user and can be used to\nsample user behavior efficiently to train the agent which accelerates the\nbootstrapping of the agent. We develop A3C algorithm based context preserving\narchitecture which enables the agent to provide contextual assistance to the\nuser. We compare the A3C agent with Q-learning and evaluate its performance on\naverage rewards and state values it obtains with the virtual user in validation\nepisodes. Our experiments show that the agent learns to achieve higher rewards\nand better states.\n",
"title": "Improving Search through A3C Reinforcement Learning based Conversational Agent"
} | null | null | null | null | true | null | 2135 | null | Default | null | null |
null | {
"abstract": " A nonparametric fuel consumption model is developed and used for eco-routing\nalgorithm development in this paper. Six months of driving information from the\ncity of Ann Arbor is collected from 2,000 vehicles. The road grade information\nfrom more than 1,100 km of road network is modeled and the software Autonomie\nis used to calculate fuel consumption for all trips on the road network. Four\ndifferent routing strategies including shortest distance, shortest time,\neco-routing, and travel-time-constrained eco-routing are compared. The results\nshow that eco-routing can reduce fuel consumption, but may increase travel\ntime. A travel-time-constrained eco-routing algorithm is developed to keep most\nof the fuel saving benefit while incurring very little increase in travel time.\n",
"title": "Eco-Routing based on a Data Driven Fuel Consumption Model"
} | null | null | [
"Statistics"
]
| null | true | null | 2136 | null | Validated | null | null |
null | {
"abstract": " A vast majority of computation in the brain is performed by spiking neural\nnetworks. Despite the ubiquity of such spiking, we currently lack an\nunderstanding of how biological spiking neural circuits learn and compute\nin-vivo, as well as how we can instantiate such capabilities in artificial\nspiking circuits in-silico. Here we revisit the problem of supervised learning\nin temporally coding multi-layer spiking neural networks. First, by using a\nsurrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based\nthree factor learning rule capable of training multi-layer networks of\ndeterministic integrate-and-fire neurons to perform nonlinear computations on\nspatiotemporal spike patterns. Second, inspired by recent results on feedback\nalignment, we compare the performance of our learning rule under different\ncredit assignment strategies for propagating output errors to hidden units.\nSpecifically, we test uniform, symmetric and random feedback, finding that\nsimpler tasks can be solved with any type of feedback, while more complex tasks\nrequire symmetric feedback. In summary, our results open the door to obtaining\na better scientific understanding of learning and computation in spiking neural\nnetworks by advancing our ability to train them to solve nonlinear problems\ninvolving transformations between different spatiotemporal spike-time patterns.\n",
"title": "SuperSpike: Supervised learning in multi-layer spiking neural networks"
} | null | null | null | null | true | null | 2137 | null | Default | null | null |
null | {
"abstract": " Winds from the North-West quadrant and lack of precipitation are known to\nlead to an increase of PM10 concentrations over a residential neighborhood in\nthe city of Taranto (Italy). In 2012 the local government prescribed a\nreduction of industrial emissions by 10% every time such meteorological\nconditions are forecasted 72 hours in advance. Wind forecasting is addressed\nusing the Weather Research and Forecasting (WRF) atmospheric simulation system\nby the Regional Environmental Protection Agency. In the context of\ndistributions-oriented forecast verification, we propose a comprehensive\nmodel-based inferential approach to investigate the ability of the WRF system\nto forecast the local wind speed and direction allowing different performances\nfor unknown weather regimes. Ground-observed and WRF-forecasted wind speed and\ndirection at a relevant location are jointly modeled as a 4-dimensional time\nseries with an unknown finite number of states characterized by homogeneous\ndistributional behavior. The proposed model relies on a mixture of joint\nprojected and skew normal distributions with time-dependent states, where the\ntemporal evolution of the state membership follows a first order Markov\nprocess. Parameter estimates, including the number of states, are obtained by a\nBayesian MCMC-based method. Results provide useful insights on the performance\nof WRF forecasts in relation to different combinations of wind speed and\ndirection.\n",
"title": "Distributions-oriented wind forecast verification by a hidden Markov model for multivariate circular-linear data"
} | null | null | null | null | true | null | 2138 | null | Default | null | null |
null | {
"abstract": " We study randomly initialized residual networks using mean field theory and\nthe theory of difference equations. Classical feedforward neural networks, such\nas those with tanh activations, exhibit exponential behavior on the average\nwhen propagating inputs forward or gradients backward. The exponential forward\ndynamics causes rapid collapsing of the input space geometry, while the\nexponential backward dynamics causes drastic vanishing or exploding gradients.\nWe show, in contrast, that by adding skip connections, the network will,\ndepending on the nonlinearity, adopt subexponential forward and backward\ndynamics, and in many cases in fact polynomial. The exponents of these\npolynomials are obtained through analytic methods and proved and verified\nempirically to be correct. In terms of the \"edge of chaos\" hypothesis, these\nsubexponential and polynomial laws allow residual networks to \"hover over the\nboundary between stability and chaos,\" thus preserving the geometry of the\ninput space and the gradient information flow. In our experiments, for each\nactivation function we study here, we initialize residual networks with\ndifferent hyperparameters and train them on MNIST. Remarkably, our\ninitialization time theory can accurately predict test time performance of\nthese networks, by tracking either the expected amount of gradient explosion or\nthe expected squared distance between the images of two input vectors.\nImportantly, we show, theoretically as well as empirically, that common\ninitializations such as the Xavier or the He schemes are not optimal for\nresidual networks, because the optimal initialization variances depend on the\ndepth. Finally, we have made mathematical contributions by deriving several new\nidentities for the kernels of powers of ReLU functions by relating them to the\nzeroth Bessel function of the second kind.\n",
"title": "Mean Field Residual Networks: On the Edge of Chaos"
} | null | null | null | null | true | null | 2139 | null | Default | null | null |
null | {
"abstract": " We obtain a structure theorem for the group of holomorphic automorphisms of a\nconformally Kähler, Einstein-Maxwell metric, extending the classical results\nof Matsushima, Lichnerowicz and Calabi in the Kähler-Einstein, cscK, and\nextremal Kähler cases. Combined with previous results of LeBrun,\nApostolov-Maschler and Futaki-Ono, this completes the classification of the\nconformally Kähler, Einstein-Maxwell metrics on $\\mathbb{CP}^1 \\times\n\\mathbb{CP}^1$. We also use our result in order to introduce a (relative)\nMabuchi energy in the more general context of $(K, q, a)$-extremal Kähler\nmetrics in a given Kähler class, and show that the existence of $(K, q,\na)$-extremal Kähler metrics is stable under small deformation of the Kähler\nclass, the Killing vector field $K$ and the normalization constant $a$.\n",
"title": "Automorphisms and deformations of conformally Kähler, Einstein-Maxwell metrics"
} | null | null | [
"Mathematics"
]
| null | true | null | 2140 | null | Validated | null | null |
null | {
"abstract": " The vision systems of the eagle and the snake outperform everything that we\ncan make in the laboratory, but snakes and eagles cannot build an eyeglass or a\ntelescope or a microscope. (Judea Pearl)\n",
"title": "Human-Level Intelligence or Animal-Like Abilities?"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2141 | null | Validated | null | null |
null | {
"abstract": " I present a web service for querying an embedding of entities in the Wikidata\nknowledge graph. The embedding is trained on the Wikidata dump using Gensim's\nWord2Vec implementation and a simple graph walk. A REST API is implemented.\nTogether with the Wikidata API the web service exposes a multilingual resource\nfor over 600'000 Wikidata items and properties.\n",
"title": "Wembedder: Wikidata entity embedding web service"
} | null | null | null | null | true | null | 2142 | null | Default | null | null |
null | {
"abstract": " We describe MELEE, a meta-learning algorithm for learning a good exploration\npolicy in the interactive contextual bandit setting. Here, an algorithm must\ntake actions based on contexts, and learn based only on a reward signal from\nthe action taken, thereby generating an exploration/exploitation trade-off.\nMELEE addresses this trade-off by learning a good exploration strategy for\noffline tasks based on synthetic data, on which it can simulate the contextual\nbandit setting. Based on these simulations, MELEE uses an imitation learning\nstrategy to learn a good exploration policy that can then be applied to true\ncontextual bandit tasks at test time. We compare MELEE to seven strong baseline\ncontextual bandit algorithms on a set of three hundred real-world datasets, on\nwhich it outperforms alternatives in most settings, especially when differences\nin rewards are large. Finally, we demonstrate the importance of having a rich\nfeature representation for learning how to explore.\n",
"title": "Meta-Learning for Contextual Bandit Exploration"
} | null | null | null | null | true | null | 2143 | null | Default | null | null |
null | {
"abstract": " We present a many-body theory that explains and reproduces recent\nobservations of population polarization dynamics, is supported by controlled\nhuman experiments, and addresses the controversy surrounding the Internet's\nimpact. It predicts that whether and how a population becomes polarized is\ndictated by the nature of the underlying competition, rather than the validity\nof the information that individuals receive or their online bubbles. Building\non this framework, we show that next-generation social media algorithms aimed\nat pulling people together, will instead likely lead to an explosive\npercolation process that generates new pockets of extremes.\n",
"title": "Population polarization dynamics and next-generation social media algorithms"
} | null | null | null | null | true | null | 2144 | null | Default | null | null |
null | {
"abstract": " The potential of an efficient ride-sharing scheme to significantly reduce\ntraffic congestion, lower emission level, as well as facilitating the\nintroduction of smart cities has been widely demonstrated. This positive thrust\nhowever is faced with several delaying factors, one of which is the volatility\nand unpredictability of the potential benefit (or utilization) of ride-sharing\nat different times, and in different places. In this work the following\nresearch questions are posed: (a) Is ride-sharing utilization stable over time\nor does it undergo significant changes? (b) If ride-sharing utilization is\ndynamic, can it be correlated with some traceable features of the traffic? and\n(c) If ride-sharing utilization is dynamic, can it be predicted ahead of time?\nWe analyze a dataset of over 14 Million taxi trips taken in New York City. We\npropose a dynamic travel network approach for modeling and forecasting the\npotential ride-sharing utilization over time, showing it to be highly volatile.\nIn order to model the utilization's dynamics we propose a network-centric\napproach, projecting the aggregated traffic taken from continuous time periods\ninto a feature space comprised of topological features of the network implied\nby this traffic. This feature space is then used to model the dynamics of\nride-sharing utilization over time. The results of our analysis demonstrate the\nsignificant volatility of ride-sharing utilization over time, indicating that\nany policy, design or plan that would disregard this aspect and choose a static\nparadigm would undoubtedly be either highly inefficient or provide insufficient\nresources. We show that using our suggested approach it is possible to model\nthe potential utilization of ride sharing based on the topological properties\nof the rides network. We also show that using this method the potential\nutilization can be forecast a few hours ahead of time.\n",
"title": "Ride Sharing and Dynamic Networks Analysis"
} | null | null | [
"Computer Science",
"Physics"
]
| null | true | null | 2145 | null | Validated | null | null |
null | {
"abstract": " Pillared Graphene Frameworks are a novel class of microporous materials made\nby graphene sheets separated by organic spacers. One of their main features is\nthat the pillar type and density can be chosen to tune the material properties.\nIn this work, we present a computer simulation study of adsorption and dynamics\nof H$_{2}$, CH$_{4}$, CO$_{2}$, N$_{2}$ and O$_{2}$ and binary mixtures\nthereof, in Pillared Graphene Frameworks with nitrogen-containing organic\nspacers. In general, we find that pillar density plays the most important role\nin determining gas adsorption. In the low-pressure regime (< 10 bar) the amount\nof gas adsorbed is an increasing function of pillar density. At higher\npressures the opposite trend is observed. Diffusion coefficients were computed\nfor representative structures taking into account the framework flexibility\nthat is essential in assessing the dynamical properties of the adsorbed gases.\nGood performance for the gas separation in CH$_{4}$/H$_{2}$, CO$_{2}$/H$_{2}$\nand CO$_{2}$/N$_{2}$ mixtures was found with values comparable to those of\nmetal-organic frameworks and zeolites.\n",
"title": "Gas Adsorption and Dynamics in Pillared Graphene Frameworks"
} | null | null | null | null | true | null | 2146 | null | Default | null | null |
null | {
"abstract": " Recent progress in computer vision has been dominated by deep neural networks\ntrained over large amounts of labeled data. Collecting such datasets is however\na tedious, often impossible task; hence a surge in approaches relying solely on\nsynthetic data for their training. For depth images however, discrepancies with\nreal scans still noticeably affect the end performance. We thus propose an\nend-to-end framework which simulates the whole mechanism of these devices,\ngenerating realistic depth data from 3D models by comprehensively modeling\nvital factors e.g. sensor noise, material reflectance, surface geometry. Not\nonly does our solution cover a wider range of sensors and achieve more\nrealistic results than previous methods, assessed through extended evaluation,\nbut we go further by measuring the impact on the training of neural networks\nfor various recognition tasks; demonstrating how our pipeline seamlessly\nintegrates such architectures and consistently enhances their performance.\n",
"title": "DepthSynth: Real-Time Realistic Synthetic Data Generation from CAD Models for 2.5D Recognition"
} | null | null | null | null | true | null | 2147 | null | Default | null | null |
null | {
"abstract": " We prove that for every $n \\in \\mathbb{N}$ and $\\delta>0$ there exists a word\n$w_n \\in F_2$ of length $n^{2/3} \\log(n)^{3+\\delta}$ which is a law for every\nfinite group of order at most $n$. This improves upon the main result of [A.\nThom, About the length of laws for finite groups, Isr. J. Math.]. As an\napplication we prove a new lower bound on the residual finiteness growth of\nnon-abelian free groups.\n",
"title": "Short Laws for Finite Groups and Residual Finiteness Growth"
} | null | null | null | null | true | null | 2148 | null | Default | null | null |
null | {
"abstract": " We theoretically address spin chain analogs of the Kitaev quantum spin model\non the honeycomb lattice. The emergent quantum spin liquid phases or Anderson\nresonating valence bond (RVB) states can be understood, as an effective model,\nin terms of p-wave superconductivity and Majorana fermions. We derive a\ngeneralized phase diagram for the two-leg ladder system with tunable\ninteraction strengths between chains allowing us to vary the shape of the\nlattice (from square to honeycomb ribbon or brickwall ladder). We evaluate the\nwinding number associated with possible emergent (topological) gapless modes at\nthe edges. In the Az phase, as a result of the emergent Z2 gauge fields and\npi-flux ground state, one may build spin-1/2 (loop) qubit operators by analogy\nto the toric code. In addition, we show how the intermediate gapless B phase\nevolves in the generalized ladder model. For the brickwall ladder, the $B$\nphase is reduced to one line, which is analyzed through perturbation theory in\na rung tensor product states representation and bosonization. Finally, we show\nthat doping with a few holes can result in the formation of hole pairs and\nleads to a mapping with the Su-Schrieffer-Heeger model in polyacetylene; a\nsuperconducting-insulating quantum phase transition for these hole pairs is\naccessible, as well as related topological properties.\n",
"title": "Majorana Spin Liquids, Topology and Superconductivity in Ladders"
} | null | null | null | null | true | null | 2149 | null | Default | null | null |
null | {
"abstract": " Motivated by applications in biological science, we propose a novel test to\nassess the conditional mean dependence of a response variable on a large number\nof covariates. Our procedure is built on the martingale difference divergence\nrecently proposed in Shao and Zhang (2014), and it is able to detect a certain\ntype of departure from the null hypothesis of conditional mean independence\nwithout making any specific model assumptions. Theoretically, we establish the\nasymptotic normality of the proposed test statistic under suitable assumption\non the eigenvalues of a Hermitian operator, which is constructed based on the\ncharacteristic function of the covariates. These conditions can be simplified\nunder banded dependence structure on the covariates or Gaussian design. To\naccount for heterogeneity within the data, we further develop a testing\nprocedure for conditional quantile independence at a given quantile level and\nprovide an asymptotic justification. Empirically, our test of conditional mean\nindependence delivers comparable results to the competitor, which was\nconstructed under the linear model framework, when the underlying model is\nlinear. It significantly outperforms the competitor when the conditional mean\nadmits a nonlinear form.\n",
"title": "Conditional Mean and Quantile Dependence Testing in High Dimension"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 2150 | null | Validated | null | null |
null | {
"abstract": " Motivated by applications that arise in online social media and collaboration
networks, there has been a lot of work on community-search and team-formation
problems. In the former class of problems, the goal is to find a subgraph that
satisfies a certain connectivity requirement and contains a given collection of
seed nodes. In the latter class of problems, on the other hand, the goal is to
find individuals who collectively have the skills required for a task and form
a connected subgraph with certain properties.
In this paper, we extend both the community-search and the team-formation
problems by associating each individual with a profile. The profile is a
numeric score that quantifies the position of an individual with respect to a
topic. We adopt a model where each individual starts with a latent profile and
arrives at a conformed profile through a dynamic conformation process, which
takes into account the individual's social interactions and the tendency to
conform with one's social environment. In this framework, social tension arises
from the differences between the conformed profiles of neighboring individuals
as well as from differences between individuals' conformed and latent profiles.
Given a network of individuals, their latent profiles and this conformation
process, we extend the community-search and the team-formation problems by
requiring the output subgraphs to have low social tension. From the technical
point of view, we study the complexity of these problems and propose algorithms
for solving them effectively. Our experimental evaluation on a number of social
networks reveals the efficacy and efficiency of our methods.\n",
"title": "Finding low-tension communities"
} | null | null | null | null | true | null | 2151 | null | Default | null | null |
null | {
"abstract": " PBW degenerations are a particularly nice family of flat degenerations of\ntype A flag varieties. We show that the cohomology of any PBW degeneration of\nthe flag variety surjects onto the cohomology of the original flag variety, and\nthat this holds in an equivariant setting too. We also prove that the same is\ntrue in the symplectic setting when considering Feigin's linear degeneration of\nthe symplectic flag variety.\n",
"title": "Cohomology of the flag variety under PBW degenerations"
} | null | null | [
"Mathematics"
]
| null | true | null | 2152 | null | Validated | null | null |
null | {
"abstract": " This paper is concerned with the problem of exact MAP inference in general
higher-order graphical models by means of a traditional linear programming
relaxation approach. In fact, the proof that we have developed in this paper is
a rather simple algebraic proof, made straightforward above all by the
introduction of two novel algebraic tools. Indeed, on the one hand, we
introduce the notion of delta-distribution, which merely stands for the
difference of two arbitrary probability distributions, and which mainly serves
to alleviate the sign constraint inherent to a traditional probability
distribution. On the other hand, we develop an approximation framework for
general discrete functions by means of an orthogonal projection expressed in
terms of linear combinations of function margins with respect to a given
collection of point subsets, though we rather exploit the latter approach for
the purpose of modeling locally consistent sets of discrete functions from a
global perspective. After that, as a first step, we develop from scratch the
expectation optimization framework, which is nothing else than a reformulation,
on stochastic grounds, of the convex-hull approach; as a second step, we
develop the traditional LP relaxation of such an expectation optimization
approach, and we show that it enables solving the MAP inference problem in
graphical models under rather general assumptions. Last but not least, we
describe an algorithm which allows computing an exact MAP solution from a
perhaps fractional optimal (probability) solution of the proposed LP
relaxation.\n",
"title": "Exact MAP inference in general higher-order graphical models using linear programming"
} | null | null | [
"Computer Science"
]
| null | true | null | 2153 | null | Validated | null | null |
null | {
"abstract": " Weyl semimetals (WSMs) have recently attracted a great deal of attention as
they provide condensed matter realization of chiral anomaly, feature
topologically protected Fermi arc surface states and sustain sharp chiral Weyl
quasiparticles up to a critical disorder at which a continuous quantum phase
transition (QPT) drives the system into a metallic phase. We here numerically
demonstrate that with increasing strength of disorder the Fermi arc gradually
loses its sharpness, and close to the WSM-metal QPT it completely dissolves
into the metallic bath of the bulk. The predicted topological nature of the
WSM-metal QPT and the resulting bulk-boundary correspondence across this
transition can directly be observed in
angle-resolved photo-emission spectroscopy (ARPES) and Fourier-transformed
scanning-tunneling-microscopy (STM) measurements by following the continuous
deformation of the Fermi arcs with increasing disorder in recently discovered
Weyl materials.\n",
"title": "Dissolution of topological Fermi arcs in a dirty Weyl semimetal"
} | null | null | null | null | true | null | 2154 | null | Default | null | null |
null | {
"abstract": " We analyze the performance of a class of time-delay first-order consensus
networks from a graph topological perspective and present methods to improve
it. The performance is measured by the square of the network's H-2 norm, and it
is shown to be a convex function of the Laplacian eigenvalues and the coupling
weights of the underlying graph of the network. First, we propose a
tight convex, but simple, approximation of the performance measure in order to
achieve lower complexity in our design problems by eliminating the need for
eigen-decomposition. The effect of time-delay reincarnates itself in the form
of non-monotonicity, which results in nonintuitive behaviors of the performance
as a function of graph topology. Next, we present three methods to improve the
performance by growing, re-weighting, or sparsifying the underlying graph of
the network. It is shown that our suggested algorithms provide near-optimal
solutions with lower complexity than existing methods in the literature.\n",
"title": "Performance Improvement in Noisy Linear Consensus Networks with Time-Delay"
} | null | null | null | null | true | null | 2155 | null | Default | null | null |
null | {
"abstract": " We explore the problem of intersection classification using monocular\non-board passive vision, with the goal of classifying traffic scenes with\nrespect to road topology. We divide the existing approaches into two broad\ncategories according to the type of input data: (a) first person vision (FPV)\napproaches, which use an egocentric view sequence as the intersection is\npassed; and (b) third person vision (TPV) approaches, which use a single view\nimmediately before entering the intersection. The FPV and TPV approaches each\nhave advantages and disadvantages. Therefore, we aim to combine them into a\nunified deep learning framework. Experimental results show that the proposed\nFPV-TPV scheme outperforms previous methods and only requires minimal FPV/TPV\nmeasurements.\n",
"title": "Use of First and Third Person Views for Deep Intersection Classification"
} | null | null | null | null | true | null | 2156 | null | Default | null | null |
null | {
"abstract": " We study the Liouville heat kernel (in the $L^2$ phase) associated with a\nclass of logarithmically correlated Gaussian fields on the two dimensional\ntorus. We show that for each $\\varepsilon>0$ there exists such a field, whose\ncovariance is a bounded perturbation of that of the two dimensional Gaussian\nfree field, and such that the associated Liouville heat kernel satisfies the\nshort time estimates, $$ \\exp \\left( - t^{ - \\frac 1 { 1 + \\frac 1 2 \\gamma^2 }\n- \\varepsilon } \\right) \\le p_t^\\gamma (x, y) \\le \\exp \\left( - t^{- \\frac 1 {\n1 + \\frac 1 2 \\gamma^2 } + \\varepsilon } \\right) , $$ for $\\gamma<1/2$. In\nparticular, these are different from predictions, due to Watabiki, concerning\nthe Liouville heat kernel for the two dimensional Gaussian free field.\n",
"title": "On the Liouville heat kernel for k-coarse MBRW and nonuniversality"
} | null | null | [
"Mathematics"
]
| null | true | null | 2157 | null | Validated | null | null |
null | {
"abstract": " Isoperimetric inequalities form a very intuitive yet powerful\ncharacterization of the connectedness of a state space, that has proven\nsuccessful in obtaining convergence bounds. Since the seventies they form an\nessential tool in differential geometry, graph theory and Markov chain\nanalysis. In this paper we use isoperimetric inequalities to construct a bound\non the convergence time of any local probabilistic evolution that leaves its\nlimit distribution invariant. We illustrate how this general result leads to\nnew bounds on convergence times beyond the explicit Markovian setting, among\nothers on quantum dynamics.\n",
"title": "Bounding the convergence time of local probabilistic evolution"
} | null | null | null | null | true | null | 2158 | null | Default | null | null |
null | {
"abstract": " This chapter revisits the concept of excitability, a basic system property of\nneurons. The focus is on excitable systems regarded as behaviors rather than\ndynamical systems. By this we mean open systems modulated by specific\ninterconnection properties rather than closed systems classified by their\nparameter ranges. Modeling, analysis, and synthesis questions can be formulated\nin the classical language of circuit theory. The input-output characterization\nof excitability is in terms of the local sensitivity of the current-voltage\nrelationship. It suggests the formulation of novel questions for non-linear\nsystem theory, inspired by questions from experimental neurophysiology.\n",
"title": "Excitable behaviors"
} | null | null | null | null | true | null | 2159 | null | Default | null | null |
null | {
"abstract": " We give the first examples of closed Laplacian solitons which are shrinking,\nand in particular produce closed Laplacian flow solutions with a finite-time\nsingularity. Extremally Ricci pinched G2-structures (introduced by Bryant)\nwhich are steady Laplacian solitons have also been found. All the examples are\nleft-invariant G2-structures on solvable Lie groups.\n",
"title": "Laplacian solitons: questions and homogeneous examples"
} | null | null | null | null | true | null | 2160 | null | Default | null | null |
null | {
"abstract": " The Markoff group of transformations is a group $\\Gamma$ of affine integral\nmorphisms, which is known to act transitively on the set of all positive\ninteger solutions to the equation $x^{2}+y^{2}+z^{2}=xyz$. The fundamental\nstrong approximation conjecture for the Markoff equation states that for every\nprime $p$, the group $\\Gamma$ acts transitively on the set\n$X^{*}\\left(p\\right)$ of non-zero solutions to the same equation over\n$\\mathbb{Z}/p\\mathbb{Z}$. Recently, Bourgain, Gamburd and Sarnak proved this\nconjecture for all primes outside a small exceptional set.\nIn the current paper, we study a group of permutations obtained by the action\nof $\\Gamma$ on $X^{*}\\left(p\\right)$, and show that for most primes, it is the\nfull symmetric or alternating group. We use this result to deduce that $\\Gamma$\nacts transitively also on the set of non-zero solutions in a big class of\ncomposite moduli.\nOur result is also related to a well-known theorem of Gilman, stating that\nfor any finite non-abelian simple group $G$ and $r\\ge3$, the group\n$\\mathrm{Aut}\\left(F_{r}\\right)$ acts on at least one $T_{r}$-system of $G$ as\nthe alternating or symmetric group. In this language, our main result\ntranslates to that for most primes $p$, the group\n$\\mathrm{Aut}\\left(F_{2}\\right)$ acts on a particular $T_{2}$-system of\n$\\mathrm{PSL}\\left(2,p\\right)$ as the alternating or symmetric group.\n",
"title": "The Markoff Group of Transformations in Prime and Composite Moduli"
} | null | null | null | null | true | null | 2161 | null | Default | null | null |
null | {
"abstract": " Consider a spin manifold M, equipped with a line bundle L and an action of a
compact Lie group G. We can attach to this data a family Theta(k) of
distributions on the dual of the Lie algebra of G. The aim of this paper is to
study the asymptotic behaviour of Theta(k) when k is large and M is possibly
non-compact, and to explore a functorial consequence of this formula for
reduced spaces.\n",
"title": "The equivariant index of twisted Dirac operators and semi-classical limits"
} | null | null | null | null | true | null | 2162 | null | Default | null | null |
null | {
"abstract": " Participatory budgeting is one of the exciting developments in deliberative\ngrassroots democracy. We concentrate on approval elections and propose\nproportional representation axioms in participatory budgeting, by generalizing\nrelevant axioms for approval-based multi-winner elections. We observe a rich\nlandscape with respect to the computational complexity of identifying\nproportional budgets and computing such, and present budgeting methods that\nsatisfy these axioms by identifying budgets that are representative to the\ndemands of vast segments of the voters.\n",
"title": "Proportionally Representative Participatory Budgeting: Axioms and Algorithms"
} | null | null | null | null | true | null | 2163 | null | Default | null | null |
null | {
"abstract": " This paper presents the first estimate of the seasonal cycle of ocean and sea
ice net heat and freshwater (FW) fluxes around the boundary of the Arctic
Ocean. The ocean transports are estimated primarily using 138 moored
instruments deployed in September 2005 to August 2006 across the four main
Arctic gateways: Davis, Fram and Bering Straits, and the Barents Sea Opening
(BSO). Sea ice transports are estimated from a sea ice assimilation product.
Monthly velocity fields are calculated with a box inverse model that enforces
volume and salinity conservation. The resulting net ocean and sea ice heat and
FW fluxes (annual mean $\pm$ 1 standard deviation) are 175 $\pm$48 TW and 204
$\pm$85 mSv (respectively; 1 Sv = 10$^{6} m^{3} s^{-1}$). These boundary fluxes
accurately represent the annual means of the relevant surface fluxes. Oceanic
net heat transport variability is driven by temperature variability in the
upper part of the water column and by volume transport variability in the
Atlantic Water layer. Oceanic net FW transport variability is dominated by
Bering Strait velocity variability. The net water mass transformation in the
Arctic entails a freshening and cooling of inflowing waters by 0.62$\pm$0.23 in
salinity and 3.74$\pm$0.76°C in temperature, respectively, and a reduction in
density by 0.23$\pm$0.20 kg m$^{-3}$. The volume transport into the Arctic of
waters associated with this water mass transformation is 11.3$\pm$1.2 Sv, and
the export is -11.4$\pm$1.1 Sv. The boundary heat and FW fluxes provide a
benchmark data set for the validation of numerical models and atmospheric
re-analysis products.\n",
"title": "The Arctic Ocean seasonal cycles of heat and freshwater fluxes: observation-based inverse estimates"
} | null | null | null | null | true | null | 2164 | null | Default | null | null |
null | {
"abstract": " We give a simple proof of a standard zero-free region in the $t$-aspect for\nthe Rankin--Selberg $L$-function $L(s,\\pi \\times \\widetilde{\\pi})$ for any\nunitary cuspidal automorphic representation $\\pi$ of\n$\\mathrm{GL}_n(\\mathbb{A}_F)$ that is tempered at every nonarchimedean place\noutside a set of Dirichlet density zero.\n",
"title": "Standard Zero-Free Regions for Rankin--Selberg L-Functions via Sieve Theory"
} | null | null | null | null | true | null | 2165 | null | Default | null | null |
null | {
"abstract": " Binary stars can interact via mass transfer when one member (the primary)\nascends onto a giant branch. The amount of gas ejected by the binary and the\namount of gas accreted by the secondary over the lifetime of the primary\ninfluence the subsequent binary phenomenology. Some of the gas ejected by the\nbinary will remain gravitationally bound and its distribution will be closely\nrelated to the formation of planetary nebulae. We investigate the nature of\nmass transfer in binary systems containing an AGB star by adding radiative\ntransfer to the AstroBEAR AMR Hydro/MHD code.\n",
"title": "Mass transfer in asymptotic-giant-branch binary systems"
} | null | null | null | null | true | null | 2166 | null | Default | null | null |
null | {
"abstract": " This paper deals with skew ruled surfaces in the Euclidean space
$\mathbb{E}^{3}$ which are equipped with polar normalizations, that is,
relative normalizations such that the relative normal at each point of the
ruled surface lies on the corresponding polar plane. We determine the
invariants of such a normalized ruled surface and we study some properties of
the Tchebychev vector field and the support vector field of a polar
normalization. Furthermore, we study a special polar normalization, the
relative image of which degenerates into a curve.\n",
"title": "On polar relative normalizations of ruled surfaces"
} | null | null | [
"Mathematics"
]
| null | true | null | 2167 | null | Validated | null | null |
null | {
"abstract": " Ultrafast X-ray imaging provides high resolution information on individual
fragile specimens such as aerosols, metastable particles, superfluid quantum
systems and live biospecimens, which is inaccessible with conventional imaging
techniques. Coherent X-ray diffractive imaging, however, suffers from intrinsic
loss of phase, and therefore structure recovery is often complicated and not
always uniquely defined. Here, we introduce the method of in-flight holography,
where we use nanoclusters as reference X-ray scatterers in order to encode
relative phase information into diffraction patterns of a virus. The resulting
hologram contains an unambiguous three-dimensional map of a virus and two
nanoclusters with the highest lateral resolution so far achieved via single
shot X-ray holography. Our approach unlocks the benefits of holography for
ultrafast X-ray imaging of nanoscale, non-periodic systems and paves the way to
direct observation of complex electron dynamics down to the attosecond time
scale.\n",
"title": "Femtosecond X-ray Fourier holography imaging of free-flying nanoparticles"
} | null | null | [
"Physics"
]
| null | true | null | 2168 | null | Validated | null | null |
null | {
"abstract": " We decompose returns for portfolios of bottom-ranked, lower-priced assets\nrelative to the market into rank crossovers and changes in the relative price\nof those bottom-ranked assets. This decomposition is general and consistent\nwith virtually any asset pricing model. Crossovers measure changes in rank and\nare smoothly increasing over time, while return fluctuations are driven by\nvolatile relative price changes. Our results imply that in a closed,\ndividend-free market in which the relative price of bottom-ranked assets is\napproximately constant, a portfolio of those bottom-ranked assets will\noutperform the market portfolio over time. We show that bottom-ranked relative\ncommodity futures prices have increased only slightly, and confirm the\nexistence of substantial excess returns predicted by our theory. If these\nexcess returns did not exist, then top-ranked relative prices would have had to\nbe much higher in 2018 than those actually observed -- this would imply a\nradically different commodity price distribution.\n",
"title": "The Rank Effect"
} | null | null | null | null | true | null | 2169 | null | Default | null | null |
null | {
"abstract": " Nearly all autonomous robotic systems use some form of motion planning to\ncompute reference motions through their environment. An increasing use of\nautonomous robots in a broad range of applications creates a need for\nefficient, general purpose motion planning algorithms that are applicable in\nany of these new application domains.\nThis thesis presents a resolution complete optimal kinodynamic motion\nplanning algorithm based on a direct forward search of the set of admissible\ninput signals to a dynamical model. The advantage of this generalized label\ncorrecting method is that it does not require a local planning subroutine as in\nthe case of related methods.\nPreliminary material focuses on new topological properties of the canonical\nproblem formulation that are used to show continuity of the performance\nobjective. These observations are used to derive a generalization of Bellman's\nprinciple of optimality in the context of kinodynamic motion planning. A\ngeneralized label correcting algorithm is then proposed which leverages these\nresults to prune candidate input signals from the search when their cost is\ngreater than related signals.\nThe second part of this thesis addresses admissible heuristics for\nkinodynamic motion planning. An admissibility condition is derived that can be\nused to verify the admissibility of candidate heuristics for a particular\nproblem. This condition also characterizes a convex set of admissible\nheuristics.\nA linear program is formulated to obtain a heuristic which is as close to the\noptimal cost-to-go as possible while remaining admissible. This optimization is\njustified by showing its solution coincides with the solution to the\nHamilton-Jacobi-Bellman equation. Lastly, a sum-of-squares relaxation of this\ninfinite-dimensional linear program is proposed for obtaining provably\nadmissible approximate solutions.\n",
"title": "The Generalized Label Correcting Method for Optimal Kinodynamic Motion Planning"
} | null | null | null | null | true | null | 2170 | null | Default | null | null |
null | {
"abstract": " For years, recursive neural networks (RvNNs) have been shown to be suitable\nfor representing text into fixed-length vectors and achieved good performance\non several natural language processing tasks. However, the main drawback of\nRvNNs is that they require structured input, which makes data preparation and\nmodel implementation hard. In this paper, we propose Gumbel Tree-LSTM, a novel\ntree-structured long short-term memory architecture that learns how to compose\ntask-specific tree structures only from plain text data efficiently. Our model\nuses Straight-Through Gumbel-Softmax estimator to decide the parent node among\ncandidates dynamically and to calculate gradients of the discrete decision. We\nevaluate the proposed model on natural language inference and sentiment\nanalysis, and show that our model outperforms or is at least comparable to\nprevious models. We also find that our model converges significantly faster\nthan other models.\n",
"title": "Learning to Compose Task-Specific Tree Structures"
} | null | null | [
"Computer Science"
]
| null | true | null | 2171 | null | Validated | null | null |
null | {
"abstract": " Consider the Navier-Stokes flow in 3-dimensional exterior domains, where a
rigid body is translating with prescribed translational velocity
$-h(t)u_\infty$ with constant vector $u_\infty\in \mathbb R^3\setminus\{0\}$.
Finn raised the question whether his steady solutions are attainable as limits
for $t\to\infty$ of unsteady solutions starting from a motionless state when
$h(t)=1$ after some finite time and $h(0)=0$ (starting problem). This was
affirmatively solved by Galdi, Heywood and Shibata for small $u_\infty$. We
study a generalized situation in which unsteady solutions start from large
motions being in $L^3$. We then conclude that the steady solutions for small
$u_\infty$ are still attainable as limits of evolution of those fluid motions
which are found as a sort of weak solutions. The opposite situation, in which
$h(t)=0$ after some finite time and $h(0)=1$ (landing problem), is also
discussed. In this latter case, the rest state is attainable no matter how
large $u_\infty$ is.\n",
"title": "Navier-Stokes flow past a rigid body: attainability of steady solutions as limits of unsteady weak solutions, starting and landing cases"
} | null | null | null | null | true | null | 2172 | null | Default | null | null |
null | {
"abstract": " Metabolic fluxes in cells are governed by physical, biochemical,
physiological, and economic principles. Cells may show \"economical\" behaviour,
trading metabolic performance against the costly side-effects of high enzyme or
metabolite concentrations. Some constraint-based flux prediction methods score
fluxes by heuristic flux costs as proxies of enzyme investments. However,
linear cost functions ignore enzyme kinetics and the tight coupling between
fluxes, metabolite levels and enzyme levels. To derive more realistic cost
functions, I define an apparent \"enzymatic flux cost\" as the minimal enzyme
cost at which the fluxes can be realised in a given kinetic model, and a
\"kinetic flux cost\", which includes metabolite cost. I discuss the mathematical
properties of such flux cost functions, their usage for flux prediction, and
their importance for cells' metabolic strategies. The enzymatic flux cost
scales linearly with the fluxes and is a concave function on the flux polytope.
The costs of two flows are usually not additive, due to an additional
\"compromise cost\". Between flux polytopes, where fluxes change their
directions, the enzymatic cost shows a jump. With strictly concave flux cost
functions, cells can reduce their enzymatic cost by running different fluxes in
different cell compartments or at different moments in time. The enzymatic
flux cost can be translated into an approximated cell growth rate, a convex
function on the flux polytope. Growth-maximising metabolic states can be
predicted by Flux Cost Minimisation (FCM), a variant of FBA based on general
flux cost functions. The solutions are flux distributions in corners of the
flux polytope, i.e. typically elementary flux modes. Enzymatic flux costs can
be linearly or nonlinearly approximated, providing model parameters for linear
FBA based on kinetic parameters and extracellular concentrations, and justified
by a kinetic model.\n",
"title": "Flux cost functions and the choice of metabolic fluxes"
} | null | null | null | null | true | null | 2173 | null | Default | null | null |
null | {
"abstract": " Archetypal analysis is a type of factor analysis where data is fit by a\nconvex polytope whose corners are \"archetypes\" of the data, with the data\nrepresented as a convex combination of these archetypal points. While\narchetypal analysis has been used on biological data, it has not achieved\nwidespread adoption because most data are not well fit by a convex polytope in\neither the ambient space or after standard data transformations. We propose a\nnew approach to archetypal analysis. Instead of fitting a convex polytope\ndirectly on data or after a specific data transformation, we train a neural\nnetwork (AAnet) to learn a transformation under which the data can best fit\ninto a polytope. We validate this approach on synthetic data where we add\nnonlinearity. Here, AAnet is the only method that correctly identifies the\narchetypes. We also demonstrate AAnet on two biological datasets. In a T cell\ndataset measured with single cell RNA-sequencing, AAnet identifies several\narchetypal states corresponding to naive, memory, and cytotoxic T cells. In a\ndataset of gut microbiome profiles, AAnet recovers both previously described\nmicrobiome states and identifies novel extrema in the data. Finally, we show\nthat AAnet has generative properties allowing us to uniformly sample from the\ndata geometry even when the input data is not uniformly distributed.\n",
"title": "Finding Archetypal Spaces for Data Using Neural Networks"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2174 | null | Validated | null | null |
null | {
"abstract": " A function from Baire space to the natural numbers is called formally\ncontinuous if it is induced by a morphism between the corresponding formal\nspaces. We compare formal continuity to two other notions of continuity on\nBaire space working in Bishop constructive mathematics: one is a function\ninduced by a Brouwer-operation (i.e. inductively defined neighbourhood\nfunction); the other is a function uniformly continuous near every compact\nimage. We show that formal continuity is equivalent to the former while it is\nstrictly stronger than the latter.\n",
"title": "Formally continuous functions on Baire space"
} | null | null | null | null | true | null | 2175 | null | Default | null | null |
null | {
"abstract": " We consider the problem of universal joint clustering and registration of\nimages and define algorithms using multivariate information functionals. We\nfirst study registering two images using maximum mutual information and prove\nits asymptotic optimality. We then show the shortcomings of pairwise\nregistration in multi-image registration, and design an asymptotically optimal\nalgorithm based on multiinformation. Further, we define a novel multivariate\ninformation functional to perform joint clustering and registration of images,\nand prove consistency of the algorithm. Finally, we consider registration and\nclustering of numerous limited-resolution images, defining algorithms that are\norder-optimal in scaling of number of pixels in each image with the number of\nimages.\n",
"title": "Universal Joint Image Clustering and Registration using Partition Information"
} | null | null | null | null | true | null | 2176 | null | Default | null | null |
null | {
"abstract": " We consider a general relation between fixed point stability of suitably\nperturbed transfer operators and convergence to equilibrium (a notion which is\nstrictly related to decay of correlations). We apply this relation to\ndeterministic perturbations of a class of (piecewise) partially hyperbolic skew\nproducts whose behavior on the preserved fibration is dominated by the\nexpansion of the base map. In particular we apply the results to power law\nmixing toral extensions. It turns out that in this case, the dependence of the\nphysical measure on small deterministic perturbations, in a suitable\nanisotropic metric is at least Holder continuous, with an exponent which is\nexplicitly estimated depending on the arithmetical properties of the system. We\nshow explicit examples of toral extensions having actually Holder stability and\nnon differentiable dependence of the physical measure on perturbations.\n",
"title": "Quantitative statistical stability and speed of convergence to equilibrium for partially hyperbolic skew products"
} | null | null | [
"Mathematics"
]
| null | true | null | 2177 | null | Validated | null | null |
null | {
"abstract": " The redundancy for universal lossless compression of discrete memoryless\nsources in Campbell's setting is characterized as a minimax Rényi divergence,\nwhich is shown to be equal to the maximal $\\alpha$-mutual information via a\ngeneralized redundancy-capacity theorem. Special attention is placed on the\nanalysis of the asymptotics of minimax Rényi divergence, which is determined\nup to a term vanishing in blocklength.\n",
"title": "Minimax Rényi Redundancy"
} | null | null | null | null | true | null | 2178 | null | Default | null | null |
null | {
"abstract": " In deep learning, performance is strongly affected by the choice of\narchitecture and hyperparameters. While there has been extensive work on\nautomatic hyperparameter optimization for simple spaces, complex spaces such as\nthe space of deep architectures remain largely unexplored. As a result, the\nchoice of architecture is done manually by the human expert through a slow\ntrial and error process guided mainly by intuition. In this paper we describe a\nframework for automatically designing and training deep models. We propose an\nextensible and modular language that allows the human expert to compactly\nrepresent complex search spaces over architectures and their hyperparameters.\nThe resulting search spaces are tree-structured and therefore easy to traverse.\nModels can be automatically compiled to computational graphs once values for\nall hyperparameters have been chosen. We can leverage the structure of the\nsearch space to introduce different model search algorithms, such as random\nsearch, Monte Carlo tree search (MCTS), and sequential model-based optimization\n(SMBO). We present experiments comparing the different algorithms on CIFAR-10\nand show that MCTS and SMBO outperform random search. In addition, these\nexperiments show that our framework can be used effectively for model\ndiscovery, as it is possible to describe expressive search spaces and discover\ncompetitive models without much effort from the human expert. Code for our\nframework and experiments has been made publicly available.\n",
"title": "DeepArchitect: Automatically Designing and Training Deep Architectures"
} | null | null | null | null | true | null | 2179 | null | Default | null | null |
null | {
"abstract": " We propose ultranarrow dynamical control of population oscillation (PO)\nbetween ground states through the polarization content of an input bichromatic\nfield. Appropriate engineering of classical interference between optical fields\nresults in PO arising exclusively from optical pumping. Contrary to the\nexpected broad spectral response associated with optical pumping, we obtain\nsubnatural linewidth in complete absence of quantum interference. The\nellipticity of the light polarizations can be used for temporal shaping of the\nPO leading to generation of multiple sidebands even at low light level.\n",
"title": "Dynamical control of atoms with polarized bichromatic weak field"
} | null | null | null | null | true | null | 2180 | null | Default | null | null |
null | {
"abstract": " Bryant, Horsley, Maenhaut and Smith recently gave necessary and sufficient\nconditions for when the complete multigraph can be decomposed into cycles of\nspecified lengths $m_1,m_2,\\ldots,m_\\tau$. In this paper we characterise\nexactly when there exists a packing of the complete multigraph with cycles of\nspecified lengths $m_1,m_2,\\ldots,m_\\tau$. While cycle decompositions can give\nrise to packings by removing cycles from the decomposition, in general it is\nnot known when there exists a packing of the complete multigraph with cycles of\nvarious specified lengths.\n",
"title": "Cycle packings of the complete multigraph"
} | null | null | null | null | true | null | 2181 | null | Default | null | null |
null | {
"abstract": " Modern statistical inference tasks often require iterative optimization\nmethods to approximate the solution. Convergence analysis from optimization\nonly tells us how well we are approximating the solution deterministically, but\noverlooks the sampling nature of the data. However, due to the randomness in\nthe data, statisticians are keen to provide uncertainty quantification, or\nconfidence, for the answer obtained after certain steps of optimization.\nTherefore, it is important yet challenging to understand the sampling\ndistribution of the iterative optimization methods.\nThis paper makes some progress along this direction by introducing a new\nstochastic optimization method for statistical inference, the moment adjusted\nstochastic gradient descent. We establish non-asymptotic theory that\ncharacterizes the statistical distribution of the iterative methods, with good\noptimization guarantee. On the statistical front, the theory allows for model\nmisspecification, with very mild conditions on the data. For optimization, the\ntheory is flexible for both the convex and non-convex cases. Remarkably, the\nmoment adjusting idea motivated from \"error standardization\" in statistics\nachieves similar effect as Nesterov's acceleration in optimization, for certain\nconvex problems as in fitting generalized linear models. We also demonstrate\nthis acceleration effect in the non-convex setting through experiments.\n",
"title": "Statistical Inference for the Population Landscape via Moment Adjusted Stochastic Gradients"
} | null | null | null | null | true | null | 2182 | null | Default | null | null |
null | {
"abstract": " Passive Kerr cavities driven by coherent laser fields display a rich\nlandscape of nonlinear physics, including bistability, pattern formation, and\nlocalised dissipative structures (solitons). Their conceptual simplicity has\nfor several decades offered an unprecedented window into nonlinear cavity\ndynamics, providing insights into numerous systems and applications ranging\nfrom all-optical memory devices to microresonator frequency combs. Yet despite\nthe decades of study, a recent theoretical study has surprisingly alluded to an\nentirely new and unexplored paradigm in the regime where nonlinearly tilted\ncavity resonances overlap with one another [T. Hansson and S. Wabnitz, J. Opt.\nSoc. Am. B 32, 1259 (2015)]. We have used synchronously driven fiber ring\nresonators to experimentally access this regime, and observed the rise of new\nnonlinear dissipative states. Specifically, we have observed, for the first\ntime to the best of our knowledge, the stable coexistence of dissipative\n(cavity) solitons and extended modulation instability (Turing) patterns, and\nperformed real time measurements that unveil the dynamics of the ensuing\nnonlinear structures. When operating in the regime of continuous wave\ntristability, we have further observed the coexistence of two distinct cavity\nsoliton states, one of which can be identified as a \"super\" cavity soliton as\npredicted by Hansson and Wabnitz. Our experimental findings are in excellent\nagreement with theoretical analyses and numerical simulations of the\ninfinite-dimensional Ikeda map that governs the cavity dynamics. The results\nfrom our work reveal that experimental systems can support complex combinations\nof distinct nonlinear states, and they could have practical implications to\nfuture microresonator-based frequency comb sources.\n",
"title": "Super cavity solitons and the coexistence of multiple nonlinear states in a tristable passive Kerr resonator"
} | null | null | null | null | true | null | 2183 | null | Default | null | null |
null | {
"abstract": " We present a framework that connects three interesting classes of groups: the\ntwisted groups (also known as Suzuki-Ree groups), the mixed groups and the\nexotic pseudo-reductive groups.\nFor a given characteristic p, we construct categories of twisted and mixed\nschemes. Ordinary schemes are a full subcategory of the mixed schemes. Mixed\nschemes arise from a twisted scheme by base change, although not every mixed\nscheme arises this way. The group objects in these categories are called\ntwisted and mixed group schemes.\nOur main theorems state: (1) The twisted Chevalley groups ${}^2\\mathsf B_2$,\n${}^2\\mathsf G_2$ and ${}^2\\mathsf F_4$ arise as rational points of twisted\ngroup schemes. (2) The mixed groups in the sense of Tits arise as rational\npoints of mixed group schemes over mixed fields. (3) The exotic\npseudo-reductive groups of Conrad, Gabber and Prasad are Weil restrictions of\nmixed group schemes.\n",
"title": "Twisting and Mixing"
} | null | null | [
"Mathematics"
]
| null | true | null | 2184 | null | Validated | null | null |
null | {
"abstract": " Previous studies have demonstrated the empirical success of word embeddings\nin various applications. In this paper, we investigate the problem of learning\ndistributed representations for text documents which many machine learning\nalgorithms take as input for a number of NLP tasks.\nWe propose a neural network model, KeyVec, which learns document\nrepresentations with the goal of preserving key semantics of the input text. It\nenables the learned low-dimensional vectors to retain the topics and important\ninformation from the documents that will flow to downstream tasks. Our\nempirical evaluations show the superior quality of KeyVec representations in\ntwo different document understanding tasks.\n",
"title": "KeyVec: Key-semantics Preserving Document Representations"
} | null | null | null | null | true | null | 2185 | null | Default | null | null |
null | {
"abstract": " The bound to factor large integers is dominated by the computational effort\nto discover numbers that are smooth, typically performed by sieving a\npolynomial sequence. On a von Neumann architecture, sieving has log-log\namortized time complexity to check each value for smoothness. This work\npresents a neuromorphic sieve that achieves a constant time check for\nsmoothness by exploiting two characteristic properties of neuromorphic\narchitectures: constant time synaptic integration and massively parallel\ncomputation. The approach is validated by modifying msieve, one of the fastest\npublicly available integer factorization implementations, to use the IBM\nNeurosynaptic System (NS1e) as a coprocessor for the sieving stage.\n",
"title": "Integer Factorization with a Neuromorphic Sieve"
} | null | null | null | null | true | null | 2186 | null | Default | null | null |
null | {
"abstract": " The increasing uptake of residential batteries has led to suggestions that\nthe prevalence of batteries on LV networks will serendipitously mitigate the\ntechnical problems induced by PV installations. However, in general, the\neffects of PV-battery systems on LV networks have not been well studied. Given\nthis background, in this paper, we test the assertion that the uncoordinated\noperation of batteries improves network performance. In order to carry out this\nassessment, we develop a methodology for incorporating home energy management\n(HEM) operational decisions within a Monte Carlo (MC) power flow analysis\ncomprising three parts. First, due to the unavailability of large number of\nload and PV traces required for MC analysis, we used a maximum a-posteriori\nDirichlet process to generate statistically representative synthetic profiles.\nSecond, a policy function approximation (PFA) that emulates the outputs of the\nHEM solver is implemented to provide battery scheduling policies for a pool of\ncustomers, making simulation of optimization-based HEM feasible within MC\nstudies. Third, the resulting net loads are used in a MC power flow time series\nstudy. The efficacy of our method is shown on three typical LV feeders. Our\nassessment finds that uncoordinated PV-battery systems have little beneficial\nimpact on LV networks.\n",
"title": "Probabilistic Assessment of PV-Battery System Impacts on LV Distribution Networks"
} | null | null | null | null | true | null | 2187 | null | Default | null | null |
null | {
"abstract": " The evolution of cellular technologies toward 5G progressively enables\nefficient and ubiquitous communications in an increasing number of fields.\nAmong these, vehicular networks are being considered as one of the most\npromising and challenging applications, requiring support for communications in\nhigh-speed mobility and delay-constrained information exchange in proximity. In\nthis context, simulation frameworks under the OMNeT++ umbrella are already\navailable: SimuLTE and Veins for cellular and vehicular systems, respectively.\nIn this paper, we describe the modifications that make SimuLTE interoperable\nwith Veins and INET, which leverage the OMNeT++ paradigm, and allow us to\nachieve our goal without any modification to either of the latter two. We\ndiscuss the limitations of the previous solution, namely VeinsLTE, which\nintegrates all three in a single framework, thus preventing independent\nevolution and upgrades of each building block.\n",
"title": "Simulating Cellular Communications in Vehicular Networks: Making SimuLTE Interoperable with Veins"
} | null | null | null | null | true | null | 2188 | null | Default | null | null |
null | {
"abstract": " Training deep neural networks with Stochastic Gradient Descent, or its\nvariants, requires careful choice of both learning rate and batch size. While\nsmaller batch sizes generally converge in fewer training epochs, larger batch\nsizes offer more parallelism and hence better computational efficiency. We have\ndeveloped a new training approach that, rather than statically choosing a\nsingle batch size for all epochs, adaptively increases the batch size during\nthe training process. Our method delivers the convergence rate of small batch\nsizes while achieving performance similar to large batch sizes. We analyse our\napproach using the standard AlexNet, ResNet, and VGG networks operating on the\npopular CIFAR-10, CIFAR-100, and ImageNet datasets. Our results demonstrate\nthat learning with adaptive batch sizes can improve performance by factors of\nup to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1%\nrelative to training with fixed batch sizes.\n",
"title": "AdaBatch: Adaptive Batch Sizes for Training Deep Neural Networks"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2189 | null | Validated | null | null |
null | {
"abstract": " We report the design, fabrication and characterization of ultralight highly\nemissive metaphotonic structures with record-low mass/area that emit thermal\nradiation efficiently over a broad spectral (2 to 35 microns) and angular (0-60\ndegrees) range. The structures comprise one to three pairs of alternating\nnanometer-scale metallic and dielectric layers, and have measured effective 300\nK hemispherical emissivities of 0.7 to 0.9. To our knowledge, these structures,\nwhich are all subwavelength in thickness are the lightest reported metasurfaces\nwith comparable infrared emissivity. The superior optical properties, together\nwith their mechanical flexibility, low outgassing, and low areal mass, suggest\nthat these metasurfaces are candidates for thermal management in applications\ndemanding of ultralight flexible structures, including aerospace applications,\nultralight photovoltaics, lightweight flexible electronics, and textiles for\nthermal insulation.\n",
"title": "Extremely broadband ultralight thermally emissive metasurfaces"
} | null | null | null | null | true | null | 2190 | null | Default | null | null |
null | {
"abstract": " We investigate spatial evolutionary games with death-birth updating in large\nfinite populations. Within growing spatial structures subject to appropriate\nconditions, the density processes of a fixed type are proven to converge to the\nWright-Fisher diffusions with drift. In addition, convergence in the\nWasserstein distance of the laws of their occupation measures holds. The proofs\nof these results develop along an equivalence between the laws of the\nevolutionary games and certain voter models and rely on the analogous results\nof voter models on large finite sets by convergences of the Radon-Nikodym\nderivative processes. As another application of this equivalence of laws, we\nshow that in a general, large population of size $N$, for which the stationary\nprobabilities of the corresponding voting kernel are comparable to uniform\nprobabilities, a first-derivative test among the major methods for these\nevolutionary games is applicable at least up to weak selection strengths in the\nusual biological sense (that is, selection strengths of the order $\\mathcal\nO(1/N)$).\n",
"title": "Wright-Fisher diffusions for evolutionary games with death-birth updating"
} | null | null | null | null | true | null | 2191 | null | Default | null | null |
null | {
"abstract": " We provide the first analysis of a non-trivial quantization scheme for\ncompressed sensing measurements arising from structured measurements.\nSpecifically, our analysis studies compressed sensing matrices consisting of\nrows selected at random, without replacement, from a circulant matrix generated\nby a random subgaussian vector. We quantize the measurements using stable,\npossibly one-bit, Sigma-Delta schemes, and use a reconstruction method based on\nconvex optimization. We show that the part of the reconstruction error due to\nquantization decays polynomially in the number of measurements. This is in line\nwith analogous results on Sigma-Delta quantization associated with random\nGaussian or subgaussian matrices, and significantly better than results\nassociated with the widely assumed memoryless scalar quantization. Moreover, we\nprove that our approach is stable and robust; i.e., the reconstruction error\ndegrades gracefully in the presence of non-quantization noise and when the\nunderlying signal is not strictly sparse. The analysis relies on results\nconcerning subgaussian chaos processes as well as a variation of McDiarmid's\ninequality.\n",
"title": "Quantized Compressed Sensing for Partial Random Circulant Matrices"
} | null | null | null | null | true | null | 2192 | null | Default | null | null |
null | {
"abstract": " This paper proposes a detailed optimal scheduling model of an exemplar\nmulti-energy system comprising combined cycle power plants (CCPPs), battery\nenergy storage systems, renewable energy sources, boilers, thermal energy\nstorage systems, electric loads and thermal loads. The proposed model considers\nthe detailed start-up and shutdown power trajectories of the gas turbines,\nsteam turbines and boilers. Furthermore, a practical, multi-energy load\nmanagement scheme is proposed within the framework of the optimal scheduling\nproblem. The proposed load management scheme utilizes the flexibility offered\nby system components such as flexible electrical pump loads, electrical\ninterruptible loads and a flexible thermal load to reduce the overall energy\ncost of the system. The efficacy of the proposed model in reducing the energy\ncost of the system is demonstrated in the context of a day-ahead scheduling\nproblem using four illustrative scenarios.\n",
"title": "Optimal Scheduling of Multi-Energy Systems with Flexible Electrical and Thermal Loads"
} | null | null | null | null | true | null | 2193 | null | Default | null | null |
null | {
"abstract": " Justification Awareness Models, JAMs, were proposed by S.~Artemov as a tool\nfor modelling epistemic scenarios like Russell's Prime Minister example. It was\ndemonstrated that the sharpness and the injective property of a model play an\nessential role in the epistemic usage of JAMs. The problem to axiomatize these\nproperties using the propositional justification language was left open. We\npropose the solution and define a decidable justification logic Jref that is\nsound and complete with respect to the class of all sharp injective\njustification models.\n",
"title": "On the sharpness and the injective property of basic justification models"
} | null | null | null | null | true | null | 2194 | null | Default | null | null |
null | {
"abstract": " The splendid success of convolutional neural networks (CNNs) in computer\nvision is largely attributed to the availability of large annotated datasets,\nsuch as ImageNet and Places. However, in biomedical imaging, it is very\nchallenging to create such large annotated datasets, as annotating biomedical\nimages is not only tedious, laborious, and time consuming, but also demanding\nof costly, specialty-oriented skills, which are not easily accessible. To\ndramatically reduce annotation cost, this paper presents a novel method to\nnaturally integrate active learning and transfer learning (fine-tuning) into a\nsingle framework, called AFT*, which starts directly with a pre-trained CNN to\nseek \"worthy\" samples for annotation and gradually enhance the (fine-tuned) CNN\nvia continuous fine-tuning. We have evaluated our method in three distinct\nbiomedical imaging applications, demonstrating that it can cut the annotation\ncost by at least half, in comparison with the state-of-the-art method. This\nperformance is attributed to the several advantages derived from the advanced\nactive, continuous learning capability of our method. Although AFT* was\ninitially conceived in the context of computer-aided diagnosis in biomedical\nimaging, it is generic and applicable to many tasks in computer vision and\nimage analysis; we illustrate the key ideas behind AFT* with the Places\ndatabase for scene interpretation in natural images.\n",
"title": "AFT*: Integrating Active Learning and Transfer Learning to Reduce Annotation Efforts"
} | null | null | [
"Statistics"
]
| null | true | null | 2195 | null | Validated | null | null |
null | {
"abstract": " Thomassen conjectured that triangle-free planar graphs have an exponential\nnumber of $3$-colorings. We show this conjecture to be equivalent to the\nfollowing statement: there exists a positive real $\\alpha$ such that whenever\n$G$ is a planar graph and $A$ is a subset of its edges whose deletion makes $G$\ntriangle-free, there exists a subset $A'$ of $A$ of size at least $\\alpha|A|$\nsuch that $G-(A\\setminus A')$ is $3$-colorable. This equivalence allows us to\nstudy restricted situations, where we can prove the statement to be true.\n",
"title": "Do triangle-free planar graphs have exponentially many 3-colorings?"
} | null | null | null | null | true | null | 2196 | null | Default | null | null |
null | {
"abstract": " This paper presents a continuous-time equilibrium model of TWAP trading and\nliquidity provision in a market with multiple strategic investors with\nheterogeneous intraday trading targets. We solve the model in closed-form and\nshow there are infinitely many equilibria. We compare the competitive\nequilibrium with different non-price-taking equilibria. In addition, we show\nintraday TWAP benchmarking reduces market liquidity relative to just terminal\ntrading targets alone. The model is computationally tractable, and we provide a\nnumber of numerical illustrations. An extension to stochastic VWAP targets is\nalso provided.\n",
"title": "Smart TWAP trading in continuous-time equilibria"
} | null | null | null | null | true | null | 2197 | null | Default | null | null |
null | {
"abstract": " The brain can display self-sustained activity (SSA), which is the persistent\nfiring of neurons in the absence of external stimuli. This spontaneous activity\nshows low neuronal firing rates and is observed in diverse in vitro and in vivo\nsituations. In this work, we study the influence of excitatory/inhibitory\nbalance, connection density, and network size on the self-sustained activity of\na neuronal network model. We build a random network of adaptive exponential\nintegrate-and-fire (AdEx) neuron models connected through inhibitory and\nexcitatory chemical synapses. The AdEx model mimics several behaviours of\nbiological neurons, such as spike initiation, adaptation, and bursting\npatterns. In an excitation/inhibition balanced state, if the mean connection\ndegree (K) is fixed, the firing rate does not depend on the network size (N),\nwhereas for fixed N, the firing rate decreases when K increases. However, for\nlarge K, SSA states can appear only for large N. We show the existence of SSA\nstates with similar behaviours to those observed in experimental recordings,\nsuch as very low and irregular neuronal firing rates, and spike-train power\nspectra with slow fluctuations, only for balanced networks of large size.\n",
"title": "Self-sustained activity in balanced networks with low firing-rate"
} | null | null | null | null | true | null | 2198 | null | Default | null | null |
null | {
"abstract": " Several important applications, such as streaming PCA and semidefinite\nprogramming, involve a large-scale positive-semidefinite (psd) matrix that is\npresented as a sequence of linear updates. Because of storage limitations, it\nmay only be possible to retain a sketch of the psd matrix. This paper develops\na new algorithm for fixed-rank psd approximation from a sketch. The approach\ncombines the Nystrom approximation with a novel mechanism for rank truncation.\nTheoretical analysis establishes that the proposed method can achieve any\nprescribed relative error in the Schatten 1-norm and that it exploits the\nspectral decay of the input matrix. Computer experiments show that the proposed\nmethod dominates alternative techniques for fixed-rank psd matrix approximation\nacross a wide range of examples.\n",
"title": "Fixed-Rank Approximation of a Positive-Semidefinite Matrix from Streaming Data"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 2199 | null | Validated | null | null |
null | {
"abstract": " Heavy-tailed errors impair the accuracy of the least squares estimate, which\ncan be spoiled by a single grossly outlying observation. As argued in the\nseminal work of Peter Huber in 1973 [{\\it Ann. Statist.} {\\bf 1} (1973)\n799--821], robust alternatives to the method of least squares are sorely\nneeded. To achieve robustness against heavy-tailed sampling distributions, we\nrevisit the Huber estimator from a new perspective by letting the tuning\nparameter involved diverge with the sample size. In this paper, we develop\nnonasymptotic concentration results for such an adaptive Huber estimator,\nnamely, the Huber estimator with the tuning parameter adapted to sample size,\ndimension, and the variance of the noise. Specifically, we obtain a\nsub-Gaussian-type deviation inequality and a nonasymptotic Bahadur\nrepresentation when noise variables only have finite second moments. The\nnonasymptotic results further yield two conventional normal approximation\nresults that are of independent interest, the Berry-Esseen inequality and\nCramér-type moderate deviation. As an important application to large-scale\nsimultaneous inference, we apply these robust normal approximation results to\nanalyze a dependence-adjusted multiple testing procedure for moderately\nheavy-tailed data. It is shown that the robust dependence-adjusted procedure\nasymptotically controls the overall false discovery proportion at the nominal\nlevel under mild moment conditions. Thorough numerical results on both\nsimulated and real datasets are also provided to back up our theory.\n",
"title": "A New Perspective on Robust $M$-Estimation: Finite Sample Theory and Applications to Dependence-Adjusted Multiple Testing"
} | null | null | null | null | true | null | 2200 | null | Default | null | null |