Dataset schema (column name: type, as rendered by the dataset viewer):
- text: null
- inputs: dict ({ "abstract", "title" })
- prediction: null
- prediction_agent: null
- annotation: list (topic labels)
- annotation_agent: null
- multi_label: bool (1 class: true)
- explanation: null
- id: string (lengths 1-5)
- metadata: null
- status: string (2 values: Default, Validated)
- event_timestamp: null
- metrics: null
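As a sketch of how records with this schema might be represented and filtered in plain Python (the two sample rows are abbreviated from the data below; the `validated_labels` helper is illustrative and not part of any real dataset API):

```python
# Each record is a dict mirroring the schema above: an "inputs" payload
# (abstract/title) plus annotation bookkeeping fields.
rows = [
    {
        "inputs": {"title": "Pitfalls and Best Practices in Algorithm Configuration"},
        "annotation": None,          # not yet labelled
        "multi_label": True,
        "id": "20201",
        "status": "Default",
    },
    {
        "inputs": {"title": "Critical behaviors in contagion dynamics"},
        "annotation": ["Physics", "Mathematics"],
        "multi_label": True,
        "id": "20202",
        "status": "Validated",       # labels confirmed by an annotator
    },
]

def validated_labels(rows):
    """Collect the topic labels attached to human-validated rows."""
    return sorted({label
                   for row in rows if row["status"] == "Validated"
                   for label in (row["annotation"] or [])})

print(validated_labels(rows))  # ['Mathematics', 'Physics']
```

Because `multi_label` is true throughout, a validated row may carry several labels at once, as in the second sample.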
{ "abstract": " Good parameter settings are crucial to achieve high performance in many areas\nof artificial intelligence (AI), such as propositional satisfiability solving,\nAI planning, scheduling, and machine learning (in particular deep learning).\nAutomated algorithm configuration methods have recently received much attention\nin the AI community since they replace tedious, irreproducible and error-prone\nmanual parameter tuning and can lead to new state-of-the-art performance.\nHowever, practical applications of algorithm configuration are prone to several\n(often subtle) pitfalls in the experimental design that can render the\nprocedure ineffective. We identify several common issues and propose best\npractices for avoiding them. As one possibility for automatically handling as\nmany of these as possible, we also propose a tool called GenericWrapper4AC.\n", "title": "Pitfalls and Best Practices in Algorithm Configuration" }
annotation: null | multi_label: true | id: 20201 | status: Default | all other fields: null
{ "abstract": " We study the critical behavior of a general contagion model where nodes are\neither active (e.g. with opinion A, or functioning) or inactive (e.g. with\nopinion B, or damaged). The transitions between these two states are determined\nby (i) spontaneous transitions independent of the neighborhood, (ii)\ntransitions induced by neighboring nodes and (iii) spontaneous reverse\ntransitions. The resulting dynamics is extremely rich including limit cycles\nand random phase switching. We derive a unifying mean-field theory.\nSpecifically, we analytically show that the critical behavior of systems whose\ndynamics is governed by processes (i-iii) can only exhibit three distinct\nregimes: (a) uncorrelated spontaneous transition dynamics (b) contact process\ndynamics and (c) cusp catastrophes. This ends a long-standing debate on the\nuniversality classes of complex contagion dynamics in mean-field and\nsubstantially deepens its mathematical understanding.\n", "title": "Critical behaviors in contagion dynamics" }
annotation: [ "Physics", "Mathematics" ] | multi_label: true | id: 20202 | status: Validated | all other fields: null
{ "abstract": " An appearance-based robot self-localization problem is considered in the\nmachine learning framework. The appearance space is composed of all possible\nimages, which can be captured by a robot's visual system under all robot\nlocalizations. Using recent manifold learning and deep learning techniques, we\npropose a new geometrically motivated solution based on training data\nconsisting of a finite set of images captured in known locations of the robot.\nThe solution includes estimation of the robot localization mapping from the\nappearance space to the robot localization space, as well as estimation of the\ninverse mapping for modeling visual image features. The latter allows solving\nthe robot localization problem as the Kalman filtering problem.\n", "title": "Machine Learning in Appearance-based Robot Self-localization" }
annotation: null | multi_label: true | id: 20203 | status: Default | all other fields: null
{ "abstract": " We currently witness the emergence of interesting new network topologies\noptimized towards the traffic matrices they serve, such as demand-aware\ndatacenter interconnects (e.g., ProjecToR) and demand-aware overlay networks\n(e.g., SplayNets). This paper introduces a formal framework and approach to\nreason about and design such topologies. We leverage a connection between the\ncommunication frequency of two nodes and the path length between them in the\nnetwork, which depends on the entropy of the communication matrix. Our main\ncontribution is a novel robust, yet sparse, family of network topologies which\nguarantee an expected path length that is proportional to the entropy of the\ncommunication patterns.\n", "title": "Towards Communication-Aware Robust Topologies" }
annotation: null | multi_label: true | id: 20204 | status: Default | all other fields: null
{ "abstract": " We propose a method to infer domain-specific models such as classifiers for\nunseen domains, from which no data are given in the training phase, without\ndomain semantic descriptors. When training and test distributions are\ndifferent, standard supervised learning methods perform poorly. Zero-shot\ndomain adaptation attempts to alleviate this problem by inferring models that\ngeneralize well to unseen domains by using training data in multiple source\ndomains. Existing methods use observed semantic descriptors characterizing\ndomains such as time information to infer the domain-specific models for the\nunseen domains. However, it cannot always be assumed that such metadata can be\nused in real-world applications. The proposed method can infer appropriate\ndomain-specific models without any semantic descriptors by introducing the\nconcept of latent domain vectors, which are latent representations for the\ndomains and are used for inferring the models. The latent domain vector for the\nunseen domain is inferred from the set of the feature vectors in the\ncorresponding domain, which is given in the testing phase. The domain-specific\nmodels consist of two components: the first is for extracting a representation\nof a feature vector to be predicted, and the second is for inferring model\nparameters given the latent domain vector. The posterior distributions of the\nlatent domain vectors and the domain-specific models are parametrized by neural\nnetworks, and are optimized by maximizing the variational lower bound using\nstochastic gradient descent. The effectiveness of the proposed method was\ndemonstrated through experiments using one regression and two classification\ntasks.\n", "title": "Zero-shot Domain Adaptation without Domain Semantic Descriptors" }
annotation: null | multi_label: true | id: 20205 | status: Default | all other fields: null
{ "abstract": " Genome-wide chromosome conformation capture techniques such as Hi-C enable\nthe generation of 3D genome contact maps and offer new pathways toward\nunderstanding the spatial organization of genome. One specific feature of the\n3D organization is known as topologically associating domains (TADs), which are\ndensely interacting, contiguous chromatin regions playing important roles in\nregulating gene expression. A few algorithms have been proposed to detect TADs.\nIn particular, the structure of Hi-C data naturally inspires application of\ncommunity detection methods. However, one of the drawbacks of community\ndetection is that most methods take exchangeability of the nodes in the network\nfor granted; whereas the nodes in this case, i.e. the positions on the\nchromosomes, are not exchangeable. We propose a network model for detecting\nTADs using Hi-C data that takes into account this non-exchangeability. In\naddition, our model explicitly makes use of cell-type specific CTCF binding\nsites as biological covariates and can be used to identify conserved TADs\nacross multiple cell types. The model leads to a likelihood objective that can\nbe efficiently optimized via relaxation. We also prove that when suitably\ninitialized, this model finds the underlying TAD structure with high\nprobability. Using simulated data, we show the advantages of our method and the\ncaveats of popular community detection methods, such as spectral clustering, in\nthis application. Applying our method to real Hi-C data, we demonstrate the\ndomains identified have desirable epigenetic features and compare them across\ndifferent cell types. The code is available upon request.\n", "title": "Network modelling of topological domains using Hi-C data" }
annotation: null | multi_label: true | id: 20206 | status: Default | all other fields: null
{ "abstract": " It is proposed and substantiated that an extraterrestrial object of the\napproximate size and mass of Planet Mars, impacting the Earth in an oblique\nangle along an approximately NE-SW route (with respect to the current\norientation of the North America continent) around 750 million years ago (750\nMa), is likely to be the direct cause of a chain of events which led to the\nrifting of the Rodinia supercontinent and the severing of the foundation of the\nColorado Plateau from its surrounding craton.\nIt is further argued that the impactor most likely originated as a rouge\nexoplanet produced during one of the past crossings of our Solar System through\nthe Galactic spiral arms in its orbital motion around the center of the Milky\nWay Galaxy. Recent work has shown that the sites of galactic spiral arms are\nlocations of density-wave collisionless shocks. The perturbations from such\nshock are known lead to the formation of massive stars, which evolve quickly\nand die as supernovae. The blastwaves from supernova explosions, in addition to\nthe collisionless shocks at the spiral arms, can perturb the orbits of the\nstreaming disk matter, occasionally producing rogue exoplanets that can reach\nthe inner confines of our Solar System. The similarity between the period of\nspiral-arm crossings of our Solar System to the period of major extinction\nevents in the Phanerozoic Eon of the Earth's history, as well as to the period\nof the supercontinent cycle (the so-called Wilson Cycle), indicates that the\nglobal environment of the Milky Way Galaxy may have played a major role in\ninitiating Earth's past tectonic activities.\n", "title": "On a Possible Giant Impact Origin for the Colorado Plateau" }
annotation: null | multi_label: true | id: 20207 | status: Default | all other fields: null
{ "abstract": " We show how to reverse a while language extended with blocks, local\nvariables, procedures and the interleaving parallel composition. Annotation is\ndefined along with a set of operational semantics capable of storing necessary\nreversal information, and identifiers are introduced to capture the\ninterleaving order of an execution. Inversion is defined with a set of\noperational semantics that use saved information to undo an execution. We prove\nthat annotation does not alter the behaviour of the original program, and that\ninversion correctly restores the initial program state.\n", "title": "Reversing Parallel Programs with Blocks and Procedures" }
annotation: null | multi_label: true | id: 20208 | status: Default | all other fields: null
{ "abstract": " Hubble Space Telescope photometry from the ACS/WFC and WFPC2 cameras is used\nto detect and measure globular clusters (GCs) in the central region of the rich\nPerseus cluster of galaxies. A detectable population of Intragalactic GCs is\nfound extending out to at least 500 kpc from the cluster center. These objects\ndisplay luminosity and color (metallicity) distributions that are entirely\nnormal for GC populations. Extrapolating from the limited spatial coverage of\nthe HST fields, we estimate very roughly that the entire Perseus cluster should\ncontain ~50000 or more IGCs, but a targetted wide-field survey will be needed\nfor a more definitive answer. Separate brief results are presented for the rich\nGC systems in NGC 1272 and NGC 1275, the two largest Perseus ellipticals. For\nNGC 1272 we find a specific frequency S_N = 8, while for the central giant NGC\n1275, S_N ~ 12. In both these giant galaxies, the GC colors are well matched by\nbimodal distributions, with the majority in the blue (metal-poor) component.\nThis preliminary study suggests that Perseus is a prime target for a more\ncomprehensive deep imaging survey of Intragalactic GCs.\n", "title": "Detection of the Stellar Intracluster Medium in Perseus (Abell 426)" }
annotation: null | multi_label: true | id: 20209 | status: Default | all other fields: null
{ "abstract": " Phylogenetic networks are a generalization of evolutionary trees that are\nused by biologists to represent the evolution of organisms which have undergone\nreticulate evolution. Essentially, a phylogenetic network is a directed acyclic\ngraph having a unique root in which the leaves are labelled by a given set of\nspecies. Recently, some approaches have been developed to construct\nphylogenetic networks from collections of networks on 2- and 3-leaved networks,\nwhich are known as binets and trinets, respectively. Here we study in more\ndepth properties of collections of binets, one of the simplest possible types\nof networks into which a phylogenetic network can be decomposed. More\nspecifically, we show that if a collection of level-1 binets is compatible with\nsome binary network, then it is also compatible with a binary level-1 network.\nOur proofs are based on useful structural results concerning lowest stable\nancestors in networks. In addition, we show that, although the binets do not\ndetermine the topology of the network, they do determine the number of\nreticulations in the network, which is one of its most important parameters. We\nalso consider algorithmic questions concerning binets. We show that deciding\nwhether an arbitrary set of binets is compatible with some network is at least\nas hard as the well-known Graph Isomorphism problem. However, if we restrict to\nlevel-1 binets, it is possible to decide in polynomial time whether there\nexists a binary network that displays all the binets. We also show that to find\na network that displays a maximum number of the binets is NP-hard, but that\nthere exists a simple polynomial-time 1/3-approximation algorithm for this\nproblem. It is hoped that these results will eventually assist in the\ndevelopment of new methods for constructing phylogenetic networks from\ncollections of smaller networks.\n", "title": "Binets: fundamental building blocks for phylogenetic networks" }
annotation: null | multi_label: true | id: 20210 | status: Default | all other fields: null
{ "abstract": " $G$-deformability of maps into projective space is characterised by the\nexistence of certain Lie algebra valued 1-forms. This characterisation gives a\nunified way to obtain well known results regarding deformability in different\ngeometries.\n", "title": "G-Deformations of maps into projective space" }
annotation: null | multi_label: true | id: 20211 | status: Default | all other fields: null
{ "abstract": " Atmospheric modeling of low-gravity (VL-G) young brown dwarfs remains a\nchallenge. The presence of very thick clouds has been suggested because of\ntheir extremely red near-infrared (NIR) spectra, but no cloud models provide a\ngood fit to the data with a radius compatible with evolutionary models for\nthese objects. We show that cloudless atmospheres assuming a temperature\ngradient reduction caused by fingering convection provides a very good model to\nmatch the observed VL-G NIR spectra. The sequence of extremely red colors in\nthe NIR for atmospheres with effective temperature from ~2000 K down to ~1200 K\nis very well reproduced with predicted radii typical of young low-gravity\nobjects. Future observations with NIRSPEC and MIRI on the James Webb Space\nTelescope (JWST) will provide more constrains in the mid-infrared, helping to\nconfirm/refute whether or not the NIR reddening is caused by fingering\nconvection. We suggest that the presence/absence of clouds will be directly\ndetermined by the silicate absorption features that can be observed with MIRI.\nJWST will therefore be able to better characterize the atmosphere of these hot\nyoung brown dwarfs and their low-gravity exoplanet analogues.\n", "title": "Cloudless atmospheres for young low-gravity substellar objects" }
annotation: null | multi_label: true | id: 20212 | status: Default | all other fields: null
{ "abstract": " Ongoing innovations in recurrent neural network architectures have provided a\nsteady influx of apparently state-of-the-art results on language modelling\nbenchmarks. However, these have been evaluated using differing code bases and\nlimited computational resources, which represent uncontrolled sources of\nexperimental variation. We reevaluate several popular architectures and\nregularisation methods with large-scale automatic black-box hyperparameter\ntuning and arrive at the somewhat surprising conclusion that standard LSTM\narchitectures, when properly regularised, outperform more recent models. We\nestablish a new state of the art on the Penn Treebank and Wikitext-2 corpora,\nas well as strong baselines on the Hutter Prize dataset.\n", "title": "On the State of the Art of Evaluation in Neural Language Models" }
annotation: [ "Computer Science" ] | multi_label: true | id: 20213 | status: Validated | all other fields: null
{ "abstract": " Separating audio mixtures into individual instrument tracks has been a long\nstanding challenging task. We introduce a novel weakly supervised audio source\nseparation approach based on deep adversarial learning. Specifically, our loss\nfunction adopts the Wasserstein distance which directly measures the\ndistribution distance between the separated sources and the real sources for\neach individual source. Moreover, a global regularization term is added to\nfulfill the spectrum energy preservation property regardless separation. Unlike\nstate-of-the-art weakly supervised models which often involve deliberately\ndevised constraints or careful model selection, our approach need little prior\nmodel specification on the data, and can be straightforwardly learned in an\nend-to-end fashion. We show that the proposed method performs competitively on\npublic benchmark against state-of-the-art weakly supervised methods.\n", "title": "Weakly Supervised Audio Source Separation via Spectrum Energy Preserved Wasserstein Learning" }
annotation: [ "Computer Science" ] | multi_label: true | id: 20214 | status: Validated | all other fields: null
{ "abstract": " Let $d$ be a positive integer and $x$ a real number. Let $A_{d, x}$ be a\n$d\\times 2d$ matrix with its entries $$ a_{i,j}=\\left\\{ \\begin{array}{ll} x\\ \\\n& \\mbox{for} \\ 1\\leqslant j\\leqslant d+1-i, 1\\ \\ & \\mbox{for} \\ d+2-i\\leqslant\nj\\leqslant d+i, 0\\ \\ & \\mbox{for} \\ d+1+i\\leqslant j\\leqslant 2d. \\end{array}\n\\right. $$ Further, let $R_d$ be a set of sequences of integers as follows:\n$$R_d=\\{(\\rho_1, \\rho_2,\\ldots, \\rho_d)|1\\leqslant \\rho_i\\leqslant d+i,\n1\\leqslant i \\leqslant d,\\ \\mbox{and}\\ \\rho_r\\neq \\rho_s\\ \\mbox{for}\\ r\\neq\ns\\}.$$ and define $$\\Omega_d(x)=\\sum_{\\rho\\in R_d}a_{1,\\rho_1}a_{2,\n\\rho_2}\\ldots a_{d,\\rho_d}.$$ In order to give a better bound on the size of\nspheres of permutation codes under the Chebychev distance, Kl{\\o}ve introduced\nthe above function and conjectured that $$\\Omega_d(x)=\\sum_{m=0}^d{d\\choose\nm}(m+1)^d(x-1)^{d-m}.$$ In this paper, we settle down this conjecture\npositively.\n", "title": "Proof of a conjecture of Kløve on permutation codes under the Chebychev distance" }
annotation: null | multi_label: true | id: 20215 | status: Default | all other fields: null
{ "abstract": " Triggered star formation around HII regions could be an important process.\nThe Galactic HII region RCW 79 is a prototypical object for triggered high-mass\nstar formation. We take advantage of Herschel data from the surveys HOBYS,\n\"Evolution of Interstellar Dust\", and Hi-Gal to extract compact sources in this\nregion, complemented with archival 2MASS, Spitzer, and WISE data to determine\nthe physical parameters of the sources (e.g., envelope mass, dust temperature,\nand luminosity) by fitting the spectral energy distribution. We obtained a\nsample of 50 compact sources, 96% of which are situated in the\nionization-compressed layer of cold and dense gas that is characterized by the\ncolumn density PDF with a double-peaked lognormal distribution. The 50 sources\nhave sizes of 0.1-0.4 pc with a typical value of 0.2 pc, temperatures of 11-26\nK, envelope masses of 6-760 $M_\\odot$, densities of 0.1-44 $\\times$ $10^5$\ncm$^{-3}$, and luminosities of 19-12712 $L_\\odot$. The sources are classified\ninto 16 class 0, 19 intermediate, and 15 class I objects. Their distribution\nfollows the evolutionary tracks in the diagram of bolometric luminosity versus\nenvelope mass (Lbol-Menv) well. A mass threshold of 140 $M_\\odot$, determined\nfrom the Lbol-Menv diagram, yields 12 candidate massive dense cores that may\nform high-mass stars. The core formation efficiency (CFE) for the 8 massive\ncondensations shows an increasing trend of the CFE with density. This suggests\nthat the denser the condensation, the higher the fraction of its mass\ntransformation into dense cores, as previously observed in other high-mass\nstar-forming regions.\n", "title": "Herschel observations of the Galactic HII region RCW 79" }
annotation: null | multi_label: true | id: 20216 | status: Default | all other fields: null
{ "abstract": " The methodology of Software-Defined Robotics hierarchical-based and\nstand-alone framework can be designed and implemented to program and control\ndifferent sets of robots, regardless of their manufacturers' parameters and\nspecifications, with unified commands and communications. This framework\napproach will increase the capability of (re)programming a specific group of\nrobots during the runtime without affecting the others as desired in the\ncritical missions and industrial operations, expand the shared bandwidth,\nenhance the reusability of code, leverage the computational processing power,\ndecrease the unnecessary analyses of vast supplemental electrical components\nfor each robot, as well as get advantages of the most state-of-the-art\nindustrial trends in the cloud-based computing, Virtual Machines (VM), and\nRobot-as-a-Service (RaaS) technologies.\n", "title": "Software-Defined Robotics -- Idea & Approach" }
annotation: [ "Computer Science" ] | multi_label: true | id: 20217 | status: Validated | all other fields: null
{ "abstract": " We consider a curved Sitnikov problem, in which an infinitesimal particle\nmoves on a circle under the gravitational influence of two equal masses in\nKeplerian motion within a plane perpendicular to that circle. There are two\nequilibrium points, whose stability we are studying. We show that one of the\nequilibrium points undergoes stability interchanges as the semi-major axis of\nthe Keplerian ellipses approaches the diameter of that circle. To derive this\nresult, we first formulate and prove a general theorem on stability\ninterchanges, and then we apply it to our model. The motivation for our model\nresides with the $n$-body problem in spaces of constant curvature.\n", "title": "Stability interchanges in a curved Sitnikov problem" }
annotation: null | multi_label: true | id: 20218 | status: Default | all other fields: null
{ "abstract": " We define Hardy spaces $H^p(\\Omega_\\pm)$ on half-strip domain~$\\Omega_+$ and\n$\\Omega_-= \\mathbb{C}\\setminus\\overline{\\Omega_+}$, where $0<p<\\infty$, and\nprove that functions in $H^p(\\Omega_\\pm)$ has non-tangential boundary limit\na.e. on $\\Gamma$, the common boundary of $\\Omega_\\pm$. We then prove that\nCauchy integral of functions in $L^p(\\Gamma)$ are in $H^p(\\Omega_\\pm)$, where\n$1<p<\\infty$, that is, Cauchy transform is bounded. Besides, if $1\\leqslant\np<\\infty$, then $H^p(\\Omega_\\pm)$ functions are the Cauchy integral of their\nnon-tangential boundary limits. We also establish an isomorphism between\n$H^p(\\Omega_\\pm)$ and $H^p(\\mathbb{C}_\\pm)$, the classical Hardy spaces over\nupper and lower half complex planes.\n", "title": "Hardy Spaces over Half-strip Domains" }
annotation: null | multi_label: true | id: 20219 | status: Default | all other fields: null
{ "abstract": " In recent years, numerous vision and learning tasks have been (re)formulated\nas nonconvex and nonsmooth programmings(NNPs). Although some algorithms have\nbeen proposed for particular problems, designing fast and flexible optimization\nschemes with theoretical guarantee is a challenging task for general NNPs. It\nhas been investigated that performing inexact inner iterations often benefit to\nspecial applications case by case, but their convergence behaviors are still\nunclear. Motivated by these practical experiences, this paper designs a novel\nalgorithmic framework, named inexact proximal alternating direction method\n(IPAD) for solving general NNPs. We demonstrate that any numerical algorithms\ncan be incorporated into IPAD for solving subproblems and the convergence of\nthe resulting hybrid schemes can be consistently guaranteed by a series of\nsimple error conditions. Beyond the guarantee in theory, numerical experiments\non both synthesized and real-world data further demonstrate the superiority and\nflexibility of our IPAD framework for practical use.\n", "title": "An Optimization Framework with Flexible Inexact Inner Iterations for Nonconvex and Nonsmooth Programming" }
annotation: [ "Computer Science", "Mathematics" ] | multi_label: true | id: 20220 | status: Validated | all other fields: null
{ "abstract": " The control task of tracking a reference pointing direction (the attitude\nabout the pointing direction is irrelevant) while obtaining a desired angular\nvelocity (PDAV) around the pointing direction using geometric techniques is\naddressed here. Existing geometric controllers developed on the two-sphere only\naddress the tracking of a reference pointing direction while driving the\nangular velocity about the pointing direction to zero. In this paper a tracking\ncontroller on the two-sphere, able to address the PDAV control task, is\ndeveloped globally in a geometric frame work, to avoid problems related to\nother attitude representations such as unwinding (quaternions) or singularities\n(Euler angles). An attitude error function is constructed resulting in a\ncontrol system with desired tracking performance for rotational maneuvers with\nlarge initial attitude/angular velocity errors and the ability to negotiate\nbounded modeling inaccuracies. The tracking ability of the developed control\nsystem is evaluated by comparing its performance with an existing geometric\ncontroller on the two-sphere and by numerical simulations, showing improved\nperformance for large initial attitude errors, smooth transitions between\ndesired angular velocities and the ability to negotiate bounded modeling\ninaccuracies.\n", "title": "Attitude and angular velocity tracking for a rigid body using geometric methods on the two-sphere" }
annotation: null | multi_label: true | id: 20221 | status: Default | all other fields: null
{ "abstract": " Computer programs do not always work as expected. In fact, ominous warnings\nabout the desperate state of the software industry continue to be released with\nalmost ritualistic regularity. In this paper, we look at the 60 years history\nof programming and at the different practical methods that software community\ndeveloped to live with programming errors. We do so by observing a class of\nstudents discussing different approaches to programming errors. While learning\nabout the different methods for dealing with errors, we uncover basic\nassumptions that proponents of different paradigms follow. We learn about the\nmathematical attempt to eliminate errors through formal methods, scientific\nmethod based on testing, a way of building reliable systems through engineering\nmethods, as well as an artistic approach to live coding that accepts errors as\na creative inspiration. This way, we can explore the differences and\nsimilarities among the different paradigms. By inviting proponents of different\nmethods into a single discussion, we hope to open potential for new thinking\nabout errors. When should we use which of the approaches? And what can software\ndevelopment learn from mathematics, science, engineering and art? When\nprogramming or studying programming, we are often enclosed in small communities\nand we take our basic assumptions for granted. Through the discussion in this\npaper, we attempt to map the large and rich space of programming ideas and\nprovide reference points for exploring, perhaps foreign, ideas that can\nchallenge some of our assumptions.\n", "title": "Miscomputation in software: Learning to live with errors" }
annotation: null | multi_label: true | id: 20222 | status: Default | all other fields: null
{ "abstract": " A vector composition of a vector $\\mathbf{\\ell}$ is a matrix $\\mathbf{A}$\nwhose rows sum to $\\mathbf{\\ell}$. We define a weighted vector composition as a\nvector composition in which the column values of $\\mathbf{A}$ may appear in\ndifferent colors. We study vector compositions from different viewpoints: (1)\nWe show how they are related to sums of random vectors and (2) how they allow\nto derive formulas for partial derivatives of composite functions. (3) We study\ncongruence properties of the number of weighted vector compositions, for fixed\nand arbitrary number of parts, many of which are analogous to those of ordinary\nbinomial coefficients and related quantities. Via the Central Limit Theorem and\ntheir multivariate generating functions, (4) we also investigate the asymptotic\nbehavior of several special cases of numbers of weighted vector compositions.\nFinally, (5) we conjecture an extension of a primality criterion due to Mann\nand Shanks in the context of weighted vector compositions.\n", "title": "The Combinatorics of Weighted Vector Compositions" }
annotation: null | multi_label: true | id: 20223 | status: Default | all other fields: null
{ "abstract": " This paper shows that a simple baseline based on a Bag-of-Words (BoW)\nrepresentation learns surprisingly good knowledge graph embeddings. By casting\nknowledge base completion and question answering as supervised classification\nproblems, we observe that modeling co-occurences of entities and relations\nleads to state-of-the-art performance with a training time of a few minutes\nusing the open sourced library fastText.\n", "title": "Fast Linear Model for Knowledge Graph Embeddings" }
annotation: null | multi_label: true | id: 20224 | status: Default | all other fields: null
{ "abstract": " Deep learning is having a profound impact in many fields, especially those\nthat involve some form of image processing. Deep neural networks excel in\nturning an input image into a set of high-level features. On the other hand,\ntomography deals with the inverse problem of recreating an image from a number\nof projections. In plasma diagnostics, tomography aims at reconstructing the\ncross-section of the plasma from radiation measurements. This reconstruction\ncan be computed with neural networks. However, previous attempts have focused\non learning a parametric model of the plasma profile. In this work, we use a\ndeep neural network to produce a full, pixel-by-pixel reconstruction of the\nplasma profile. For this purpose, we use the overview bolometer system at JET,\nand we introduce an up-convolutional network that has been trained and tested\non a large set of sample tomograms. We show that this network is able to\nreproduce existing reconstructions with a high level of accuracy, as measured\nby several metrics.\n", "title": "Deep learning for plasma tomography using the bolometer system at JET" }
annotation: [ "Physics", "Statistics" ] | multi_label: true | id: 20225 | status: Validated | all other fields: null
{ "abstract": " We use publicly available data for the Millennium Simulation to explore the\nimplications of the recent detection of assembly bias and splashback signatures\nin a large sample of galaxy clusters. These were identified in the SDSS/DR8\nphotometric data by the redMaPPer algorithm and split into high- and\nlow-concentration subsamples based on the projected positions of cluster\nmembers. We use simplified versions of these procedures to build cluster\nsamples of similar size from the simulation data. These match the observed\nsamples quite well and show similar assembly bias and splashback signals.\nPrevious theoretical work has found the logarithmic slope of halo density\nprofiles to have a well-defined minimum whose depth decreases and whose radius\nincreases with halo concentration. Projected profiles for the observed and\nsimulated cluster samples show trends with concentration which are opposite to\nthese predictions. In addition, for high-concentration clusters the minimum\nslope occurs at significantly smaller radius than predicted. We show that these\ndiscrepancies all reflect confusion between splashback features and features\nimposed on the profiles by the cluster identification and concentration\nestimation procedures. The strong apparent assembly bias is not reflected in\nthe three-dimensional distribution of matter around clusters. Rather it is a\nconsequence of the preferential contamination of low-concentration clusters by\nforeground or background groups.\n", "title": "Assembly Bias and Splashback in Galaxy Clusters" }
null
null
[ "Physics" ]
null
true
null
20226
null
Validated
null
null
null
{ "abstract": " Support vector machines (SVM) and other kernel techniques represent a family\nof powerful statistical classification methods with high accuracy and broad\napplicability. Because they use all or a significant portion of the training\ndata, however, they can be slow, especially for large problems. Piecewise\nlinear classifiers are similarly versatile, yet have the additional advantages\nof simplicity, ease of interpretation and, if the number of component linear\nclassifiers is not too large, speed. Here we show how a simple, piecewise\nlinear classifier can be trained from a kernel-based classifier in order to\nimprove the classification speed. The method works by finding the root of the\ndifference in conditional probabilities between pairs of opposite classes to\nbuild up a representation of the decision boundary. When tested on 17 different\ndatasets, it succeeded in improving the classification speed of a SVM for 9 of\nthem by factors as high as 88 times or more. The method is best suited to\nproblems with continuum features data and smooth probability functions. Because\nthe component linear classifiers are built up individually from an existing\nclassifier, rather than through a simultaneous optimization procedure, the\nclassifier is also fast to train.\n", "title": "Accelerating Kernel Classifiers Through Borders Mapping" }
null
null
null
null
true
null
20227
null
Default
null
null
null
{ "abstract": " This paper considers the problem of approximating a density when it can be\nevaluated up to a normalizing constant at a finite number of points. This\ndensity approximation problem is ubiquitous in machine learning, such as\napproximating a posterior density for Bayesian inference and estimating an\noptimal density for importance sampling. Approximating the density with a\nparametric model can be cast as a model selection problem. This problem cannot\nbe addressed with traditional approaches that maximize the (marginal)\nlikelihood of a model, for example, using the Akaike information criterion\n(AIC) or Bayesian information criterion (BIC). We instead aim to minimize the\ncross-entropy that gauges the deviation of a parametric model from the target\ndensity. We propose a novel information criterion called the cross-entropy\ninformation criterion (CIC) and prove that the CIC is an asymptotically\nunbiased estimator of the cross-entropy (up to a multiplicative constant) under\nsome regularity conditions. We propose an iterative method to approximate the\ntarget density by minimizing the CIC. We demonstrate that the proposed method\nselects a parametric model that well approximates the target density.\n", "title": "Information Criterion for Minimum Cross-Entropy Model Selection" }
null
null
null
null
true
null
20228
null
Default
null
null
null
{ "abstract": " In this study we follow Grossmann and Lohse, Phys. Rev. Lett. 86 (2001), who\nderived various scalings regimes for the dependence of the Nusselt number $Nu$\nand the Reynolds number $Re$ on the Rayleigh number $Ra$ and the Prandtl number\n$Pr$. We focus on theoretical arguments as well as on numerical simulations for\nthe case of large-$Pr$ natural thermal convection. Based on an analysis of\nself-similarity of the boundary layer equations, we derive that in this case\nthe limiting large-$Pr$ boundary-layer dominated regime is I$_\\infty^<$,\nintroduced and defined in [1], with the scaling relations $Nu\\sim\nPr^0\\,Ra^{1/3}$ and $Re\\sim Pr^{-1}\\,Ra^{2/3}$. Our direct numerical\nsimulations for $Ra$ from $10^4$ to $10^9$ and $Pr$ from 0.1 to 200 show that\nthe regime I$_\\infty^<$ is almost indistinguishable from the regime\nIII$_\\infty$, where the kinetic dissipation is bulk-dominated. With increasing\n$Ra$, the scaling relations undergo a transition to those in IV$_u$ of\nreference [1], where the thermal dissipation is determined by its bulk\ncontribution.\n", "title": "Scaling relations in large-Prandtl-number natural thermal convection" }
null
null
null
null
true
null
20229
null
Default
null
null
null
{ "abstract": " The renormalization method based on the Taylor expansion for asymptotic
analysis of differential equations is generalized to difference equations. The
proposed renormalization method is based on the Newton-Maclaurin expansion.
Several basic theorems on the renormalization method are proven. Some
interesting applications are given, including asymptotic solutions of the quantum
anharmonic oscillator and the discrete boundary layer, and the reductions and invariant
manifolds of some discrete dynamical systems. Furthermore, the homotopy
renormalization method based on the Newton-Maclaurin expansion is proposed and
applied to difference equations that do not include a small parameter.\n", "title": "The renormalization method from continuous to discrete dynamical systems: asymptotic solutions, reductions and invariant manifolds" }
null
null
null
null
true
null
20230
null
Default
null
null
null
{ "abstract": " The numerical approximation of 2D elasticity problems is considered, in the\nframework of the small strain theory and in connection with the mixed\nHellinger-Reissner variational formulation. A low-order Virtual Element Method\n(VEM) with a-priori symmetric stresses is proposed. Several numerical tests are\nprovided, along with a rigorous stability and convergence analysis.\n", "title": "A Stress/Displacement Virtual Element Method for Plane Elasticity Problems" }
null
null
null
null
true
null
20231
null
Default
null
null
null
{ "abstract": " In this paper, we consider the problem of estimating the lead-lag parameter
between two stochastic processes driven by fractional Brownian motions (fBMs)
with Hurst parameter greater than 1/2. First we propose a lead-lag model
between two stochastic processes involving fBMs, and then construct a
consistent estimator of the lead-lag parameter with a possible convergence rate.
Our estimator has the following two features. Firstly, we can construct the
lead-lag estimator without using the Hurst parameters of the underlying fBMs.
Secondly, our estimator can deal with some non-synchronous and irregular
observations. We explicitly calculate the possible convergence rate when the
observation times are (1) synchronous and equidistant, and (2) given by the
Poisson sampling scheme. We also present numerical simulations of our results
using the R package YUIMA.\n", "title": "Estimation of the lead-lag parameter between two stochastic processes driven by fractional Brownian motions" }
null
null
null
null
true
null
20232
null
Default
null
null
null
{ "abstract": " A polymer model given in terms of beads, interacting through Hookean springs\nand hydrodynamic forces, is studied. Brownian dynamics description of this\nbead-spring polymer model is extended to multiple resolutions. Using this\nmultiscale approach, a modeller can efficiently look at different regions of\nthe polymer in different spatial and temporal resolutions with scalings given\nfor the number of beads, statistical segment length and bead radius in order to\nmaintain macro-scale properties of the polymer filament. The Boltzmann\ndistribution of a Gaussian chain for differing statistical segment lengths\ngives a Langevin equation for the multi-resolution model with a mobility tensor\nfor different bead sizes. Using the pre-averaging approximation, the\ntranslational diffusion coefficient is obtained as a function of the inverse of\na matrix and then in closed form in the long-chain limit. This is then\nconfirmed with numerical experiments.\n", "title": "Multi-resolution polymer Brownian dynamics with hydrodynamic interactions" }
null
null
null
null
true
null
20233
null
Default
null
null
null
{ "abstract": " Distributed learning is an effective way to analyze big data. In distributed\nregression, a typical approach is to divide the big data into multiple blocks,\napply a base regression algorithm on each of them, and then simply average the\noutput functions learnt from these blocks. Since the average process will\ndecrease the variance, not the bias, bias correction is expected to improve the\nlearning performance if the base regression algorithm is a biased one.\nRegularization kernel network is an effective and widely used method for\nnonlinear regression analysis. In this paper we will investigate a bias\ncorrected version of regularization kernel network. We derive the error bounds\nwhen it is applied to a single data set and when it is applied as a base\nalgorithm in distributed regression. We show that, under certain appropriate\nconditions, the optimal learning rates can be reached in both situations.\n", "title": "Learning Theory of Distributed Regression with Bias Corrected Regularization Kernel Network" }
null
null
null
null
true
null
20234
null
Default
null
null
null
{ "abstract": " Kriging or Gaussian Process Regression is applied in many fields as a\nnon-linear regression model as well as a surrogate model in the field of\nevolutionary computation. However, the computational and space complexity of\nKriging, that is cubic and quadratic in the number of data points respectively,\nbecomes a major bottleneck with more and more data available nowadays. In this\npaper, we propose a general methodology for the complexity reduction, called\ncluster Kriging, where the whole data set is partitioned into smaller clusters\nand multiple Kriging models are built on top of them. In addition, four Kriging\napproximation algorithms are proposed as candidate algorithms within the new\nframework. Each of these algorithms can be applied to much larger data sets\nwhile maintaining the advantages and power of Kriging. The proposed algorithms\nare explained in detail and compared empirically against a broad set of\nexisting state-of-the-art Kriging approximation methods on a well-defined\ntesting framework. According to the empirical study, the proposed algorithms\nconsistently outperform the existing algorithms. Moreover, some practical\nsuggestions are provided for using the proposed algorithms.\n", "title": "Cluster-based Kriging Approximation Algorithms for Complexity Reduction" }
null
null
null
null
true
null
20235
null
Default
null
null
null
{ "abstract": " Over the past three years it has become evident that fake news is a danger to\ndemocracy. However, until now there has been no clear understanding of how to\ndefine fake news, much less how to model it. This paper addresses both these\nissues. A definition of fake news is given, and two approaches for the\nmodelling of fake news and its impact in elections and referendums are\nintroduced. The first approach, based on the idea of a representative voter, is\nshown to be suitable to obtain a qualitative understanding of phenomena\nassociated with fake news at a macroscopic level. The second approach, based on\nthe idea of an election microstructure, describes the collective behaviour of\nthe electorate by modelling the preferences of individual voters. It is shown\nthrough a simulation study that the mere knowledge that pieces of fake news may\nbe in circulation goes a long way towards mitigating the impact of fake news.\n", "title": "How to model fake news" }
null
null
null
null
true
null
20236
null
Default
null
null
null
{ "abstract": " Deep learning is the state-of-the-art in fields such as visual object
recognition and speech recognition. This learning uses a large number of layers
and a huge number of units and connections. Therefore, overfitting is a serious
problem with it, and dropout, which is a kind of regularization tool, is
used. However, in online learning, the effect of dropout is not well known.
This paper presents our investigation on the effect of dropout in online
learning. We analyzed the effect of dropout on convergence speed near the
singular point. Our results indicated that dropout is effective in online
learning. Dropout tends to avoid the singular point, improving convergence speed near
that point.\n", "title": "Analysis of Dropout in Online Learning" }
null
null
null
null
true
null
20237
null
Default
null
null
null
{ "abstract": " The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE)\nChallenge compares the performance of algorithms at predicting future evolution\nof individuals at risk of Alzheimer's disease. TADPOLE Challenge participants\ntrain their models and algorithms on historical data from the Alzheimer's\nDisease Neuroimaging Initiative (ADNI) study or any other datasets to which\nthey have access. Participants are then required to make monthly forecasts over\na period of 5 years from January 2018, of three key outcomes for ADNI-3\nrollover participants: clinical diagnosis, Alzheimer's Disease Assessment Scale\nCognitive Subdomain (ADAS-Cog13), and total volume of the ventricles. These\nindividual forecasts are later compared with the corresponding future\nmeasurements in ADNI-3 (obtained after the TADPOLE submission deadline). The\nfirst submission phase of TADPOLE was open for prize-eligible submissions\nbetween 15 June and 15 November 2017. The submission system remains open via\nthe website: this https URL, although since 15 November\n2017 submissions are not eligible for the first round of prizes. This paper\ndescribes the design of the TADPOLE Challenge.\n", "title": "TADPOLE Challenge: Prediction of Longitudinal Evolution in Alzheimer's Disease" }
null
null
null
null
true
null
20238
null
Default
null
null
null
{ "abstract": " The acquisition of Magnetic Resonance Imaging (MRI) is inherently slow.
Inspired by recent advances in deep learning, we propose a framework for
reconstructing MR images from undersampled data using a deep cascade of
convolutional neural networks to accelerate the data acquisition process. We
show that for Cartesian undersampling of 2D cardiac MR images, the proposed
method outperforms the state-of-the-art compressed sensing approaches, such as
dictionary learning-based MRI (DLMRI) reconstruction, in terms of
reconstruction error, perceptual quality and reconstruction speed for both
3-fold and 6-fold undersampling. Compared to DLMRI, the proposed method produces
approximately half the error, preserving
anatomical structures more faithfully. Using our method, each image can be
reconstructed in 23 ms, which is fast enough to enable real-time applications.\n", "title": "A Deep Cascade of Convolutional Neural Networks for MR Image Reconstruction" }
null
null
null
null
true
null
20239
null
Default
null
null
null
{ "abstract": " We present a state-of-the-art end-to-end Automatic Speech Recognition (ASR)\nmodel. We learn to listen and write characters with a joint Connectionist\nTemporal Classification (CTC) and attention-based encoder-decoder network. The\nencoder is a deep Convolutional Neural Network (CNN) based on the VGG network.\nThe CTC network sits on top of the encoder and is jointly trained with the\nattention-based decoder. During the beam search process, we combine the CTC\npredictions, the attention-based decoder predictions and a separately trained\nLSTM language model. We achieve a 5-10\\% error reduction compared to prior\nsystems on spontaneous Japanese and Chinese speech, and our end-to-end model\nbeats out traditional hybrid ASR systems.\n", "title": "Advances in Joint CTC-Attention based End-to-End Speech Recognition with a Deep CNN Encoder and RNN-LM" }
null
null
null
null
true
null
20240
null
Default
null
null
null
{ "abstract": " This study focuses on the social structure and interpersonal dynamics of an
afterschool math club for middle schoolers. Using social network analysis, two
networks were formed and analyzed: the network of friendship relationships and
the network of working relationships. The interconnections and correlations
between friendship relationships, working relationships, and student opinion
surveys are studied. A core working group of talented students emerged from the
network of working relations. This group acted as a central go-between for other
members in the club. This core working group expanded into the largest friendship
group in the friendship network. Although there were working isolates, they
were not found to be socially isolated. Students who were less popular tended
to report a greater favorable impact from club participation. Implications for
the study of the social structure of afterschool STEM clubs and classrooms are
discussed.\n", "title": "The Social and Work Structure of an Afterschool Math Club" }
null
null
null
null
true
null
20241
null
Default
null
null
null
{ "abstract": " It is well-known that the Harish-Chandra transform, $f\\mapsto\\mathcal{H}f,$\nis a topological isomorphism of the spherical (Schwartz) convolution algebra\n$\\mathcal{C}^{p}(G//K)$ (where $K$ is a maximal compact subgroup of any\narbitrarily chosen group $G$ in the Harish-Chandra class and $0<p\\leq2$) onto\nthe (Schwartz) multiplication algebra\n$\\bar{\\mathcal{Z}}({\\mathfrak{F}}^{\\epsilon})$ (of $\\mathfrak{w}-$invariant\nmembers of $\\mathcal{Z}({\\mathfrak{F}}^{\\epsilon}),$ with $\\epsilon=(2/p)-1$).\nThe same cannot however be said of the full Schwartz convolution algebra\n$\\mathcal{C}^{p}(G),$ except for few specific examples of groups (notably\n$G=SL(2,\\mathbb{R})$) and for some notable values of $p$ (with restrictions on\n$G$ and/or on $\\mathcal{C}^{p}(G)$). Nevertheless the full Harish-Chandra\nPlancherel formula on $G$ is known for all of\n$\\mathcal{C}^{2}(G)=:\\mathcal{C}(G).$ In order to then understand the structure\nof Harish-Chandra transform more clearly and to compute the image of\n$\\mathcal{C}^{p}(G)$ under it (without any restriction) we derive an absolutely\nconvergent series expansion (in terms of known functions) for the\nHarish-Chandra transform by an application of the full Plancherel formula on\n$G.$ This leads to a computation of the image of $\\mathcal{C}(G)$ under the\nHarish-Chandra transform which may be seen as a concrete realization of\nArthur's result and be easily extended to all of $\\mathcal{C}^{p}(G)$ in much\nthe same way as it is known in the work of Trombi and Varadarajan.\n", "title": "Fourier Transform of Schwartz Algebras on Groups in the Harish-Chandra class" }
null
null
null
null
true
null
20242
null
Default
null
null
null
{ "abstract": " We consider the estimation of a signal from the knowledge of its noisy linear\nrandom Gaussian projections. A few examples where this problem is relevant are\ncompressed sensing, sparse superposition codes, and code division multiple\naccess. There has been a number of works considering the mutual information for\nthis problem using the replica method from statistical physics. Here we put\nthese considerations on a firm rigorous basis. First, we show, using a\nGuerra-Toninelli type interpolation, that the replica formula yields an upper\nbound to the exact mutual information. Secondly, for many relevant practical\ncases, we present a converse lower bound via a method that uses spatial\ncoupling, state evolution analysis and the I-MMSE theorem. This yields a single\nletter formula for the mutual information and the minimal-mean-square error for\nrandom Gaussian linear estimation of all discrete bounded signals. In addition,\nwe prove that the low complexity approximate message-passing algorithm is\noptimal outside of the so-called hard phase, in the sense that it\nasymptotically reaches the minimal-mean-square error. In this work spatial\ncoupling is used primarily as a proof technique. However our results also prove\ntwo important features of spatially coupled noisy linear random Gaussian\nestimation. First there is no algorithmically hard phase. This means that for\nsuch systems approximate message-passing always reaches the minimal-mean-square\nerror. Secondly, in a proper limit the mutual information associated to such\nsystems is the same as the one of uncoupled linear random Gaussian estimation.\n", "title": "Mutual Information and Optimality of Approximate Message-Passing in Random Linear Estimation" }
null
null
null
null
true
null
20243
null
Default
null
null
null
{ "abstract": " Memristors have recently received significant attention as ubiquitous\ndevice-level components for building a novel generation of computing systems.\nThese devices have many promising features, such as non-volatility, low power\nconsumption, high density, and excellent scalability. The ability to control\nand modify biasing voltages at the two terminals of memristors make them\npromising candidates to perform matrix-vector multiplications and solve systems\nof linear equations. In this article, we discuss how networks of memristors\narranged in crossbar arrays can be used for efficiently solving optimization\nand machine learning problems. We introduce a new memristor-based optimization\nframework that combines the computational merit of memristor crossbars with the\nadvantages of an operator splitting method, alternating direction method of\nmultipliers (ADMM). Here, ADMM helps in splitting a complex optimization\nproblem into subproblems that involve the solution of systems of linear\nequations. The capability of this framework is shown by applying it to linear\nprogramming, quadratic programming, and sparse optimization. In addition to\nADMM, implementation of a customized power iteration (PI) method for\neigenvalue/eigenvector computation using memristor crossbars is discussed. The\nmemristor-based PI method can further be applied to principal component\nanalysis (PCA). The use of memristor crossbars yields a significant speed-up in\ncomputation, and thus, we believe, has the potential to advance optimization\nand machine learning research in artificial intelligence (AI).\n", "title": "A Memristor-Based Optimization Framework for AI Applications" }
null
null
null
null
true
null
20244
null
Default
null
null
null
{ "abstract": " We introduce a directed, weighted random graph model, where the edge-weights\nare independent and beta-distributed with parameters depending on their\nendpoints. We will show that the row- and column-sums of the transformed\nedge-weight matrix are sufficient statistics for the parameters, and use the\ntheory of exponential families to prove that the ML estimate of the parameters\nexists and is unique. Then an algorithm to find this estimate is introduced\ntogether with convergence proof that uses properties of the digamma function.\nSimulation results and applications are also presented.\n", "title": "Estimating parameters of a directed weighted graph model with beta-distributed edge-weights" }
null
null
null
null
true
null
20245
null
Default
null
null
null
{ "abstract": " In this letter, we investigate the performance of multiple-input\nmultiple-output techniques in a vehicle-to-vehicle communication system. We\nconsider both transmit antenna selection with maximal-ratio combining and\ntransmit antenna selection with selection combining. The channel propagation\nmodel between two vehicles is represented as n*Rayleigh distribution, which has\nbeen shown to be a realistic model for vehicle-to-vehicle communication\nscenarios. We derive tight analytical expressions for the outage probability\nand amount of fading of the post-processing signal-to-noise ratio.\n", "title": "On the Performance of Reduced-Complexity Transmit/Receive Diversity Systems over MIMO-V2V Channel Model" }
null
null
null
null
true
null
20246
null
Default
null
null
null
{ "abstract": " Migration is the main process shaping patterns of human settlement within and
between countries. It is widely acknowledged to be integral to the process of
human development as it plays a significant role in enhancing educational
outcomes. At regional and national levels, internal migration underpins the
efficient functioning of the economy by bringing knowledge and skills to the
locations where they are needed. It is the multi-dimensional nature of
migration that underlines its significance in the process of human development.
Human mobility extends in the spatial domain from local travel to international
migration, and in the temporal dimension from short-term stays to permanent
relocations. Classification and measurement of such phenomena is inevitably
complex, which has severely hindered progress in comparative research, with
very few large-scale cross-national comparisons of migration. The linkages
between migration and education have been explored in a separate line of
inquiry that has predominantly focused on country-specific analyses as to the
ways in which migration affects educational outcomes and how educational
attainment affects migration behaviour. A recurrent theme has been the
educational selectivity of migrants, which in turn leads to an increase of
human capital in some regions, primarily cities, at the expense of others.
Questions have long been raised as to the links between education and migration
in response to educational expansion, but have not yet been fully answered
because of the absence, until recently, of adequate data for comparative
analysis of migration. In this paper, we bring these two separate strands of
research together to systematically explore links between internal migration
and education across a global sample of 57 countries at various stages of
development, using data drawn from the IPUMS database.\n", "title": "Internal migration and education: A cross-national comparison" }
null
null
null
null
true
null
20247
null
Default
null
null
null
{ "abstract": " We establish the Gröbner-Shirshov bases theory for differential Lie\n$\\Omega$-algebras. As an application, we give a linear basis of a free\ndifferential Lie Rota-Baxter algebra on a set.\n", "title": "Free differential Lie Rota-Baxter algebras and Gröbner-Shirshov bases" }
null
null
[ "Mathematics" ]
null
true
null
20248
null
Validated
null
null
null
{ "abstract": " Conversion optimization means designing a web interface so that as many users\nas possible take a desired action on it, such as register or purchase. Such\ndesign is usually done by hand, testing one change at a time through A/B\ntesting, or a limited number of combinations through multivariate testing,\nmaking it possible to evaluate only a small fraction of designs in a vast\ndesign space. This paper describes Sentient Ascend, an automatic conversion\noptimization system that uses evolutionary optimization to create effective web\ninterface designs. Ascend makes it possible to discover and utilize\ninteractions between the design elements that are difficult to identify\notherwise. Moreover, evaluation of design candidates is done in parallel\nonline, i.e. with a large number of real users interacting with the system. A\ncase study on an existing media site shows that significant improvements (i.e.\nover 43%) are possible beyond human design. Ascend can therefore be seen as an\napproach to massively multivariate conversion optimization, based on a\nmassively parallel interactive evolution.\n", "title": "Conversion Rate Optimization through Evolutionary Computation" }
null
null
null
null
true
null
20249
null
Default
null
null
null
{ "abstract": " This paper investigates the phenomenon of emergence of spatial curvature.\nThis phenomenon is absent in the Standard Cosmological Model, which has a flat\nand fixed spatial curvature (small perturbations are considered in the Standard\nCosmological Model but their global average vanishes, leading to spatial\nflatness at all times). This paper shows that with the nonlinear growth of\ncosmic structures the global average deviates from zero. The analysis is based\non the {\\em silent universes} (a wide class of inhomogeneous cosmological\nsolutions of the Einstein equations). The initial conditions are set in the\nearly universe as perturbations around the $\\Lambda$CDM model with $\\Omega_m =\n0.31$, $\\Omega_\\Lambda = 0.69$, and $H_0 = 67.8$ km s$^{-1}$ Mpc$^{-1}$. As the\ngrowth of structures becomes nonlinear, the model deviates from the\n$\\Lambda$CDM model, and at the present instant if averaged over a domain ${\\cal\nD}$ with volume $V = (2150\\,{\\rm Mpc})^3$ (at these scales the cosmic variance\nis negligibly small) gives: $\\Omega_m^{\\cal D} = 0.22$, $\\Omega_\\Lambda^{\\cal\nD} = 0.61$, $\\Omega_{\\cal R}^{\\cal D} = 0.15$ (in the FLRW limit $\\Omega_{\\cal\nR}^{\\cal D} \\to \\Omega_k$), and $\\langle H \\rangle_{\\cal D} = 72.2$ km s$^{-1}$\nMpc$^{-1}$. Given the fact that low-redshift observations favor higher values\nof the Hubble constant and lower values of matter density, compared to the CMB\nconstraints, the emergence of the spatial curvature in the low-redshift\nuniverse could be a possible solution to these discrepancies.\n", "title": "Emergence of spatial curvature" }
null
null
[ "Physics" ]
null
true
null
20250
null
Validated
null
null
null
{ "abstract": " In many biological, agricultural, military activity problems and in some\nquality control problems, it is almost impossible to have a fixed sample size,\nbecause some observations are always lost for various reasons. Therefore, the\nsample size itself is considered frequently to be a random variable (rv). The\nclass of limit distribution functions (df's) of the random bivariate extreme\ngeneralized order statistics (GOS) from independent and identically distributed\nRV's are fully characterized. When the random sample size is assumed to be\nindependent of the basic variables and its df is assumed to converge weakly to\na non-degenerate limit, the necessary and sufficient conditions for the weak\nconvergence of the random bivariate extreme GOS are obtained. Furthermore, when\nthe interrelation of the random size and the basic rv's is not restricted,\nsufficient conditions for the convergence and the forms of the limit df's are\ndeduced. Illustrative examples are given which lend further support to our\ntheoretical results.\n", "title": "Asymptotic generalized bivariate extreme with random index" }
null
null
null
null
true
null
20251
null
Default
null
null
null
{ "abstract": " We consider minimization problems in the calculus of variations set in a\nsequence of domains the size of which tends to infinity in certain directions\nand such that the data only depend on the coordinates in the directions that\nremain constant. We study the asymptotic behavior of minimizers in various\nsituations and show that they converge in an appropriate sense toward\nminimizers of a related energy functional in the constant directions.\n", "title": "On problems in the calculus of variations in increasingly elongated domains" }
null
null
null
null
true
null
20252
null
Default
null
null
null
{ "abstract": " In this paper, we extend the Hermite-Hadamard type $\\dot{I}$scan inequality
to the class of symmetrized harmonic convex functions. The corresponding
version for harmonic h-convex functions is also investigated. Furthermore, we
establish Hermite-Hadamard type inequalities for the product of a harmonic
convex function with a symmetrized harmonic convex function.\n", "title": "Inequalities related to Symmetrized Harmonic Convex Functions" }
null
null
null
null
true
null
20253
null
Default
null
null
null
{ "abstract": " This paper provides several statistical estimators for the drift and\nvolatility parameters of an Ornstein-Uhlenbeck process driven by fractional\nBrownian motion, whose observations can be made either continuously or at\ndiscrete time instants. First and higher order power variations are used to\nestimate the volatility parameter. The almost sure convergence of the\nestimators and the corresponding central limit theorems are obtained for all\nthe Hurst parameter range $H\\in (0, 1)$. The least squares estimator is used\nfor the drift parameter. A central limit theorem is proved when the Hurst\nparameter $H \\in (0, 1/2)$ and a noncentral limit theorem is proved for\n$H\\in[3/4, 1)$. Thus, the open problem left in the paper by Hu and Nualart\n(2010) is completely solved, where a central limit theorem for least squares\nestimator is proved for $H\\in [1/2, 3/4)$.\n", "title": "Parameter estimation for fractional Ornstein-Uhlenbeck processes of general Hurst parameter" }
null
null
null
null
true
null
20254
null
Default
null
null
null
{ "abstract": " In this paper, we compute the E-polynomials of the\n$PGL(2,\\mathbb{C})$-character varieties associated to surfaces of genus $g$\nwith one puncture, for any holonomy around it, and compare it with its\nLanglands dual case, $SL(2,\\mathbb{C})$. The study is based on the\nstratification of the space of representations and on the analysis of the\nbehaviour of the E-polynomial under fibrations.\n", "title": "E-polynomials of $PGL(2,\\mathbb{C})$-character varieties of surface groups" }
null
null
null
null
true
null
20255
null
Default
null
null
null
{ "abstract": " Modern computer architectures share physical resources between different\nprograms in order to increase area-, energy-, and cost-efficiency.\nUnfortunately, sharing often gives rise to side channels that can be exploited\nfor extracting or transmitting sensitive information. We currently lack\ntechniques for systematic reasoning about this interplay between security and\nefficiency. In particular, there is no established way for quantifying security\nproperties of shared caches.\nIn this paper, we propose a novel model that enables us to characterize\nimportant security properties of caches. Our model encompasses two aspects: (1)\nThe amount of information that can be absorbed by a cache, and (2) the amount\nof information that can effectively be extracted from the cache by an\nadversary. We use our model to compute both quantities for common cache\nreplacement policies (FIFO, LRU, and PLRU) and to compare their isolation\nproperties. We further show how our model for information extraction leads to\nan algorithm that can be used to improve the bounds delivered by the CacheAudit\nstatic analyzer.\n", "title": "Security Analysis of Cache Replacement Policies" }
null
null
null
null
true
null
20256
null
Default
null
null
null
{ "abstract": " We show various supercongruences for truncated series which involve central\nbinomial coefficients and harmonic numbers. The corresponding infinite series\nare also evaluated.\n", "title": "Supercongruences related to ${}_3F_2(1)$ involving harmonic numbers" }
null
null
[ "Mathematics" ]
null
true
null
20257
null
Validated
null
null
null
{ "abstract": " Layer normalization is a recently introduced technique for normalizing the\nactivities of neurons in deep neural networks to improve the training speed and\nstability. In this paper, we introduce a new layer normalization technique\ncalled Dynamic Layer Normalization (DLN) for adaptive neural acoustic modeling\nin speech recognition. By dynamically generating the scaling and shifting\nparameters in layer normalization, DLN adapts neural acoustic models to the\nacoustic variability arising from various factors such as speakers, channel\nnoises, and environments. Unlike other adaptive acoustic models, our proposed\napproach does not require additional adaptation data or speaker information\nsuch as i-vectors. Moreover, the model size is fixed as it dynamically\ngenerates adaptation parameters. We apply our proposed DLN to deep\nbidirectional LSTM acoustic models and evaluate them on two benchmark datasets\nfor large vocabulary ASR experiments: WSJ and TED-LIUM release 2. The\nexperimental results show that our DLN improves neural acoustic models in terms\nof transcription accuracy by dynamically adapting to various speakers and\nenvironments.\n", "title": "Dynamic Layer Normalization for Adaptive Neural Acoustic Modeling in Speech Recognition" }
null
null
[ "Computer Science" ]
null
true
null
20258
null
Validated
null
null
null
{ "abstract": " Translating formulas of Linear Temporal Logic (LTL) over finite traces, or\nLTLf, to symbolic Deterministic Finite Automata (DFA) plays an important role\nnot only in LTLf synthesis, but also in synthesis for Safety LTL formulas. The\ntranslation is enabled by using MONA, a powerful tool for symbolic, BDD-based,\nDFA construction from logic specifications. Recent works used a first-order\nencoding of LTLf formulas to translate LTLf to First Order Logic (FOL), which\nis then fed to MONA to get the symbolic DFA. This encoding was shown to perform\nwell, but other encodings have not been studied. Specifically, the natural\nquestion of whether second-order encoding, which has significantly simpler\nquantificational structure, can outperform first-order encoding remained open.\nIn this paper we address this challenge and study second-order encodings for\nLTLf formulas. We first introduce a specific MSO encoding that captures the\nsemantics of LTLf in a natural way and prove its correctness. We then explore\na Compact MSO encoding, which benefits from automata-theoretic minimization,\nthus suggesting a possible practical advantage. To that end, we propose a\nformalization of symbolic DFA in second-order logic, thus developing a novel\nconnection between BDDs and MSO. We then show by empirical evaluations that the\nfirst-order encoding does perform better than both second-order encodings. The\nconclusion is that first-order encoding is a better choice than second-order\nencoding in LTLf-to-Automata translation.\n", "title": "First-Order vs. Second-Order Encodings for LTLf-to-Automata Translation" }
null
null
null
null
true
null
20259
null
Default
null
null
null
{ "abstract": " This short note describes the benefit one obtains from a specific\nconstruction of a family of parametrices for a class of elliptic boundary value\nproblems perturbed by non-linear terms of product type. The construction is\nbased on the Boutet de Monvel calculus of pseudo-differential boundary\noperators for the linear elliptic parts, and on paradifferential operators for\nthe product terms.\n", "title": "Regularity results and parametrices of semi-linear boundary problems of product type" }
null
null
null
null
true
null
20260
null
Default
null
null
null
{ "abstract": " In this paper, we introduce a new concept of stability for cross-validation,\ncalled the $\left( \beta, \varpi \right)$-stability, and use it as a new\nperspective to build the general theory for cross-validation. The $\left(\n\beta, \varpi \right)$-stability mathematically connects the generalization\nability and the stability of the cross-validated model via the Rademacher\ncomplexity. Our result reveals mathematically the effect of cross-validation\nfrom two sides: on one hand, cross-validation picks the model with the best\nempirical generalization ability by validating all the alternatives on test\nsets; on the other hand, cross-validation may compromise the stability of the\nmodel selection by causing subsampling error. Moreover, the difference between\ntraining and test errors in the q\textsuperscript{th} round, sometimes referred to\nas the generalization error, might be autocorrelated on q. Guided by the ideas\nabove, the $\left( \beta, \varpi \right)$-stability helps us derive a new class\nof Rademacher bounds, referred to as the one-round/convoluted Rademacher\nbounds, for the stability of cross-validation in both the i.i.d.\ and\nnon-i.i.d.\ cases. For both light-tail and heavy-tail losses, the new bounds\nquantify the stability of the one-round/average test error of the\ncross-validated model in terms of its one-round/average training error, the\nsample sizes $n$, number of folds $K$, the tail property of the loss (encoded\nas Orlicz-$\Psi_\nu$ norms) and the Rademacher complexity of the model class\n$\Lambda$. The new class of bounds not only quantitatively reveals the\nstability of the generalization ability of the cross-validated model, it also\nshows empirically the optimal choice for the number of folds $K$, at which the\nupper bound of the one-round/average test error is lowest, or, to put it in\nanother way, where the test error is most stable.\n", "title": "$\left( β, \varpi \right)$-stability for cross-validation and the choice of the number of folds" }
null
null
null
null
true
null
20261
null
Default
null
null
null
{ "abstract": " The convolution properties are discussed for the complex-valued harmonic\nfunctions in the unit disk $\mathbb{D}$ constructed from the harmonic shearing\nof the analytic function $\phi(z):=\int_0^z\n(1/(1-2\xi\textit{e}^{\textit{i}\mu}\cos\nu+\xi^2\textit{e}^{2\textit{i}\mu}))\textit{d}\xi$,\nwhere $\mu$ and $\nu$ are real numbers. For any real number $\alpha$ and\nharmonic function $f=h+\overline{g}$, define an analytic function\n$f_{\alpha}:=h+\textit{e}^{-2\textit{i}\alpha}g$. Let $\mu_1$ and $\mu_2$\n$(\mu_1+\mu_2=\mu)$ be real numbers, and $f=h+\overline{g}$ and\n$F=H+\overline{G}$ be locally-univalent and sense-preserving harmonic functions\nsuch that $f_{\mu_1}*F_{\mu_2}=\phi$. It is shown that the convolution $f*F$ is\nunivalent and convex in the direction of $-\mu$, provided it is locally\nunivalent and sense-preserving. Also, local-univalence of the above convolution\n$f*F$ is shown for some specific analytic dilatations of $f$ and $F$.\nFurthermore, if $g\equiv0$ and both the analytic functions $f_{\mu_1}$ and\n$F_{\mu_2}$ are convex, then the convolution $f*F$ is shown to be convex. These\nresults extend the work done by Dorff \textit{et al.} to a larger class of\nfunctions.\n", "title": "Directional convexity of harmonic mappings" }
null
null
null
null
true
null
20262
null
Default
null
null
null
{ "abstract": " Network densification and heterogenisation through the deployment of small\ncellular access points (picocells and femtocells) are seen as key mechanisms in\nhandling the exponential increase in cellular data traffic. Modelling such\nnetworks by leveraging tools from Stochastic Geometry has proven particularly\nuseful in understanding the fundamental limits imposed on network coverage and\ncapacity by co-channel interference. Most of these works however assume\ninfinite sized and uniformly distributed networks on the Euclidean plane. In\ncontrast, we study finite sized non-uniformly distributed networks, and find\nthe optimal non-uniform distribution of access points which maximises network\ncoverage for a given non-uniform distribution of mobile users, and vice versa.\n", "title": "Optimal Non-uniform Deployments in Ultra-Dense Finite-Area Cellular Networks" }
null
null
null
null
true
null
20263
null
Default
null
null
null
{ "abstract": " We investigate the integration of a planning mechanism into an\nencoder-decoder architecture with an explicit alignment for character-level\nmachine translation. We develop a model that plans ahead when it computes\nalignments between the source and target sequences, constructing a matrix of\nproposed future alignments and a commitment vector that governs whether to\nfollow or recompute the plan. This mechanism is inspired by the strategic\nattentive reader and writer (STRAW) model. Our proposed model is end-to-end\ntrainable with fully differentiable operations. We show that it outperforms a\nstrong baseline on three character-level decoder neural machine translation\ntasks on the WMT'15 corpus. Our analysis demonstrates that our model can compute\nqualitatively intuitive alignments and achieves superior performance with fewer\nparameters.\n", "title": "Plan, Attend, Generate: Character-level Neural Machine Translation with Planning in the Decoder" }
null
null
null
null
true
null
20264
null
Default
null
null
null
{ "abstract": " In this paper, we propose a novel recurrent neural network architecture for\nspeech separation. This architecture is constructed by unfolding the iterations\nof a sequential iterative soft-thresholding algorithm (ISTA) that solves the\noptimization problem for sparse nonnegative matrix factorization (NMF) of\nspectrograms. We name this network architecture deep recurrent NMF (DR-NMF).\nThe proposed DR-NMF network has three distinct advantages. First, DR-NMF\nprovides better interpretability than other deep architectures, since the\nweights correspond to NMF model parameters, even after training. This\ninterpretability also provides principled initializations that enable faster\ntraining and convergence to better solutions compared to conventional random\ninitialization. Second, like many deep networks, DR-NMF is an order of\nmagnitude faster at test time than NMF, since computation of the network output\nonly requires evaluating a few layers at each time step. Third, when a limited\namount of training data is available, DR-NMF exhibits stronger generalization\nand separation performance compared to sparse NMF and state-of-the-art\nlong-short term memory (LSTM) networks. When a large amount of training data is\navailable, DR-NMF achieves lower yet competitive separation performance\ncompared to LSTM networks.\n", "title": "Deep Recurrent NMF for Speech Separation by Unfolding Iterative Thresholding" }
null
null
null
null
true
null
20265
null
Default
null
null
null
{ "abstract": " Our starting point is a 3D piecewise smooth vector field defined in two zones\nand presenting a shared fold curve for the two smooth vector fields considered.\nMoreover, these smooth vector fields are symmetric relative to the fold curve,\ngiving rise to a continuum of nested topological cylinders such that each\northogonal section of these cylinders is filled by centers. First we prove that\nthe normal form considered represents a whole class of piecewise smooth vector\nfields. Afterwards, we perturb the initial model in order to obtain exactly\n$\mathcal{L}$ invariant planes containing centers. A second perturbation of the\ninitial model is also considered in order to obtain exactly $k$ isolated\ncylinders filled by periodic orbits. Finally, joining the two previous\nbifurcations we are able to exhibit a model, preserving the symmetry relative\nto the fold curve, and having exactly $k\cdot\mathcal{L}$ limit cycles.\n", "title": "Birth of isolated nested cylinders and limit cycles in 3D piecewise smooth vector fields with symmetry" }
null
null
[ "Mathematics" ]
null
true
null
20266
null
Validated
null
null
null
{ "abstract": " Why is it difficult to refold a previously folded sheet of paper? We show\nthat even crease patterns with only one designed folding motion inevitably\ncontain an exponential number of `distractor' folding branches accessible from\na bifurcation at the flat state. Consequently, refolding a sheet requires\nfinding the ground state in a glassy energy landscape with an exponential\nnumber of other attractors of higher energy, much like in models of protein\nfolding (Levinthal's paradox) and other NP-hard satisfiability (SAT) problems.\nAs in these problems, we find that refolding a sheet requires actuation at\nmultiple carefully chosen creases. We show that seeding successful folding in\nthis way can be understood in terms of sub-patterns that fold when cut out\n(`folding islands'). Besides providing guidelines for the placement of active\nhinges in origami applications, our results point to fundamental limits on the\nprogrammability of energy landscapes in sheets.\n", "title": "The difficulty of folding self-folding origami" }
null
null
null
null
true
null
20267
null
Default
null
null
null
{ "abstract": " We discuss a general procedure to construct an integrable real-time\ntrotterization of interacting lattice models. As an illustrative example we\nconsider a spin-$1/2$ chain, with continuous time dynamics described by the\nisotropic ($XXX$) Heisenberg Hamiltonian. For periodic boundary conditions\nlocal conservation laws are derived from an inhomogeneous transfer matrix and a\nboost operator is constructed. In the continuous time limit these local charges\nreduce to the known integrals of motion of the Heisenberg chain. In a simple\nKraus representation we also examine the nonequilibrium setting, where our\nintegrable cellular automaton is driven by stochastic processes at the\nboundaries. We show explicitly, how an exact nonequilibrium steady state\ndensity matrix can be written in terms of a staggered matrix product ansatz.\nThis simple trotterization scheme, in particular in the open system framework,\ncould prove to be a useful tool for experimental simulations of the lattice\nmodels in terms of trapped ion and atom optics setups.\n", "title": "Integrable Trotterization: Local Conservation Laws and Boundary Driving" }
null
null
[ "Physics" ]
null
true
null
20268
null
Validated
null
null
null
{ "abstract": " The Hawkes process is a class of point processes whose future depends on its\nown history. Previous theoretical work on the Hawkes process is limited to the\ncase of a mutually-exciting process, in which a past event can only increase\nthe occurrence of future events. However, in neuronal networks and other\nreal-world applications, inhibitory relationships may be present. In this\npaper, we develop a new approach for establishing the properties of the Hawkes\nprocess without the restriction to mutual excitation. To this end, we employ a\nthinning process representation and a coupling construction to bound the\ndependence coefficient of the Hawkes process. Using recent developments on\nweakly dependent sequences, we establish a concentration inequality for\nsecond-order statistics of the Hawkes process. We apply this concentration\ninequality in order to establish theoretical results for penalized regression\nand clustering analysis in the high-dimensional regime. Our theoretical results\nare corroborated by simulation studies and an application to a neuronal spike\ntrain data set.\n", "title": "The Multivariate Hawkes Process in High Dimensions: Beyond Mutual Excitation" }
null
null
null
null
true
null
20269
null
Default
null
null
null
{ "abstract": " Rapid deployment and operation are key requirements in time-critical\napplications, such as Search and Rescue (SaR). Efficiently teleoperated ground\nrobots can support first-responders in such situations. However, first-person\nview teleoperation is sub-optimal in difficult terrains, while a third-person\nperspective can drastically increase teleoperation performance. Here, we\npropose a Micro Aerial Vehicle (MAV)-based system that can autonomously provide\nthird-person perspective to ground robots. While our approach is based on local\nvisual servoing, it further leverages the global localization of several ground\nrobots to seamlessly transfer between these ground robots in GPS-denied\nenvironments. Thereby, one MAV can support multiple ground robots on a demand\nbasis. Furthermore, our system enables different visual detection regimes,\nenhanced operability, and return-home functionality. We evaluate our system in\nreal-world SaR scenarios.\n", "title": "Aerial-Ground collaborative sensing: Third-Person view for teleoperation" }
null
null
null
null
true
null
20270
null
Default
null
null
null
{ "abstract": " We present an analytical treatment of a three-dimensional variational model\nof a system that exhibits a second-order phase transition in the presence of\ndipolar interactions. Within the framework of Ginzburg-Landau theory, we\nconcentrate on the case in which the domain occupied by the sample has the\nshape of a flat thin film and obtain a reduced two-dimensional, non-local\nvariational model that describes the energetics of the system in terms of the\norder parameter averages across the film thickness. Namely, we show that the\nreduced two-dimensional model is in a certain sense asymptotically equivalent\nto the original three-dimensional model for small film thicknesses. Using this\nasymptotic equivalence, we analyze two different thin film limits for the full\nthree-dimensional model via the methods of $\\Gamma$-convergence applied to the\nreduced two-dimensional model. In the first regime, in which the film thickness\nvanishes while all other parameters remain fixed, we recover the local\ntwo-dimensional Ginzburg-Landau model. On the other hand, when the film\nthickness vanishes while the sample's lateral dimensions diverge at the right\nrate, we show that the system exhibits a transition from homogeneous to\nspatially modulated global energy minimizers. We identify a sharp threshold for\nthis transition.\n", "title": "A universal thin film model for Ginzburg-Landau energy with dipolar interaction" }
null
null
null
null
true
null
20271
null
Default
null
null
null
{ "abstract": " This document describes our submission to the 2018 LOCalization And TrAcking\n(LOCATA) challenge (Tasks 1, 3, 5). We estimate the 3D position of a speaker\nusing the Global Coherence Field (GCF) computed from multiple microphone pairs\nof a DICIT planar array. One of the main challenges when using such an array\nwith omnidirectional microphones is the front-back ambiguity, which is\nparticularly evident in Task 5. We address this challenge by post-processing\nthe peaks of the GCF and exploiting the attenuation introduced by the frame of\nthe array. Moreover, the intermittent nature of speech and the changing\norientation of the speaker make localization difficult. For Tasks 3 and 5, we\nalso employ a Particle Filter (PF) that favors the spatio-temporal continuity\nof the localization results.\n", "title": "LOCATA challenge: speaker localization with a planar array" }
null
null
null
null
true
null
20272
null
Default
null
null
null
{ "abstract": " The number of component classifiers chosen for an ensemble greatly impacts\nthe prediction ability. In this paper, we use a geometric framework for a\npriori determining the ensemble size, which is applicable to most of existing\nbatch and online ensemble classifiers. There are only a limited number of\nstudies on the ensemble size examining Majority Voting (MV) and Weighted\nMajority Voting (WMV). Almost all of them are designed for batch-mode, hardly\naddressing online environments. Big data dimensions and resource limitations,\nin terms of time and memory, make determination of ensemble size crucial,\nespecially for online environments. For the MV aggregation rule, our framework\nproves that the more strong components we add to the ensemble, the more\naccurate predictions we can achieve. For the WMV aggregation rule, our\nframework proves the existence of an ideal number of components, which is equal\nto the number of class labels, with the premise that components are completely\nindependent of each other and strong enough. While giving the exact definition\nfor a strong and independent classifier in the context of an ensemble is a\nchallenging task, our proposed geometric framework provides a theoretical\nexplanation of diversity and its impact on the accuracy of predictions. We\nconduct a series of experimental evaluations to show the practical value of our\ntheorems and existing challenges.\n", "title": "Less Is More: A Comprehensive Framework for the Number of Components of Ensemble Classifiers" }
null
null
null
null
true
null
20273
null
Default
null
null
null
{ "abstract": " Low-rank modeling has many important applications in computer vision and\nmachine learning. While the matrix rank is often approximated by the convex\nnuclear norm, the use of nonconvex low-rank regularizers has demonstrated\nbetter empirical performance. However, the resulting optimization problem is\nmuch more challenging. Recent state-of-the-art methods require an expensive full SVD\nin each iteration. In this paper, we show that for many commonly-used nonconvex\nlow-rank regularizers, a cutoff can be derived to automatically threshold the\nsingular values obtained from the proximal operator. This allows such an operator\nto be efficiently approximated by the power method. Based on it, we develop a\nproximal gradient algorithm (and its accelerated variant) with inexact proximal\nsplitting and prove that a convergence rate of O(1/T), where T is the number of\niterations, is guaranteed. Furthermore, we show the proposed algorithm can be\nwell parallelized, achieving nearly linear speedup w.r.t. the number of\nthreads. Extensive experiments are performed on matrix completion and robust\nprincipal component analysis, which show a significant speedup over the\nstate-of-the-art. Moreover, the matrix solution obtained is more accurate and\nhas a lower rank than that of the nuclear norm regularizer.\n", "title": "Large-Scale Low-Rank Matrix Learning with Nonconvex Regularizers" }
null
null
null
null
true
null
20274
null
Default
null
null
null
{ "abstract": " Inspired by the importance of diversity in biological systems, we built a\nheterogeneous system to achieve this goal. Our architecture can be\nsummarized in two basic steps. First, we generate a diverse set of\nclassification hypotheses using Convolutional Neural Networks, currently\nthe state-of-the-art technique for this task, along with other traditional and\ninnovative machine learning techniques. Then, we optimally combine them through\nMeta-Nets, a family of recently developed and well-performing ensemble methods.\n", "title": "Computational Eco-Systems for Handwritten Digits Recognition" }
null
null
null
null
true
null
20275
null
Default
null
null
null
{ "abstract": " Community detection is a commonly used technique for identifying groups in a\nnetwork based on similarities in connectivity patterns. To facilitate community\ndetection in large networks, we recast the network to be partitioned into a\nsmaller network of 'super nodes', each super node comprising one or more nodes\nin the original network. To define the seeds of our super nodes, we apply the\n'CoreHD' ranking from dismantling and decycling. We test our approach through\nthe analysis of two common methods for community detection: modularity\nmaximization with the Louvain algorithm and maximum likelihood optimization for\nfitting a stochastic block model. Our results highlight that applying community\ndetection to the compressed network of super nodes is significantly faster\nwhile successfully producing partitions that are more aligned with the local\nnetwork connectivity, more stable across multiple (stochastic) runs within and\nbetween community detection algorithms, and overlap well with the results\nobtained using the full network.\n", "title": "Compressing networks with super nodes" }
null
null
null
null
true
null
20276
null
Default
null
null
null
{ "abstract": " Electronic friction and the ensuing nonadiabatic energy loss play an\nimportant role in chemical reaction dynamics at metal surfaces. Using molecular\ndynamics with electronic friction evaluated on-the-fly from Density Functional\nTheory, we find strong mode dependence and a dominance of nonadiabatic energy\nloss along the bond stretch coordinate for scattering and dissociative\nchemisorption of H$_2$ on the Ag(111) surface. Exemplary trajectories with\nvarying initial conditions indicate that this mode-specificity translates into\nmodulated energy loss during a dissociative chemisorption event. Despite minor\nnonadiabatic energy loss of about 5\\%, the directionality of friction forces\ninduces dynamical steering that affects individual reaction outcomes,\nspecifically for low-incidence energies and vibrationally excited molecules.\nMode-specific friction induces enhanced loss of rovibrational rather than\ntranslational energy and will be most visible in its effect on final energy\ndistributions in molecular scattering experiments.\n", "title": "Mode specific electronic friction in dissociative chemisorption on metal surfaces: H$_2$ on Ag(111)" }
null
null
null
null
true
null
20277
null
Default
null
null
null
{ "abstract": " We explore the notion of the quantum auxiliary linear problem and the\nassociated problem of quantum Backlund transformations (BT). In this context we\nsystematically construct the analogue of the classical formula that provides\nthe whole hierarchy of the time components of Lax pairs at the quantum level\nfor both closed and open integrable lattice models. The generic time evolution\noperator formula is particularly interesting and novel at the quantum level\nwhen dealing with systems with open boundary conditions. In the same frame we\nshow that the reflection K-matrix can also be viewed as a particular type of\nBT, fixed at the boundaries of the system. The q-oscillator (q-boson) model, a\nvariant of the Ablowitz-Ladik model, is then employed as a paradigm to\nillustrate the method. Particular emphasis is given to the time part of the\nquantum BT as possible connections and applications to the problem of quantum\nquenches as well as the time evolution of local quantum impurities are evident.\nA discussion on the use of Bethe states as well as coherent states for the\nstudy of the time evolution is also presented.\n", "title": "The quantum auxiliary linear problem & quantum Darboux-Backlund transformations" }
null
null
[ "Physics" ]
null
true
null
20278
null
Validated
null
null
null
{ "abstract": " Recurrent neural networks with various types of hidden units have been used\nto solve a diverse range of problems involving sequence data. Two of the most\nrecent proposals, gated recurrent units (GRU) and minimal gated units (MGU),\nhave shown comparably promising results on example public datasets. In this\npaper, we introduce three model variants of the minimal gated unit (MGU) which\nfurther simplify that design by reducing the number of parameters in the\nforget-gate dynamic equation. These three model variants, referred to simply as\nMGU1, MGU2, and MGU3, were tested on sequences generated from the MNIST dataset\nand from the Reuters Newswire Topics (RNT) dataset. The new models have shown\nsimilar accuracy to the MGU model while using fewer parameters and thus\nlowering training expense. One model variant, namely MGU2, performed better\nthan MGU on the datasets considered, and thus may be used as an alternative to\nMGU or GRU in recurrent neural networks.\n", "title": "Simplified Minimal Gated Unit Variations for Recurrent Neural Networks" }
null
null
null
null
true
null
20279
null
Default
null
null
null
{ "abstract": " Hydrogen bonding between nucleobases produces diverse DNA structural motifs,\nincluding canonical duplexes, guanine (G) quadruplexes and cytosine (C)\ni-motifs. Incorporating metal-mediated base pairs into nucleic acid structures\ncan introduce new functionalities and enhanced stabilities. Here we\ndemonstrate, using mass spectrometry (MS), ion mobility spectrometry (IMS) and\nfluorescence resonance energy transfer (FRET), that parallel-stranded\nstructures consisting of up to 20 G-Ag(I)-G contiguous base pairs are formed\nwhen natural DNA sequences are mixed with silver cations in aqueous solution.\nFRET indicates that duplexes formed by poly(cytosine) strands with 20\ncontiguous C-Ag(I)-C base pairs are also parallel. Silver-mediated G-duplexes\nform preferentially over G-quadruplexes, and the ability of Ag+ to convert\nG-quadruplexes into silver-paired duplexes may provide a new route to\nmanipulating these biologically relevant structures. IMS indicates that\nG-duplexes are linear and more rigid than B-DNA. DFT calculations were used to\npropose structures compatible with the IMS experiments. Such inexpensive,\ndefect-free and soluble DNA-based nanowires open new directions in the design\nof novel metal-mediated DNA nanotechnology.\n", "title": "Parallel G-duplex and C-duplex DNA with Uninterrupted Spines of AgI-Mediated Base Pairs" }
null
null
null
null
true
null
20280
null
Default
null
null
null
{ "abstract": " This paper derives an upper limit on the density $\\rho_{\\scriptstyle\\Lambda}$\nof dark energy based on the requirement that cosmological structure forms\nbefore being frozen out by the eventual acceleration of the universe. By\nallowing for variations in both the cosmological parameters and the strength of\ngravity, the resulting constraint is a generalization of previous limits. The\nspecific parameters under consideration include the amplitude $Q$ of the\nprimordial density fluctuations, the Planck mass $M_{\\rm pl}$, the\nbaryon-to-photon ratio $\\eta$, and the density ratio $\\Omega_M/\\Omega_b$. In\naddition to structure formation, we use considerations from stellar structure\nand Big Bang Nucleosynthesis (BBN) to constrain these quantities. The resulting\nupper limit on the dimensionless density of dark energy becomes\n$\\rho_{\\scriptstyle\\Lambda}/M_{\\rm pl}^4<10^{-90}$, which is $\\sim30$ orders of\nmagnitude larger than the value in our universe\n$\\rho_{\\scriptstyle\\Lambda}/M_{\\rm pl}^4\\sim10^{-120}$. This new limit is much\nless restrictive than previous constraints because additional parameters are\nallowed to vary. With these generalizations, a much wider range of universes\ncan develop cosmic structure and support observers. To constrain the\nconstituent parameters, new BBN calculations are carried out in the regime\nwhere $\\eta$ and $G=M_{\\rm pl}^{-2}$ are much larger than in our universe. If\nthe BBN epoch were to process all of the protons into heavier elements, no\nhydrogen would be left behind to make water, and the universe would not be\nviable. However, our results show that some hydrogen is always left over, even\nunder conditions of extremely large $\\eta$ and $G$, so that a wide range of\nalternate universes are potentially habitable.\n", "title": "Constraints on Vacuum Energy from Structure Formation and Nucleosynthesis" }
null
null
[ "Physics" ]
null
true
null
20281
null
Validated
null
null
null
{ "abstract": " Mobile edge caching enables content delivery directly within the radio access\nnetwork, which effectively alleviates the backhaul burden and reduces\nround-trip latency. To fully exploit the edge resources, the most popular\ncontents should be identified and cached. Observing that content popularity\nvaries greatly at different locations, to maximize local hit rate, this paper\nproposes an online learning algorithm that dynamically predicts content hit\nrate, and makes location-differentiated caching decisions. Specifically, a\nlinear model is used to estimate the future hit rate. Considering the\nvariations in user demand, a perturbation is added to the estimation to account\nfor uncertainty. The proposed learning algorithm requires no training phase,\nand hence is adaptive to the time-varying content popularity profile.\nTheoretical analysis indicates that the proposed algorithm asymptotically\napproaches the optimal policy in the long term. Extensive simulations based on\nreal world traces show that, the proposed algorithm achieves higher hit rate\nand better adaptiveness to content popularity fluctuation, compared with other\nschemes.\n", "title": "Dynamic Mobile Edge Caching with Location Differentiation" }
null
null
null
null
true
null
20282
null
Default
null
null
null
{ "abstract": " We develop an online learning method for prediction, which is important in\nproblems with large and/or streaming data sets. We formulate the learning\napproach using a covariance-fitting methodology, and show that the resulting\npredictor has desirable computational and distribution-free properties: It is\nimplemented online with a runtime that scales linearly in the number of\nsamples; has a constant memory requirement; avoids local minima problems; and\nprunes away redundant feature dimensions without relying on restrictive\nassumptions on the data distribution. In conjunction with the split conformal\napproach, it also produces distribution-free prediction confidence intervals in\na computationally efficient manner. The method is demonstrated on both real and\nsynthetic datasets.\n", "title": "Online Learning for Distribution-Free Prediction" }
null
null
null
null
true
null
20283
null
Default
null
null
null
{ "abstract": " We study the mass imbalanced Fermi-Fermi mixture within the framework of a
two-dimensional lattice fermion model. Based on the thermodynamic and species
dependent quasiparticle behavior we map out the finite temperature phase
diagram of this system and show that unlike the balanced Fermi superfluid there
are now two different pseudogap regimes, PG-I and PG-II. While within the
PG-I regime both fermionic species are pseudogapped, PG-II corresponds to
the regime where the pseudogap feature survives only in the light species. We
believe that the single particle spectral features that we discuss in this
paper are observable through species resolved radio frequency spectroscopy
and momentum resolved photoemission spectroscopy measurements on systems such
as the $^{6}$Li-$^{40}$K mixture. We further investigate the interplay between the
population and mass imbalances and report that at a fixed population imbalance
the BCS-BEC crossover in a Fermi-Fermi mixture would require a critical
interaction (U$_{c}$) for the realization of the uniform superfluid state. The
effect of imbalance in mass on the exotic Fulde-Ferrell-Larkin-Ovchinnikov
(FFLO) superfluid phase has been probed in detail in terms of the thermodynamic
and quasiparticle behavior of this phase. It has been observed that in spite of
the s-wave symmetry of the pairing field a nodal superfluid gap is realized in
the LO regime. Our results on the various thermal scales and regimes are
expected to serve as benchmarks for experimental observations on the
$^{6}$Li-$^{40}$K mixture.
", "title": "Thermal transitions, pseudogap behavior and BCS-BEC crossover in Fermi-Fermi mixtures" }
null
null
null
null
true
null
20284
null
Default
null
null
null
{ "abstract": " We establish concrete criteria for fully supported absolutely continuous\nspectrum for ergodic CMV matrices and purely absolutely continuous spectrum for\nlimit-periodic CMV matrices. We proceed by proving several variational\nestimates on the measure of the spectrum and the vanishing set of the Lyapunov\nexponent for CMV matrices, which represent CMV analogues of results obtained\nfor Schrödinger operators due to Y.\\ Last in the early 1990s. Having done so,\nwe combine those estimates with results from inverse spectral theory to obtain\npurely absolutely continuous spectrum.\n", "title": "Spectral Approximation for Ergodic CMV Operators with an Application to Quantum Walks" }
null
null
null
null
true
null
20285
null
Default
null
null
null
{ "abstract": " While in standard photoacoustic imaging the propagation of sound waves is
modeled by the standard wave equation, our approach is based on a generalized
wave equation with variable sound speed and material density. In this paper we
present an approach for photoacoustic imaging which, in addition to recovering
the absorption density parameter, the imaging parameter of standard
photoacoustics, also allows reconstruction of the spatially varying sound
speed and density of the medium. We provide analytical reconstruction formulas
for all three parameters based on a linearized model using single plane
illumination microscopy (SPIM) techniques.
", "title": "Quantitative Photoacoustic Imaging in the Acoustic Regime using SPIM" }
null
null
null
null
true
null
20286
null
Default
null
null
null
{ "abstract": " Deep convolutional neural network (CNN) based salient object detection
methods have achieved state-of-the-art performance and outperform unsupervised
methods by a wide margin. In this paper, we propose to integrate deep and
unsupervised saliency for salient object detection under a unified framework.
Specifically, our method takes results of unsupervised saliency (Robust
Background Detection, RBD) and normalized color images as inputs, and directly
learns an end-to-end mapping between the inputs and the corresponding saliency
maps. The color images are fed into a fully convolutional neural network
(FCNN) adapted from semantic segmentation to exploit high-level semantic cues
for salient object detection. The results from the deep FCNN and RBD are then
concatenated and fed into a shallow network that maps the concatenated feature
maps to saliency maps. Finally, to obtain a spatially consistent saliency map
with sharp object boundaries, we fuse superpixel-level saliency maps at
multiple scales. Extensive experimental results on 8 benchmark datasets
demonstrate that the proposed method outperforms the state-of-the-art
approaches by a clear margin.
", "title": "Integrated Deep and Shallow Networks for Salient Object Detection" }
null
null
null
null
true
null
20287
null
Default
null
null
null
{ "abstract": " In this paper, we investigate the fundamental limitations of feedback
mechanisms in dealing with uncertainties for network systems. The study of the
maximum capability of feedback control was pioneered in Xie and Guo (2000) for
scalar systems with nonparametric nonlinear uncertainty. In a network setting,
nodes with unknown and nonlinear dynamics are interconnected through a directed
interaction graph. Nodes can design feedback controls based on all available
information, where the objective is to stabilize the network state. Using
information structure and decision pattern as criteria, we specify three
categories of network feedback laws, namely the
global-knowledge/global-decision, network-flow/local-decision, and
local-flow/local-decision feedback. We establish a series of network capacity
characterizations for these three fundamental types of network control laws.
First of all, we prove that for global-knowledge/global-decision and
network-flow/local-decision control where nodes know the information flow
across the entire network, there exists a critical number
$\\big(3/2+\\sqrt{2}\\big)/\\|A_{\\mathrm{G}}\\|_\\infty$, where $3/2+\\sqrt{2}$ is
known as the Xie-Guo constant and $A_{\\mathrm{G}}$ is the network adjacency
matrix, defining exactly how much uncertainty in the node dynamics can be
overcome by feedback. Interestingly enough, the same feedback capacity can be
achieved under max-consensus enhanced local flows where nodes only observe
information flows from neighbors as well as extreme (max and min) states in the
network. Next, for local-flow/local-decision control, we prove that there
exists a structure-determined value that lower-bounds the network feedback
capacity. These results reveal the important connection between network
structure and the fundamental capabilities of in-network feedback control.
", "title": "Feedback Capacity over Networks" }
null
null
null
null
true
null
20288
null
Default
null
null
null
{ "abstract": " We identify a graphene layer on a disordered substrate as a possible system
where Anderson localization of phonons can be observed. Generally, observation
of localization for scattering waves is not simple, because the Rayleigh
scattering is inversely proportional to a high power of wavelength. The
situation is radically different for the out-of-plane vibrations, so-called
flexural phonons, scattered by pinning centers induced by a substrate. In this
case, the scattering time for vanishing wave vector tends to a finite limit.
One may, therefore, expect that the physics of flexural phonons exhibits
features characteristic of electron localization in two dimensions, albeit
without complications caused by the electron-electron interactions. We confirm
this idea by calculating statistical properties of the Anderson localization of
flexural phonons for a model of an elastic sheet in the presence of the pinning
centers. Finally, we discuss possible manifestations of the flexural phonons,
including the localized ones, in the electronic thermal conductance.
", "title": "Flexural phonons in supported graphene: from pinning to localization" }
null
null
[ "Physics" ]
null
true
null
20289
null
Validated
null
null
null
{ "abstract": " Many approaches for testing configurable software systems start from the same
assumption: it is impossible to test all configurations. This motivated the
definition of variability-aware abstractions and sampling techniques to cope
with large configuration spaces. Yet, there is no theoretical barrier that
prevents the exhaustive testing of all configurations by simply enumerating
them, if the effort required to do so remains acceptable. Not only this: we
believe there is much to be learned by systematically and exhaustively testing
a configurable system. In this case study, we report on the first ever
endeavour to test all possible configurations of an industry-strength, open
source configurable software system, JHipster, a popular code generator for web
applications. We built a testing scaffold for the 26,000+ configurations of
JHipster using a cluster of 80 machines during 4 nights for a total of 4,376
hours (182 days) CPU time. We find that 35.70% of configurations fail and we
identify the feature interactions that cause the errors. We show that sampling
strategies (like dissimilarity and 2-wise): (1) are more effective at finding
faults than the 12 default configurations used in the JHipster continuous
integration; (2) can be too costly and exceed the available testing budget. We
cross this quantitative analysis with the qualitative assessment of JHipster's
lead developers.
", "title": "Test them all, is it worth it? Assessing configuration sampling on the JHipster Web development stack" }
null
null
null
null
true
null
20290
null
Default
null
null
null
{ "abstract": " We propose a reinforcement learning (RL) based closed loop power control\nalgorithm for the downlink of the voice over LTE (VoLTE) radio bearer for an\nindoor environment served by small cells. The main contributions of our paper\nare to 1) use RL to solve performance tuning problems in an indoor cellular\nnetwork for voice bearers and 2) show that our derived lower bound loss in\neffective signal to interference plus noise ratio due to neighboring cell\nfailure is sufficient for VoLTE power control purposes in practical cellular\nnetworks. In our simulation, the proposed RL-based power control algorithm\nsignificantly improves both voice retainability and mean opinion score compared\nto current industry standards. The improvement is due to maintaining an\neffective downlink signal to interference plus noise ratio against adverse\nnetwork operational issues and faults.\n", "title": "Q-Learning Algorithm for VoLTE Closed-Loop Power Control in Indoor Small Cells" }
null
null
null
null
true
null
20291
null
Default
null
null
null
{ "abstract": " We consider the parabolic Allen-Cahn equation in $\\mathbb{R}^n$, $n\\ge 2$,
$$u_t= \\Delta u + (1-u^2)u \\quad \\hbox{ in } \\mathbb{R}^n \\times (-\\infty,
0].$$ We construct an ancient radially symmetric solution $u(x,t)$ with any
given number $k$ of transition layers between $-1$ and $+1$. At main order they
consist of $k$ time-traveling copies of $w$ with spherical interfaces distant
$O(\\log |t| )$ one to each other as $t\\to -\\infty$. At main order these
interfaces resemble copies of the {\\em shrinking sphere} ancient solution to
the mean curvature flow of surfaces: $|x| = \\sqrt{- 2(n-1)t}$. More
precisely, if $w(s)$ denotes the heteroclinic 1-dimensional solution of $w'' +
(1-w^2)w=0$, $w(\\pm \\infty)= \\pm 1$, given by $w(s) = \\tanh \\left(\\frac
s{\\sqrt{2}} \\right) $ we have $$ u(x,t) \\approx \\sum_{j=1}^k
(-1)^{j-1}w(|x|-\\rho_j(t)) - \\frac 12 (1+ (-1)^{k}) \\quad \\hbox{ as } t\\to
-\\infty $$ where
$$\\rho_j(t)=\\sqrt{-2(n-1)t}+\\frac{1}{\\sqrt{2}}\\left(j-\\frac{k+1}{2}\\right)\\log\\left(\\frac
{|t|}{\\log |t| }\\right)+ O(1),\\quad j=1,\\ldots ,k.$$
", "title": "Ancient shrinking spherical interfaces in the Allen-Cahn flow" }
null
null
[ "Mathematics" ]
null
true
null
20292
null
Validated
null
null
null
{ "abstract": " The long-tail phenomenon tells us that there are many items in the tail.
However, not all tail items are the same. Each item acquires different kinds of
users. Some items are loved by the general public, while some items are
consumed by eccentric fans. In this paper, we propose a novel metric, item
eccentricity, to incorporate this difference between consumers of the items.
Eccentric items are defined as items that are consumed by eccentric users. We
used this metric to analyze two real-world datasets of music and movies and
observed the characteristics of items in terms of eccentricity. The results
showed that our defined eccentricity of an item does not change much over time,
and that eccentric and noneccentric items exhibit significantly distinct
characteristics. The proposed metric effectively separates the eccentric and
noneccentric items mixed in the tail, something previous measures, which
consider only the popularity of items, could not do.
", "title": "Measuring the Eccentricity of Items" }
null
null
null
null
true
null
20293
null
Default
null
null
null
{ "abstract": " A real-world dataset is provided from a pulp-and-paper manufacturing\nindustry. The dataset comes from a multivariate time series process. The data\ncontains a rare event of paper break that commonly occurs in the industry. The\ndata contains sensor readings at regular time-intervals (x's) and the event\nlabel (y). The primary purpose of the data is thought to be building a\nclassification model for early prediction of the rare event. However, it can\nalso be used for multivariate time series data exploration and building other\nsupervised and unsupervised models.\n", "title": "Dataset: Rare Event Classification in Multivariate Time Series" }
null
null
null
null
true
null
20294
null
Default
null
null
null
{ "abstract": " An experimental setup for consecutive measurement of ion and x-ray absorption\nin tissue or other materials is introduced. With this setup using a 3D-printed\nsample container, the reference stopping-power ratio (SPR) of materials can be\nmeasured with an uncertainty of below 0.1%. A total of 65 porcine and bovine\ntissue samples were prepared for measurement, comprising five samples each of\n13 tissue types representing about 80% of the total body mass (three different\nmuscle and fatty tissues, liver, kidney, brain, heart, blood, lung and bone).\nUsing a standard stoichiometric calibration for single-energy CT (SECT) as well\nas a state-of-the-art dual-energy CT (DECT) approach, SPR was predicted for all\ntissues and then compared to the measured reference. With the SECT approach,\nthe SPRs of all tissues were predicted with a mean error of (-0.84 $\\pm$ 0.12)%\nand a mean absolute error of (1.27 $\\pm$ 0.12)%. In contrast, the DECT-based\nSPR predictions were overall consistent with the measured reference with a mean\nerror of (-0.02 $\\pm$ 0.15)% and a mean absolute error of (0.10 $\\pm$ 0.15)%.\nThus, in this study, the potential of DECT to decrease range uncertainty could\nbe confirmed in biological tissue.\n", "title": "Experimental verification of stopping-power prediction from single- and dual-energy computed tomography in biological tissues" }
null
null
null
null
true
null
20295
null
Default
null
null
null
{ "abstract": " In their recent book Merchants of Doubt [New York: Bloomsbury, 2010], Naomi
Oreskes and Erik Conway describe the \"tobacco strategy\", which was used by the
tobacco industry to influence policy makers regarding the health risks of
tobacco products. The strategy involved two parts, consisting of (1) promoting
and sharing independent research supporting the industry's preferred position
and (2) funding additional research, but selectively publishing the results. We
introduce a model of the Tobacco Strategy, and use it to argue that both prongs
of the strategy can be extremely effective--even when policy makers rationally
update on all evidence available to them. As we elaborate, this model helps
illustrate the conditions under which the Tobacco Strategy is particularly
successful. In addition, we show how journalists engaged in \"fair\" reporting
can inadvertently mimic the effects of industry on public belief.
", "title": "How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks" }
null
null
null
null
true
null
20296
null
Default
null
null
null
{ "abstract": " In this paper we deal with the well-known nonlinear Lorenz system that
describes the deterministic chaos phenomenon. We consider an interesting
problem with time-varying phenomena in quantum optics. Then we establish from
the motion equations the passage to the Lorenz system. Furthermore, we show
that the reduction to a third-order nonlinear equation can be performed.
Therefore, the obtained differential equation can be analytically solved in
some special cases and transformed to Abel, Duffing, Painlevé and
generalized Emden-Fowler equations. So, a motivating technique that permits
complete and partial integrability of the Lorenz system is presented.
", "title": "A complete and partial integrability technique of the Lorenz system" }
null
null
null
null
true
null
20297
null
Default
null
null
null
{ "abstract": " In this preprint we consider and compare different definitions of a generalized
solution of the Cauchy problem for a 1d-scalar quasilinear equation (conservation
law). We start from the classical approaches going back to I.M. Gelfand, O.A.
Oleinik, S.N. Kruzhkov and move to the modern finite-difference approximation
approaches due to A.A. Shananin and G.M. Henkin. We discuss the conditions
under which these definitions are equivalent.
", "title": "Comparison of the definitions of generalized solution of the Cauchy problem for quasi-linear equation" }
null
null
null
null
true
null
20298
null
Default
null
null
null
{ "abstract": " This paper investigates the computational complexity of sparse label\npropagation which has been proposed recently for processing network structured\ndata. Sparse label propagation amounts to a convex optimization problem and\nmight be considered as an extension of basis pursuit from sparse vectors to\nnetwork structured datasets. Using a standard first-order oracle model, we\ncharacterize the number of iterations for sparse label propagation to achieve a\nprescribed accuracy. In particular, we derive an upper bound on the number of\niterations required to achieve a certain accuracy and show that this upper\nbound is sharp for datasets having a chain structure (e.g., time series).\n", "title": "On The Complexity of Sparse Label Propagation" }
null
null
null
null
true
null
20299
null
Default
null
null
null
{ "abstract": " The Hard Problem of consciousness has been dismissed as an illusion. By\nshowing that computers are capable of experiencing, we show that they are at\nleast rudimentarily conscious with potential to eventually reach\nsuperconsciousness. The main contribution of the paper is a test for confirming\ncertain subjective experiences in a tested agent. We follow with analysis of\nbenefits and problems with conscious machines and implications of such\ncapability on future of computing, machine rights and artificial intelligence\nsafety.\n", "title": "Detecting Qualia in Natural and Artificial Agents" }
null
null
null
null
true
null
20300
null
Default
null
null