Dataset columns (name: type):
  text: null
  inputs: dict (keys: "abstract", "title")
  prediction: null
  prediction_agent: null
  annotation: list
  annotation_agent: null
  multi_label: bool (1 class)
  explanation: null
  id: string (lengths 1 to 5)
  metadata: null
  status: string (2 classes: Validated or Default)
  event_timestamp: null
  metrics: null
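Each row below follows this schema, with "inputs" carrying an abstract/title pair, "annotation" carrying the subject labels (when present), and "status" marking whether the labels were validated. As a minimal sketch of that record shape, assuming rows are handled as plain Python dicts keyed by the column names above, the snippet builds one example record whose values are copied from the first row of the listing and keeps only rows whose labels were validated; the dict literal and the validated_records helper are illustrative and not part of any particular dataset tooling.

from typing import Any, Dict, List

# One record shaped like the rows below; field names come from the schema above,
# the concrete values are copied from the first row, everything else is illustrative.
example_record: Dict[str, Any] = {
    "text": None,
    "inputs": {"abstract": "...", "title": "..."},  # abstract/title pair as in the rows below
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Mathematics", "Statistics"],    # subject labels, or None when unannotated
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "18601",                                  # id is a short string per the schema
    "metadata": None,
    "status": "Validated",                          # one of two classes: "Validated" or "Default"
    "event_timestamp": None,
    "metrics": None,
}

def validated_records(records: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Keep rows that carry a validated, non-empty label set.
    return [r for r in records if r.get("status") == "Validated" and r.get("annotation")]

for record in validated_records([example_record]):
    print(record["id"], record["annotation"], record["inputs"]["title"])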
Records:

id: 18601 | status: Validated | annotation: [ "Mathematics", "Statistics" ] | multi_label: true | all other fields: null
{ "abstract": " We formulate, and present a numerical method for solving, an inverse problem\nfor inferring parameters of a deterministic model from stochastic observational\ndata (quantities of interest). The solution, given as a probability measure, is\nderived using a Bayesian updating approach for measurable maps that finds a\nposterior probability measure, that when propagated through the deterministic\nmodel produces a push-forward measure that exactly matches the observed\nprobability measure on the data. Our approach for finding such posterior\nmeasures, which we call consistent Bayesian inference, is simple and only\nrequires the computation of the push-forward probability measure induced by the\ncombination of a prior probability measure and the deterministic model. We\nestablish existence and uniqueness of observation-consistent posteriors and\npresent stability and error analysis. We also discuss the relationships between\nconsistent Bayesian inference, classical/statistical Bayesian inference, and a\nrecently developed measure-theoretic approach for inference. Finally,\nanalytical and numerical results are presented to highlight certain properties\nof the consistent Bayesian approach and the differences between this approach\nand the two aforementioned alternatives for inference.\n", "title": "A Consistent Bayesian Formulation for Stochastic Inverse Problems Based on Push-forward Measures" }

id: 18602 | status: Validated | annotation: [ "Physics" ] | multi_label: true | all other fields: null
{ "abstract": " We introduce the coherent state mapping ring-polymer molecular dynamics\n(CS-RPMD), a new method that accurately describes electronic non-adiabatic\ndynamics with explicit nuclear quantization. This new approach is derived by\nusing coherent state mapping representation for the electronic degrees of\nfreedom (DOF) and the ring-polymer path-integral representation for the nuclear\nDOF. CS-RPMD Hamiltonian does not contain any inter-bead coupling term in the\nstate-dependent potential, which is a key feature that ensures correct\nelectronic Rabi oscillations. Hamilton's equation of motion is used to sample\ninitial configurations and propagate the trajectories, preserving the\ndistribution with classical symplectic evolution. In the special one-bead limit\nfor mapping variables, CS-RPMD preserves the detailed balance. Numerical tests\nof this method with a two-state model system show a very good agreement with\nexact quantum results over a broad range of electronic couplings.\n", "title": "Coherent State Mapping Ring-Polymer Molecular Dynamics for Non-Adiabatic quantum propagations" }

id: 18603 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Stochastic gradient descent in continuous time (SGDCT) provides a\ncomputationally efficient method for the statistical learning of\ncontinuous-time models, which are widely used in science, engineering, and\nfinance. The SGDCT algorithm follows a (noisy) descent direction along a\ncontinuous stream of data. The parameter updates occur in continuous time and\nsatisfy a stochastic differential equation. This paper analyzes the asymptotic\nconvergence rate of the SGDCT algorithm by proving a central limit theorem\n(CLT) for strongly convex objective functions and, under slightly stronger\nconditions, for non-convex objective functions as well. An L$^p$ convergence\nrate is also proven for the algorithm in the strongly convex case. The\nmathematical analysis lies at the intersection of stochastic analysis and\nstatistical learning.\n", "title": "Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem" }

id: 18604 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We consider a family of $*$-commuting local homeomorphisms on a compact\nspace, and build a compactly aligned product system of Hilbert bimodules (in\nthe sense of Fowler). This product system has a Nica-Toeplitz algebra and a\nCuntz-Pimsner algebra. Both algebras carry a gauge action of a\nhigher-dimensional torus, and there are many possible dynamics obtained by\ncomposing with different embeddings of the real line in this torus. We study\nthe KMS states of these dynamics. For large inverse temperatures, we describe\nthe simplex of KMS states on the Nica-Toeplitz algebra. To study KMS states for\nsmaller inverse temperature, we consider a preferred dynamics for which there\nis a single critical inverse temperature. We find a KMS state on the\nNica-Toeplitz algebra at this critical inverse temperature which factors\nthrough the Cuntz-Pimsner algebra. We then illustrate our results by\nconsidering backward shifts on the infinite-path spaces of a class of\n$k$-graphs.\n", "title": "KMS states on $C^*$-algebras associated to a family of $*$-commuting local homeomorphisms" }

id: 18605 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " This paper proposes a new approach to a novel value network architecture for\nthe game Go, called a multi-labelled (ML) value network. In the ML value\nnetwork, different values (win rates) are trained simultaneously for different\nsettings of komi, a compensation given to balance the initiative of playing\nfirst. The ML value network has three advantages, (a) it outputs values for\ndifferent komi, (b) it supports dynamic komi, and (c) it lowers the mean\nsquared error (MSE). This paper also proposes a new dynamic komi method to\nimprove game-playing strength. This paper also performs experiments to\ndemonstrate the merits of the architecture. First, the MSE of the ML value\nnetwork is generally lower than the value network alone. Second, the program\nbased on the ML value network wins by a rate of 67.6% against the program based\non the value network alone. Third, the program with the proposed dynamic komi\nmethod significantly improves the playing strength over the baseline that does\nnot use dynamic komi, especially for handicap games. To our knowledge, up to\ndate, no handicap games have been played openly by programs using value\nnetworks. This paper provides these programs with a useful approach to playing\nhandicap games.\n", "title": "Multi-Labelled Value Networks for Computer Go" }

id: 18606 | status: Validated | annotation: [ "Statistics" ] | multi_label: true | all other fields: null
{ "abstract": " We study the problem of recovering a structured signal $\\mathbf{x}_0$ from\nhigh-dimensional data $\\mathbf{y}_i=f(\\mathbf{a}_i^T\\mathbf{x}_0)$ for some\nnonlinear (and potentially unknown) link function $f$, when the regressors\n$\\mathbf{a}_i$ are iid Gaussian. Brillinger (1982) showed that ordinary\nleast-squares estimates $\\mathbf{x}_0$ up to a constant of proportionality\n$\\mu_\\ell$, which depends on $f$. Recently, Plan & Vershynin (2015) extended\nthis result to the high-dimensional setting deriving sharp error bounds for the\ngeneralized Lasso. Unfortunately, both least-squares and the Lasso fail to\nrecover $\\mathbf{x}_0$ when $\\mu_\\ell=0$. For example, this includes all even\nlink functions. We resolve this issue by proposing and analyzing an alternative\nconvex recovery method. In a nutshell, our method treats such link functions as\nif they were linear in a lifted space of higher-dimension. Interestingly, our\nerror analysis captures the effect of both the nonlinearity and the problem's\ngeometry in a few simple summary parameters.\n", "title": "Lifting high-dimensional nonlinear models with Gaussian regressors" }

id: 18607 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We describe a new cognitive ability, i.e., functional conceptual substratum,\nused implicitly in the generation of several mathematical proofs and\ndefinitions. Furthermore, we present an initial (first-order) formalization of\nthis mechanism together with its relation to classic notions like primitive\npositive definability and Diophantiveness. Additionally, we analyze the\nsemantic variability of functional conceptual substratum when small syntactic\nmodifications are done. Finally, we describe mathematically natural inference\nrules for definitions inspired by functional conceptual substratum and show\nthat they are sound and complete w.r.t. standard calculi.\n", "title": "Functional Conceptual Substratum as a New Cognitive Mechanism for Mathematical Creation" }

id: 18608 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Let $M$ be a compact connected smooth Riemannian $n$-manifold with boundary.\nWe combine Gromov's amenable localization technique with the Poincaré\nduality to study the traversally generic geodesic flows on $SM$, the space of\nthe spherical tangent bundle. Such flows generate stratifications of $SM$,\ngoverned by rich universal combinatorics. The stratification reflects the ways\nin which the geodesic flow trajectories interact with the boundary $\\d(SM)$.\nSpecifically, we get lower estimates of the numbers of connected components of\nthese flow-generated strata of any given codimension $k$. These universal\nbounds are expressed in terms of the normed homology $H_k(M; \\R)$ and $H_k(DM;\n\\R)$, where $DM = M\\cup_{\\d M} M$ denotes the double of $M$. The norms here are\nthe Gromov simplicial semi-norms in homology. The more complex the metric on\n$M$ is, the more numerous the strata of $SM$ and $S(DM)$ are. So one may regard\nour estimates as analogues of the Morse inequalities for the geodesics on\nmanifolds with boundary.\nIt turns out that some close relatives of the normed homology spaces form\nobstructions to the existence of globally $k$-convex traversally generic\nmetrics on $M$.\n", "title": "Applying Gromov's Amenable Localization to Geodesic Flows" }

id: 18609 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We present a formal measure of argument strength, which combines the ideas\nthat conclusions of strong arguments are (i) highly probable and (ii) their\nuncertainty is relatively precise. Likewise, arguments are weak when their\nconclusion probability is low or when it is highly imprecise. We show how the\nproposed measure provides a new model of the Ellsberg paradox. Moreover, we\nfurther substantiate the psychological plausibility of our approach by an\nexperiment (N = 60). The data show that the proposed measure predicts human\ninferences in the original Ellsberg task and in corresponding argument strength\ntasks. Finally, we report qualitative data taken from structured interviews on\nfolk psychological conceptions on what argument strength means.\n", "title": "Modeling the Ellsberg Paradox by Argument Strength" }

id: 18610 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The property 4 in Proposition 2.3 from the paper \"Some remarks on Davie's\nuniqueness theorem\" is replaced with a weaker assertion which is sufficient for\nthe proof of the main results. Technical details and improvements are given.\n", "title": "Correction to the paper \"Some remarks on Davie's uniqueness theorem\"" }

id: 18611 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The Rasch model is widely used for item response analysis in applications\nranging from recommender systems to psychology, education, and finance. While a\nnumber of estimators have been proposed for the Rasch model over the last\ndecades, the available analytical performance guarantees are mostly asymptotic.\nThis paper provides a framework that relies on a novel linear minimum\nmean-squared error (L-MMSE) estimator which enables an exact, nonasymptotic,\nand closed-form analysis of the parameter estimation error under the Rasch\nmodel. The proposed framework provides guidelines on the number of items and\nresponses required to attain low estimation errors in tests or surveys. We\nfurthermore demonstrate its efficacy on a number of real-world collaborative\nfiltering datasets, which reveals that the proposed L-MMSE estimator performs\non par with state-of-the-art nonlinear estimators in terms of predictive\nperformance.\n", "title": "An Estimation and Analysis Framework for the Rasch Model" }

id: 18612 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " One of the most directly observable features of a transiting multi-planet\nsystem is their size-ordering when ranked in orbital separation. Kepler has\nrevealed a rich diversity of outcomes, from perfectly ordered systems, like\nKepler-80, to ostensibly disordered systems, like Kepler-20. Under the\nhypothesis that systems are born via preferred formation pathways, one might\nreasonably expect non-random size-orderings reflecting these processes.\nHowever, subsequent dynamical evolution, often chaotic and turbulent in nature,\nmay erode this information and so here we ask - do systems remember how they\nformed? To address this, we devise a model to define the entropy of a planetary\nsystem's size-ordering, by first comparing differences between neighboring\nplanets and then extending to accommodate differences across the chain. We\nderive closed-form solutions for many of the micro state occupancies and\nprovide public code with look-up tables to compute entropy for up to ten-planet\nsystems. All three proposed entropy definitions exhibit the expected property\nthat their credible interval increases with respect to a proxy for time. We\nfind that the observed Kepler multis display a highly significant deficit in\nentropy compared to a randomly generated population. Incorporating a filter for\nsystems deemed likely to be dynamically packed, we show that this result is\nrobust against the possibility of missing planets too. Put together, our work\nestablishes that Kepler systems do indeed remember something of their younger\nyears and highlights the value of information theory for exoplanetary science.\n", "title": "Do planets remember how they formed?" }

id: 18613 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We investigate the atmospheric dynamics of terrestrial planets in synchronous\nrotation within the habitable zone of low-mass stars using the Community\nAtmosphere Model (CAM). The surface temperature contrast between day and night\nhemispheres decreases with an increase in incident stellar flux, which is\nopposite the trend seen on gas giants. We define three dynamical regimes in\nterms of the equatorial Rossby deformation radius and the Rhines length. The\nslow rotation regime has a mean zonal circulation that spans from day to night\nside, with both the Rossby deformation radius and the Rhines length exceeding\nplanetary radius, which occurs for planets around stars with effective\ntemperatures of 3300 K to 4500 K (rotation period > 20 days). Rapid rotators\nhave a mean zonal circulation that partially spans a hemisphere and with banded\ncloud formation beneath the substellar point, with the Rossby deformation\nradius is less than planetary radius, which occurs for planets orbiting stars\nwith effective temperatures of less than 3000 K (rotation period < 5 days). In\nbetween is the Rhines rotation regime, which retains a thermally-direct\ncirculation from day to night side but also features midlatitude\nturbulence-driven zonal jets. Rhines rotators occur for planets around stars in\nthe range of 3000 K to 3300 K (rotation period ~ 5 to 20 days), where the\nRhines length is greater than planetary radius but the Rossby deformation\nradius is less than planetary radius. The dynamical state can be\nobservationally inferred from comparing the morphology of the thermal emission\nphase curves of synchronously rotating planets.\n", "title": "Demarcating circulation regimes of synchronously rotating terrestrial planets within the habitable zone" }

id: 18614 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " As the focus of applied research in topological insulators (TI) evolves, the\nneed to synthesize large-area TI films for practical device applications takes\ncenter stage. However, constructing scalable and adaptable processes for\nhigh-quality TI compounds remains a challenge. To this end, a versatile van der\nWaals epitaxy (vdWE) process for custom-feature Bismuth Telluro-Sulfide TI\ngrowth and fabrication is presented, achieved through selective-area\nfluorination and modification of surface free-energy on mica. The TI features\ngrow epitaxially in large single-crystal trigonal domains, exhibiting armchair\nor zigzag crystalline edges highly oriented with the underlying mica lattice\nand only two preferred domain orientations mirrored at $180^\\circ$. As-grown\nfeature thickness dependence on lateral dimensions and denuded zones at\nboundaries are observed, as explained by a semi-empirical two-species surface\nmigration model with robust estimates of growth parameters and elucidating the\nrole of selective-area surface modification. Topological surface states\ncontribute up to 60% of device conductance at room-temperature, indicating\nexcellent electronic quality. High-yield microfabrication and the adaptable\nvdWE growth mechanism with readily alterable precursor and substrate\ncombinations, lend the process versatility to realize crystalline TI synthesis\nin arbitrary shapes and arrays suitable for facile integration with processes\nranging from rapid prototyping to scalable manufacturing.\n", "title": "Versatile Large-Area Custom-Feature van der Waals Epitaxy of Topological Insulators" }

id: 18615 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We prove that for $1<p\\le q<\\infty$, $qp\\geq {p'}^2$ or $p'q'\\geq q^2$,\n$\\frac{1}{p}+\\frac{1}{p'}=\\frac{1}{q}+\\frac{1}{q'}=1$, $$\\|\\omega\nP_\\alpha(f)\\|_{L^p(\\mathcal{H},y^{\\alpha+(2+\\alpha)(\\frac{q}{p}-1)}dxdy)}\\le\nC_{p,q,\\alpha}[\\omega]_{B_{p,q,\\alpha}}^{(\\frac{1}{p'}+\\frac{1}{q})\\max\\{1,\\frac{p'}{q}\\}}\\|\\omega\nf\\|_{L^p(\\mathcal{H},y^{\\alpha}dxdy)}$$ where $P_\\alpha$ is the weighted\nBergman projection of the upper-half plane $\\mathcal{H}$, and\n$$[\\omega]_{B_{p,q,\\alpha}}:=\\sup_{I\\subset\n\\mathbb{R}}\\left(\\frac{1}{|I|^{2+\\alpha}}\\int_{Q_I}\\omega^{q}dV_\\alpha\\right)\\left(\\frac{1}{|I|^{2+\\alpha}}\\int_{Q_I}\\omega^{-p'}dV_\\alpha\\right)^{\\frac{q}{p'}},$$\nwith $Q_I=\\{z=x+iy\\in \\mathbb{C}: x\\in I, 0<y<|I|\\}$.\n", "title": "Sharp off-diagonal weighted norm estimates for the Bergman projection" }

id: 18616 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The Low Frequency Array (LOFAR) radio telescope is an international aperture\nsynthesis radio telescope used to study the Universe at low frequencies. One of\nthe goals of the LOFAR telescope is to conduct deep wide-field surveys. Here we\nwill discuss a framework for the processing of the LOFAR Two Meter Sky Survey\n(LoTSS). This survey will produce close to 50 PB of data within five years.\nThese data rates require processing at locations with high-speed access to the\narchived data. To complete the LoTSS project, the processing software needs to\nbe made portable and moved to clusters with a high bandwidth connection to the\ndata archive. This work presents a framework that makes the LOFAR software\nportable, and is used to scale out LOFAR data reduction. Previous work was\nsuccessful in preprocessing LOFAR data on a cluster of isolated nodes. This\nframework builds upon it and is currently operational. It is designed to be\nportable, scalable, automated and general. This paper describes its design and\nhigh level operation and the initial results processing LoTSS data.\n", "title": "An Automated Scalable Framework for Distributing Radio Astronomy Processing Across Clusters and Clouds" }

id: 18617 | status: Validated | annotation: [ "Computer Science", "Statistics" ] | multi_label: true | all other fields: null
{ "abstract": " There is a growing interest in learning data representations that work well\nfor many different types of problems and data. In this paper, we look in\nparticular at the task of learning a single visual representation that can be\nsuccessfully utilized in the analysis of very different types of images, from\ndog breeds to stop signs and digits. Inspired by recent work on learning\nnetworks that predict the parameters of another, we develop a tunable deep\nnetwork architecture that, by means of adapter residual modules, can be steered\non the fly to diverse visual domains. Our method achieves a high degree of\nparameter sharing while maintaining or even improving the accuracy of\ndomain-specific representations. We also introduce the Visual Decathlon\nChallenge, a benchmark that evaluates the ability of representations to capture\nsimultaneously ten very different visual domains and measures their ability to\nrecognize well uniformly.\n", "title": "Learning multiple visual domains with residual adapters" }

id: 18618 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We consider the nonunitary quantum dynamics of neutral massless scalar\nparticles used to model photons around a massive gravitational lens. The\ngravitational interaction between the lensing mass and asymptotically free\nparticles is described by their second-quantized scattering wavefunctions.\nRemarkably, the zero-point spacetime fluctuations can induce significant\ndecoherence of the scattered states with spontaneous emission of gravitons,\nthereby reducing the particles' coherence as well as energy. This new effect\nsuggests that, when photon polarizations are negligible, such quantum gravity\nphenomena could lead to measurable anomalous redshift of recently studied\nastrophysical lasers through a gravitational lens in the range of black holes\nand galaxy clusters.\n", "title": "Cosmic quantum optical probing of quantum gravity through a gravitational lens" }

id: 18619 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We propose a slightly revised Miller-Hagberg (MH) algorithm that efficiently\ngenerates a random network from a given expected degree sequence. The revision\nwas to replace the approximated edge probability between a pair of nodes with a\ncombinatorically calculated edge probability that better captures the\nlikelihood of edge presence especially where edges are dense. The computational\ncomplexity of this combinatorial MH algorithm is still in the same order as the\noriginal one. We evaluated the proposed algorithm through several numerical\nexperiments. The results demonstrated that the proposed algorithm was\nparticularly good at accurately representing high-degree nodes in dense,\nheterogeneous networks. This algorithm may be a useful alternative of other\nmore established network randomization methods, given that the data are\nincreasingly becoming larger and denser in today's network science research.\n", "title": "Combinatorial Miller-Hagberg Algorithm for Randomization of Dense Networks" }

id: 18620 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Our understanding of topological insulators is based on an underlying\ncrystalline lattice where the local electronic degrees of freedom at different\nsites hybridize with each other in ways that produce nontrivial band topology,\nand the search for material systems to realize such phases have been strongly\ninfluenced by this. Here we theoretically demonstrate topological insulators in\nsystems with a random distribution of sites in space, i. e., a random lattice.\nThis is achieved by constructing hopping models on random lattices whose ground\nstates possess nontrivial topological nature (characterized e. g., by Bott\nindices) that manifests as quantized conductances in systems with a boundary.\nBy tuning parameters such as the density of sites (for a given range of fermion\nhopping), we can achieve transitions from trivial to topological phases. We\ndiscuss interesting features of these transitions. In two spatial dimensions,\nwe show this for all five symmetry classes (A, AII, D, DIII and C) that are\nknown to host nontrivial topology in crystalline systems. We expect similar\nphysics to be realizable in any dimension and provide an explicit example of a\n$Z_2$ topological insulator on a random lattice in three spatial dimensions.\nOur study not only provides a deeper understanding of the topological phases of\nnon-interacting fermions, but also suggests new directions in the pursuit of\nthe laboratory realization of topological quantum matter.\n", "title": "Topological Insulators in Random Lattices" }

id: 18621 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Most reinforcement learning algorithms are inefficient for learning multiple\ntasks in complex robotic systems, where different tasks share a set of actions.\nIn such environments a compound policy may be learnt with shared neural network\nparameters, which performs multiple tasks concurrently. However such compound\npolicy may get biased towards a task or the gradients from different tasks\nnegate each other, making the learning unstable and sometimes less data\nefficient. In this paper, we propose a new approach for simultaneous training\nof multiple tasks sharing a set of common actions in continuous action spaces,\nwhich we call as DiGrad (Differential Policy Gradient). The proposed framework\nis based on differential policy gradients and can accommodate multi-task\nlearning in a single actor-critic network. We also propose a simple heuristic\nin the differential policy gradient update to further improve the learning. The\nproposed architecture was tested on 8 link planar manipulator and 27 degrees of\nfreedom(DoF) Humanoid for learning multi-goal reachability tasks for 3 and 2\nend effectors respectively. We show that our approach supports efficient\nmulti-task learning in complex robotic systems, outperforming related methods\nin continuous action spaces.\n", "title": "DiGrad: Multi-Task Reinforcement Learning with Shared Actions" }

id: 18622 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Quantum technologies can be presented to the public with or without\nintroducing a strange trait of quantum theory responsible for their\nnon-classical efficiency. Traditionally the message was centered on the\nsuperposition principle, while entanglement and properties such as\ncontextuality have been gaining ground recently. A less theoretical approach is\nfocused on simple protocols that enable technological applications. It results\nin a pragmatic narrative built with the help of the resource paradigm and\nprinciple-based reconstructions. I discuss the advantages and weaknesses of\nthese methods. To illustrate the importance of new metaphors beyond the\nSchrödinger cat, I briefly describe a non-mathematical narrative about\nentanglement that conveys an idea of some of its unusual properties. If quantum\ntechnologists are to succeed in building trust in their work, they ought to\nprovoke an aesthetic perception in the public commensurable with the\nmathematical beauty of quantum theory experienced by the physicist. The power\nof the narrative method lies in its capacity to do so.\n", "title": "Narratives of Quantum Theory in the Age of Quantum Technologies" }

id: 18623 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The question of continuous-versus-discrete information representation in the\nbrain is a fundamental yet unresolved physiological question. Historically,\nmost analyses assume a continuous representation without considering the\nalternative possibility of a discrete representation. Our work explores the\nplausibility of both representations, and answers the question from a\ncommunications engineering perspective. Drawing on the well-established\nShannon's communications theory, we posit that information in the brain is\nrepresented in a discrete form. Using a computer simulation, we show that\ninformation cannot be communicated reliably between neurons using a continuous\nrepresentation, due to the presence of noise; neural information has to be in a\ndiscrete form. In addition, we designed 3 (human) behavioral experiments on\nprobability estimation and analyzed the data using a novel discrete (quantized)\nmodel of probability. Under a discrete model of probability, two distinct\nprobabilities (say, 0.57 and 0.58) are treated indifferently. We found that\ndata from all participants were better fit to discrete models than continuous\nones. Furthermore, we re-analyzed the data from a published (human) behavioral\nstudy on intertemporal choice using a novel discrete (quantized) model of\nintertemporal choice. Under such a model, two distinct time delays (say, 16\ndays and 17 days) are treated indifferently. We found corroborating results,\nshowing that data from all participants were better fit to discrete models than\ncontinuous ones. In summary, all results reported here support our discrete\nhypothesis of information representation in the brain, which signifies a major\ndemarcation from the current understanding of the brain's physiology.\n", "title": "Is Information in the Brain Represented in Continuous or Discrete Form?" }

id: 18624 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Background-Foreground classification is a fundamental well-studied problem in\ncomputer vision. Due to the pixel-wise nature of modeling and processing in the\nalgorithm, it is usually difficult to satisfy real-time constraints. There is a\ntrade-off between the speed (because of model complexity) and accuracy.\nInspired by the rejection cascade of Viola-Jones classifier, we decompose the\nGaussian Mixture Model (GMM) into an adaptive cascade of classifiers. This way\nwe achieve a good improvement in speed without compensating for accuracy. In\nthe training phase, we learn multiple KDEs for different durations to be used\nas strong prior distribution and detect probable oscillating pixels which\nusually results in misclassifications. We propose a confidence measure for the\nclassifier based on temporal consistency and the prior distribution. The\nconfidence measure thus derived is used to adapt the learning rate and the\nthresholds of the model, to improve accuracy. The confidence measure is also\nemployed to perform temporal and spatial sampling in a principled way. We\ndemonstrate a speed-up factor of 5x to 10x and 17 percent average improvement\nin accuracy over several standard videos.\n", "title": "Real-Time Background Subtraction Using Adaptive Sampling and Cascade of Gaussians" }

id: 18625 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We investigate the problem of computing a nested expectation of the form\n$\\mathbb{P}[\\mathbb{E}[X|Y]\n\\!\\geq\\!0]\\!=\\!\\mathbb{E}[\\textrm{H}(\\mathbb{E}[X|Y])]$ where $\\textrm{H}$ is\nthe Heaviside function. This nested expectation appears, for example, when\nestimating the probability of a large loss from a financial portfolio. We\npresent a method that combines the idea of using Multilevel Monte Carlo (MLMC)\nfor nested expectations with the idea of adaptively selecting the number of\nsamples in the approximation of the inner expectation, as proposed by (Broadie\net al., 2011). We propose and analyse an algorithm that adaptively selects the\nnumber of inner samples on each MLMC level and prove that the resulting MLMC\nmethod with adaptive sampling has an $\\mathcal{O}\\left(\n\\varepsilon^{-2}|\\log\\varepsilon|^2 \\right)$ complexity to achieve a root\nmean-squared error $\\varepsilon$. The theoretical analysis is verified by\nnumerical experiments on a simple model problem. We also present a stochastic\nroot-finding algorithm that, combined with our adaptive methods, can be used to\ncompute other risk measures such as Value-at-Risk (VaR) and Conditional\nValue-at-Risk (CVaR), with the latter being achieved with\n$\\mathcal{O}\\left(\\varepsilon^{-2}\\right)$ complexity.\n", "title": "Multilevel nested simulation for efficient risk estimation" }

id: 18626 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Based on a version of Dudley's Wiener process on the mass shell in the\nmomentum Minkowski space of a massive point particle, a model of a relativistic\nOrnstein--Uhlenbeck process is constructed by addition of a specific drift\nterm. The invariant distribution of this momentum process as well as other\nassociated processes are computed.\n", "title": "Construction of a relativistic Ornstein-Uhlenbeck process" }

id: 18627 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Path planning for autonomous vehicles in arbitrary environments requires a\nguarantee of safety, but this can be impractical to ensure in real-time when\nthe vehicle is described with a high-fidelity model. To address this problem,\nthis paper develops a method to perform trajectory design by considering a\nlow-fidelity model that accounts for model mismatch. The presented method\nbegins by computing a conservative Forward Reachable Set (FRS) of a\nhigh-fidelity model's trajectories produced when tracking trajectories of a\nlow-fidelity model over a finite time horizon. At runtime, the vehicle\nintersects this FRS with obstacles in the environment to eliminate trajectories\nthat can lead to a collision, then selects an optimal plan from the remaining\nsafe set. By bounding the time for this set intersection and subsequent path\nselection, this paper proves a lower bound for the FRS time horizon and sensing\nhorizon to guarantee safety. This method is demonstrated in simulation using a\nkinematic Dubin's car as the low-fidelity model and a dynamic unicycle as the\nhigh-fidelity model.\n", "title": "Safe Trajectory Synthesis for Autonomous Driving in Unforeseen Environments" }

id: 18628 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Training deep networks is expensive and time-consuming with the training\nperiod increasing with data size and growth in model parameters. In this paper,\nwe provide a framework for distributed training of deep networks over a cluster\nof CPUs in Apache Spark. The framework implements both Data Parallelism and\nModel Parallelism making it suitable to use for deep networks which require\nhuge training data and model parameters which are too big to fit into the\nmemory of a single machine. It can be scaled easily over a cluster of cheap\ncommodity hardware to attain significant speedup and obtain better results\nmaking it quite economical as compared to farm of GPUs and supercomputers. We\nhave proposed a new algorithm for training of deep networks for the case when\nthe network is partitioned across the machines (Model Parallelism) along with\ndetailed cost analysis and proof of convergence of the same. We have developed\nimplementations for Fully-Connected Feedforward Networks, Convolutional Neural\nNetworks, Recurrent Neural Networks and Long Short-Term Memory architectures.\nWe present the results of extensive simulations demonstrating the speedup and\naccuracy obtained by our framework for different sizes of the data and model\nparameters with variation in the number of worker cores/partitions; thereby\nshowing that our proposed framework can achieve significant speedup (upto 11X\nfor CNN) and is also quite scalable.\n", "title": "A Data and Model-Parallel, Distributed and Scalable Framework for Training of Deep Networks in Apache Spark" }

id: 18629 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The electroencephalogram (EEG) provides a non-invasive, minimally\nrestrictive, and relatively low cost measure of mesoscale brain dynamics with\nhigh temporal resolution. Although signals recorded in parallel by multiple,\nnear-adjacent EEG scalp electrode channels are highly-correlated and combine\nsignals from many different sources, biological and non-biological, independent\ncomponent analysis (ICA) has been shown to isolate the various source generator\nprocesses underlying those recordings. Independent components (IC) found by ICA\ndecomposition can be manually inspected, selected, and interpreted, but doing\nso requires both time and practice as ICs have no particular order or intrinsic\ninterpretations and therefore require further study of their properties.\nAlternatively, sufficiently-accurate automated IC classifiers can be used to\nclassify ICs into broad source categories, speeding the analysis of EEG studies\nwith many subjects and enabling the use of ICA decomposition in near-real-time\napplications. While many such classifiers have been proposed recently, this\nwork presents the ICLabel project comprised of (1) an IC dataset containing\nspatiotemporal measures for over 200,000 ICs from more than 6,000 EEG\nrecordings, (2) a website for collecting crowdsourced IC labels and educating\nEEG researchers and practitioners about IC interpretation, and (3) the\nautomated ICLabel classifier. The classifier improves upon existing methods in\ntwo ways: by improving the accuracy of the computed label estimates and by\nenhancing its computational efficiency. The ICLabel classifier outperforms or\nperforms comparably to the previous best publicly available method for all\nmeasured IC categories while computing those labels ten times faster than that\nclassifier as shown in a rigorous comparison against all other publicly\navailable EEG IC classifiers.\n", "title": "ICLabel: An automated electroencephalographic independent component classifier, dataset, and website" }

id: 18630 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We derive new variance formulas for inference on a general class of estimands\nof causal average treatment effects in a Randomized Control Trial (RCT). We\ngeneralize Robins (1988) and show that when the estimand of interest is the\nSample Average Treatment Effect of the Treated (SATT or SATC for controls), a\nconsistent variance estimator exists. Although these estimands are equal to the\nSample Average Treatment Effect (SATE) in expectation, potentially large\ndifferences in both accuracy and coverage can occur by the change of estimand,\neven asymptotically. Inference on the SATE, even using a conservative\nconfidence interval, provides incorrect coverage of the SATT or SATC. We derive\nthe variance and limiting distribution of a new and general class of\nestimands---any mixing between SATT and SATC---for which the SATE is a specific\ncase. We demonstrate the applicability of the new theoretical results using\nMonte-Carlo simulations and an empirical application with hundreds of online\nexperiments with an average sample size of approximately one hundred million\nobservations per experiment. An R package, estCI, that implements all the\nproposed estimation procedures is available.\n", "title": "Inference on a New Class of Sample Average Treatment Effects" }

id: 18631 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In this paper we propose a method to solve the Kadomtsev--Petviashvili\nequation based on splitting the linear part of the equation from the nonlinear\npart. The linear part is treated using FFTs, while the nonlinear part is\napproximated using a semi-Lagrangian discontinuous Galerkin approach of\narbitrary order.\nWe demonstrate the efficiency and accuracy of the numerical method by\nproviding a range of numerical simulations. In particular, we find that our\napproach can outperform the numerical methods considered in the literature by\nup to a factor of five. Although we focus on the Kadomtsev--Petviashvili\nequation in this paper, the proposed numerical scheme can be extended to a\nrange of related models as well.\n", "title": "A split step Fourier/discontinuous Galerkin scheme for the Kadomtsev--Petviashvili equation" }

id: 18632 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The key idea of current deep learning methods for dense prediction is to\napply a model on a regular patch centered on each pixel to make pixel-wise\npredictions. These methods are limited in the sense that the patches are\ndetermined by network architecture instead of learned from data. In this work,\nwe propose the dense transformer networks, which can learn the shapes and sizes\nof patches from data. The dense transformer networks employ an encoder-decoder\narchitecture, and a pair of dense transformer modules are inserted into each of\nthe encoder and decoder paths. The novelty of this work is that we provide\ntechnical solutions for learning the shapes and sizes of patches from data and\nefficiently restoring the spatial correspondence required for dense prediction.\nThe proposed dense transformer modules are differentiable, thus the entire\nnetwork can be trained. We apply the proposed networks on natural and\nbiological image segmentation tasks and show superior performance is achieved\nin comparison to baseline methods.\n", "title": "Dense Transformer Networks" }

id: 18633 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Halide perovskite (HaP) semiconductors are revolutionizing photovoltaic (PV)\nsolar energy conversion by showing remarkable performance of solar cells made\nwith esp. tetragonal methylammonium lead tri-iodide (MAPbI3). In particular,\nthe low voltage loss of these cells implies a remarkably low recombination rate\nof photogenerated carriers. It was suggested that low recombination can be due\nto spatial separation of electrons and holes, a possibility if MAPbI3 is a\nsemiconducting ferroelectric, which, however, requires clear experimental\nevidence. As a first step we show that, in operando, MAPbI3 (unlike MAPbBr3) is\npyroelectric, which implies it can be ferroelectric. The next step, proving it\nis (not) ferroelectric, is challenging, because of the material's relatively\nhigh electrical conductance (a consequence of an optical band gap suitable for\nPV conversion!) and low stability under high applied bias voltage. This\nexcludes normal measurements of a ferroelectric hysteresis loop to prove\nferroelectricity's hallmark of switchable polarization. By adopting an approach\nsuitable for electrically leaky materials such as MAPbI3, we show here ferroelectric\nhysteresis from well-characterized single crystals at low temperature (still\nwithin the tetragonal phase, which is the room temperature stable phase). Using\nchemical etching, we also image polar domains, the structural fingerprint for\nferroelectricity, periodically stacked along the polar axis of the crystal,\nwhich, as predicted by theory, scale with the overall crystal size. We also\nsucceeded in detecting clear second-harmonic generation, direct evidence for\nthe material's non-centrosymmetry. We note that the material's ferroelectric\nnature can, but does not obviously need to, be important in a PV cell operating\naround room temperature.\n", "title": "Tetragonal CH3NH3PbI3 Is Ferroelectric" }

id: 18634 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Data cube materialization is a classical database operator introduced in Gray\net al.~(Data Mining and Knowledge Discovery, Vol.~1), which is critical for\nmany analysis tasks. Nandi et al.~(Transactions on Knowledge and Data\nEngineering, Vol.~6) first studied cube materialization for large scale\ndatasets using the MapReduce framework, and proposed a sophisticated\nmodification of a simple broadcast algorithm to handle a dataset with a 216GB\ncube size within 25 minutes with 2k machines in 2012. We take a different\napproach, and propose a simple MapReduce algorithm which (1) minimizes the\ntotal number of copy-add operations, (2) leverages locality of computation, and\n(3) balances work evenly across machines. As a result, the algorithm shows\nexcellent performance, and materialized a real dataset with a cube size of\n35.0G tuples and 1.75T bytes in 54 minutes, with 0.4k machines in 2014.\n", "title": "A Simple and Efficient MapReduce Algorithm for Data Cube Materialization" }

id: 18635 | status: Validated | annotation: [ "Computer Science", "Statistics" ] | multi_label: true | all other fields: null
{ "abstract": " The success of deep convolutional architectures is often attributed in part\nto their ability to learn multiscale and invariant representations of natural\nsignals. However, a precise study of these properties and how they affect\nlearning guarantees is still missing. In this paper, we consider deep\nconvolutional representations of signals; we study their invariance to\ntranslations and to more general groups of transformations, their stability to\nthe action of diffeomorphisms, and their ability to preserve signal\ninformation. This analysis is carried by introducing a multilayer kernel based\non convolutional kernel networks and by studying the geometry induced by the\nkernel mapping. We then characterize the corresponding reproducing kernel\nHilbert space (RKHS), showing that it contains a large class of convolutional\nneural networks with homogeneous activation functions. This analysis allows us\nto separate data representation from learning, and to provide a canonical\nmeasure of model complexity, the RKHS norm, which controls both stability and\ngeneralization of any learned model. In addition to models in the constructed\nRKHS, our stability analysis also applies to convolutional networks with\ngeneric activations such as rectified linear units, and we discuss its\nrelationship with recent generalization bounds based on spectral norms.\n", "title": "Group Invariance, Stability to Deformations, and Complexity of Deep Convolutional Representations" }

id: 18636 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In this paper, we aim at the completion problem of high order tensor data\nwith missing entries. The existing tensor factorization and completion methods\nsuffer from the curse of dimensionality when the order of tensor N>>3. To\novercome this problem, we propose an efficient algorithm called TT-WOPT\n(Tensor-train Weighted OPTimization) to find the latent core tensors of tensor\ndata and recover the missing entries. Tensor-train decomposition, which has the\npowerful representation ability with linear scalability to tensor order, is\nemployed in our algorithm. The experimental results on synthetic data and\nnatural image completion demonstrate that our method significantly outperforms\nthe other related methods. Especially when the missing rate of data is very\nhigh, e.g., 85% to 99%, our algorithm can achieve much better performance than\nother state-of-the-art algorithms.\n", "title": "Completion of High Order Tensor Data with Missing Entries via Tensor-train Decomposition" }

id: 18637 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " In this paper we introduce a family of Deligne--Lusztig type varieties\nattached to connected reductive groups over quotients of discrete valuation\nrings, naturally generalising the higher Deligne--Lusztig varieties and some\nconstructions related to the algebraisation problem raised by Lusztig. We\nestablish the inner product formula between the representations associated to\nthese varieties and the higher Deligne--Lusztig representations.\n", "title": "On the inner products of some Deligne--Lusztig type representations" }

id: 18638 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " For an embedded Fano manifold $X$, we introduce a new invariant $S_X$ related\nto the dimension of covering linear spaces. The aim of this paper is to\nclassify Fano manifolds $X$ which have large $S_X$.\n", "title": "An invariant for embedded Fano manifolds covered by linear spaces" }

id: 18639 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " A Fog Radio Access Network (F-RAN) is a cellular wireless system that enables\ncontent delivery via the caching of popular content at edge nodes (ENs) and\ncloud processing. The existing information-theoretic analyses of F-RAN systems,\nand special cases thereof, make the assumption that all requests should be\nguaranteed the same delivery latency, which results in identical latency for\nall files in the content library. In practice, however, contents may have\nheterogeneous timeliness requirements depending on the applications that\noperate on them. Given per-EN cache capacity constraint, there exists a\nfundamental trade-off among the delivery latencies of different users'\nrequests, since contents that are allocated more cache space generally enjoy\nlower delivery latencies. For the case with two ENs and two users, the optimal\nlatency trade-off is characterized in the high-SNR regime in terms of the\nNormalized Delivery Time (NDT) metric. The main results are illustrated by\nnumerical examples.\n", "title": "Delivery Latency Trade-Offs of Heterogeneous Contents in Fog Radio Access Networks" }

id: 18640 | status: Validated | annotation: [ "Computer Science" ] | multi_label: true | all other fields: null
{ "abstract": " Intel software guard extensions (SGX) aims to provide an isolated execution\nenvironment, known as an enclave, for a user-level process to maximize its\nconfidentiality and integrity. In this paper, we study how uninitialized data\ninside a secure enclave can be leaked via structure padding. We found that,\nduring ECALL and OCALL, proxy functions that are automatically generated by the\nIntel SGX Software Development Kit (SDK) fully copy structure variables from an\nenclave to the normal memory to return the result of an ECALL function and to\npass input parameters to an OCALL function. If the structure variables contain\npadding bytes, uninitialized enclave memory, which might contain confidential\ndata like a private key, can be copied to the normal memory through the padding\nbytes. We also consider potential countermeasures against these security\nthreats.\n", "title": "Leaking Uninitialized Secure Enclave Memory via Structure Padding (Extended Abstract)" }

id: 18641 | status: Validated | annotation: [ "Quantitative Biology" ] | multi_label: true | all other fields: null
{ "abstract": " Volume transmission is an important neural communication pathway in which\nneurons in one brain region influence the neurotransmitter concentration in the\nextracellular space of a distant brain region. In this paper, we apply\nasymptotic analysis to a stochastic partial differential equation model of\nvolume transmission to calculate the neurotransmitter concentration in the\nextracellular space. Our model involves the diffusion equation in a\nthree-dimensional domain with interior holes that randomly switch between being\neither sources or sinks. These holes model nerve varicosities that alternate\nbetween releasing and absorbing neurotransmitter, according to when they fire\naction potentials. In the case that the holes are small, we compute\nanalytically the first two nonzero terms in an asymptotic expansion of the\naverage neurotransmitter concentration. The first term shows that the\nconcentration is spatially constant to leading order and that this constant is\nindependent of many details in the problem. Specifically, this constant first\nterm is independent of the number and location of nerve varicosities, neural\nfiring correlations, and the size and geometry of the extracellular space. The\nsecond term shows how these factors affect the concentration at second order.\nInterestingly, the second term is also spatially constant under some mild\nassumptions. We verify our asymptotic results by high-order numerical\nsimulation using radial basis function-generated finite differences.\n", "title": "Asymptotic and numerical analysis of a stochastic PDE model of volume transmission" }

id: 18642 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " The mixture of factor analyzers model was first introduced over 20 years ago\nand, in the meantime, has been extended to several non-Gaussian analogues. In\ngeneral, these analogues account for situations with heavy tailed and/or skewed\nclusters. An approach is introduced that unifies many of these approaches into\none very general model: the mixture of hidden truncation hyperbolic factor\nanalyzers (MHTHFA) model. In the process of doing this, a hidden truncation\nhyperbolic factor analysis model is also introduced. The MHTHFA model is\nillustrated for clustering as well as semi-supervised classification using two\nreal datasets.\n", "title": "Mixtures of Hidden Truncation Hyperbolic Factor Analyzers" }

id: 18643 | status: Validated | annotation: [ "Mathematics" ] | multi_label: true | all other fields: null
{ "abstract": " This is a semi--expository update and rewrite of my 1974 AMS Memoir\ndescribing Plancherel formulae and partial Dolbeault cohomology realizations\nfor standard tempered representations for general real reductive Lie groups.\nEven after so many years, much of that Memoir is up to date, but of course\nthere have been a number of refinements, advances and new developments, most of\nwhich have applied to smaller classes of real reductive Lie groups. Here we\nrewrite that AMS Memoir in view of these advances and indicate the ties with\nsome of the more recent (or at least less classical) approaches to geometric\nrealization of unitary representations.\n", "title": "Representations on Partially Holomorphic Cohomology Spaces, Revisited" }

id: 18644 | status: Validated | annotation: [ "Physics" ] | multi_label: true | all other fields: null
{ "abstract": " By performing X-rays measurements in the \"cosmic silence\" of the underground\nlaboratory of Gran Sasso, LNGS-INFN, we test a basic principle of quantum\nmechanics: the Pauli Exclusion Principle (PEP), for electrons. We present the\nachieved results of the VIP experiment and the ongoing VIP2 measurement aiming\nto gain two orders of magnitude improvement in testing PEP. We also use a\nsimilar experimental technique to search for radiation (X and gamma) predicted\nby continuous spontaneous localization models, which aim to solve the\n\"measurement problem\".\n", "title": "Underground tests of quantum mechanics. Whispers in the cosmic silence?" }

id: 18645 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " We investigate core-collapse supernova (CCSN) nucleosynthesis with\nself-consistent, axisymmetric (2D) simulations performed using the\nradiation-hydrodynamics code Chimera. Computational costs have traditionally\nconstrained the evolution of the nuclear composition within multidimensional\nCCSN models to, at best, a 14-species $\\alpha$-network capable of tracking only\n$(\\alpha,\\gamma)$ reactions from $^{4}$He to $^{60}$Zn. Such a simplified\nnetwork limits the ability to accurately evolve detailed composition and\nneutronization or calculate the nuclear energy generation rate. Lagrangian\ntracer particles are commonly used to extend the nuclear network evolution by\nincorporating more realistic networks in post-processing nucleosynthesis\ncalculations. However, limitations such as poor spatial resolution of the\ntracer particles, inconsistent thermodynamic evolution, including misestimation\nof expansion timescales, and uncertain determination of the multidimensional\nmass-cut at the end of the simulation impose uncertainties inherent to this\napproach. We present a detailed analysis of the impact of such uncertainties\nfor four self-consistent axisymmetric CCSN models initiated from stellar\nmetallicity, non-rotating progenitors of 12 $M_\\odot$, 15 $M_\\odot$, 20\n$M_\\odot$, and 25 $M_\\odot$ and evolved with the smaller $\\alpha$-network to\nmore than 1 s after the launch of an explosion.\n", "title": "Implications for Post-Processing Nucleosynthesis of Core-Collapse Supernova Models with Lagrangian Particles" }

id: 18646 | status: Default | annotation: null | multi_label: true | all other fields: null
{ "abstract": " Many technologies have been developed to help improve spatial resolution of\nobservational images for ground-based solar telescopes, such as adaptive optics\n(AO) systems and post-processing reconstruction. As any AO system correction is\nonly partial, it is indispensable to use post-processing reconstruction\ntechniques. In the New Vacuum Solar Telescope (NVST), speckle masking method is\nused to achieve the diffraction limited resolution of the telescope. Although\nthe method is very promising, the computation is quite intensive, and the\namount of data is tremendous, requiring several months to reconstruct\nobservational data of one day on a high-end computer. To accelerate image\nreconstruction, we parallelize the program package on a high performance\ncluster. We describe parallel implementation details for several reconstruction\nprocedures. The code is written in C language using Message Passing Interface\n(MPI) and optimized for parallel processing in a multi-processor environment.\nWe show the excellent performance of parallel implementation, and the whole\ndata processing speed is about 71 times faster than before. Finally, we analyze\nthe scalability of the code to find possible bottlenecks, and propose several\nways to further improve the parallel performance. We conclude that the\npresented program is capable of executing in real-time reconstruction\napplications at NVST.\n", "title": "High Performance Parallel Image Reconstruction for New Vacuum Solar Telescope" }
null
null
null
null
true
null
18646
null
Default
null
null
null
{ "abstract": " An interactive session of video-on-demand (VOD) streaming procedure deserves\nsmooth data transportation for the viewer, irrespective of their geographic\nlocation. To access the required video, bandwidth management during the video\nobjects transportation at any interactive session is a mandatory prerequisite.\nIt has been observed in the domain likes movie on demand, electronic\nencyclopedia, interactive games, and educational resources. The required data\nis imported from the distributed storage servers through the high speed\nbackbone network. This paper presents the viewer driven session based\nmulti-user model with respect to the overlay mesh network. In virtue of\nreality, the direct implication of this work elaborately shows the required\nbandwidth is a causal part in the video on demand system. The analytic model of\nsession based single viewer bandwidth requirement model presents the bandwidth\nrequirement for any interactive session like, pause, move slow, rewind, skip\nsome number of frames, or move fast with some constant number of frames. This\nwork presents the bandwidth requirement model for any interactive session that\nbrings the trade-off in data-transportation and storage costs for different\nsystem resources and also for the various system configurations.\n", "title": "Approximation of Bandwidth for the Interactive Operation in Video on Demand System" }
null
null
null
null
true
null
18647
null
Default
null
null
null
{ "abstract": " In this paper we show how polynomial walks can be used to establish a twisted\nrecurrence for sets of positive density in $\\mathbb{Z}^d$. In particular, we\nprove that if $\\Gamma \\leq \\operatorname{GL}_d(\\mathbb{Z})$ is finitely\ngenerated by unipotents and acts irreducibly on $\\mathbb{R}^d$, then for any\nset $B \\subset \\mathbb{Z}^d$ of positive density, there exists $k \\geq 1$ such\nthat for any $v \\in k \\mathbb{Z}^d$ one can find $\\gamma \\in \\Gamma$ with\n$\\gamma v \\in B - B$. Our method does not require the linearity of the action,\nand we prove a twisted recurrence for semigroups of maps from $\\mathbb{Z}^d$ to\n$\\mathbb{Z}^d$ satisfying some irreducibility and polynomial assumptions. As\none of the consequences, we prove a non-linear analog of Bogolubov's theorem --\nfor any set $B \\subset \\mathbb{Z}^2$ of positive density, and $p(n) \\in\n\\mathbb{Z}[n]$, with $p(0) = 0$ and $\\operatorname{deg}(p) \\geq 2$, there\nexists $k \\geq 1$ such that $k \\mathbb{Z} \\subset \\{ x - p(y) \\, | \\, (x,y) \\in\nB-B \\}$. Unlike the previous works on twisted recurrence that used recent\nresults of Benoist-Quint and Bourgain-Furman-Lindenstrauss-Mozes on\nequidistribution of random walks on automorphism groups of tori, our method\nrelies on the classical Weyl equidistribution for polynomial orbits on tori.\n", "title": "Twisted Recurrence via Polynomial Walks" }
null
null
null
null
true
null
18648
null
Default
null
null
null
{ "abstract": " We prove the global existence of the unique mild solution for the Cauchy\nproblem of the cut-off Boltzmann equation for soft potential model $\\gamma=2-N$\nwith initial data small in $L^N_{x,v}$ where $N=2,3$ is the dimension. The\nproof relies on the existing inhomogeneous Strichartz estimates for the kinetic\nequation by Ovcharov and convolution-like estimates for the gain term of the\nBoltzmann collision operator by Alonso, Carneiro and Gamba. The global dynamics\nof the solution is also characterized by showing that the small global solution\nscatters with respect to the kinetic transport operator in $L^N_{x,v}$. Also\nthe connection between function spaces and cut-off soft potential model\n$-N<\\gamma<2-N$ is characterized in the local well-posedness result for the\nCauchy problem with large initial data.\n", "title": "Well-posedness and scattering for the Boltzmann equations: Soft potential with cut-off" }
null
null
null
null
true
null
18649
null
Default
null
null
null
{ "abstract": " This study concerned the active use of Wikipedia as a teaching tool in the\nclassroom in higher education, trying to identify different usage profiles and\ntheir characterization. A questionnaire survey was administrated to all\nfull-time and part-time teachers at the Universitat Oberta de Catalunya and the\nUniversitat Pompeu Fabra, both in Barcelona, Spain. The questionnaire was\ndesigned using the Technology Acceptance Model as a reference, including items\nabout teachers web 2.0 profile, Wikipedia usage, expertise, perceived\nusefulness, easiness of use, visibility and quality, as well as Wikipedia\nstatus among colleagues and incentives to use it more actively. Clustering and\nstatistical analysis were carried out using the k-medoids algorithm and\ndifferences between clusters were assessed by means of contingency tables and\ngeneralized linear models (logit). The respondents were classified in four\nclusters, from less to more likely to adopt and use Wikipedia in the classroom,\nnamely averse (25.4%), reluctant (17.9%), open (29.5%) and proactive (27.2%).\nProactive faculty are mostly men teaching part-time in STEM fields, mainly\nengineering, while averse faculty are mostly women teaching full-time in\nnon-STEM fields. Nevertheless, questionnaire items related to visibility,\nquality, image, usefulness and expertise determine the main differences between\nclusters, rather than age, gender or domain. Clusters involving a positive view\nof Wikipedia and at least some frequency of use clearly outnumber those with a\nstrictly negative stance. This goes against the common view that faculty\nmembers are mostly sceptical about Wikipedia. Environmental factors such as\nacademic culture and colleagues opinion are more important than faculty\npersonal characteristics, especially with respect to what they think about\nWikipedia quality.\n", "title": "Wikipedia in academia as a teaching tool: from averse to proactive faculty profiles" }
null
null
null
null
true
null
18650
null
Default
null
null
null
{ "abstract": " Deep learning (DL), a new-generation of artificial neural network research,\nhas transformed industries, daily lives and various scientific disciplines in\nrecent years. DL represents significant progress in the ability of neural\nnetworks to automatically engineer problem-relevant features and capture highly\ncomplex data distributions. I argue that DL can help address several major new\nand old challenges facing research in water sciences such as\ninter-disciplinarity, data discoverability, hydrologic scaling, equifinality,\nand needs for parameter regionalization. This review paper is intended to\nprovide water resources scientists and hydrologists in particular with a simple\ntechnical overview, trans-disciplinary progress update, and a source of\ninspiration about the relevance of DL to water. The review reveals that various\nphysical and geoscientific disciplines have utilized DL to address data\nchallenges, improve efficiency, and gain scientific insights. DL is especially\nsuited for information extraction from image-like data and sequential data.\nTechniques and experiences presented in other disciplines are of high relevance\nto water research. Meanwhile, less noticed is that DL may also serve as a\nscientific exploratory tool. A new area termed 'AI neuroscience,' where\nscientists interpret the decision process of deep networks and derive insights,\nhas been born. This budding sub-discipline has demonstrated methods including\ncorrelation-based analysis, inversion of network-extracted features,\nreduced-order approximations by interpretable models, and attribution of\nnetwork decisions to inputs. Moreover, DL can also use data to condition\nneurons that mimic problem-specific fundamental organizing units, thus\nrevealing emergent behaviors of these units. Vast opportunities exist for DL to\npropel advances in water sciences.\n", "title": "A trans-disciplinary review of deep learning research for water resources scientists" }
null
null
null
null
true
null
18651
null
Default
null
null
null
{ "abstract": " The ccns3Sim project is an open source implementation of the CCNx 1.0\nprotocols for the NS3 simulator. We describe the implementation and several\nimportant features including modularity and process delay simulation. The\nccns3Sim implementation is a fresh NS3-specific implementation. Like NS3\nitself, it uses C++98 standard, NS3 code style, NS3 smart pointers, NS3 xUnit,\nand integrates with the NS3 documentation and manual. A user or developer does\nnot need to learn two systems. If one knows NS3, one should be able to get\nstarted with the CCNx code right away. A developer can easily use their own\nimplementation of the layer 3 protocol, layer 4 protocol, forwarder, routing\nprotocol, Pending Interest Table (PIT) or Forwarding Information Base (FIB) or\nContent Store (CS). A user may configure or specify a new implementation for\nany of these features at runtime in the simulation script. In this paper, we\ndescribe the software architecture and give examples of using the simulator. We\nevaluate the implementation with several example experiments on ICN caching.\n", "title": "A new NS3 Implementation of CCNx 1.0 Protocol" }
null
null
null
null
true
null
18652
null
Default
null
null
null
{ "abstract": " In this paper, a new approach is proposed for automated software maintenance.\nThe tool is able to perform 26 different refactorings. It also contains a large\nselection of metrics to measure the impact of the refactorings on the software\nand six different search based optimization algorithms to improve the software.\nThis tool contains both mono-objective and multi-objective search techniques\nfor software improvement and is fully automated. The paper describes the\nvarious capabilities of the tool, the unique aspects of it, and also presents\nsome research results from experimentation. The individual metrics are tested\nacross five different codebases to deduce the most effective metrics for\ngeneral quality improvement. It is found that the metrics that relate to more\nspecific elements of the code are more useful for driving change in the search.\nThe mono-objective genetic algorithm is also tested against the multi-objective\nalgorithm to see how comparable the results gained are with three separate\nobjectives. When comparing the best solutions of each individual objective the\nmulti-objective approach generates suitable improvements in quality in less\ntime, allowing for rapid maintenance cycles.\n", "title": "MultiRefactor: Automated Refactoring To Improve Software Quality" }
null
null
[ "Computer Science" ]
null
true
null
18653
null
Validated
null
null
null
{ "abstract": " Every time a person encounters an object with a given degree of familiarity,\nhe/she immediately knows how to grasp it. Adaptation of the movement of the\nhand according to the object geometry happens effortlessly because of the\naccumulated knowledge of previous experiences grasping similar objects. In this\npaper, we present a novel method for inferring grasp configurations based on\nthe object shape. Grasping knowledge is gathered in a synergy space of the\nrobotic hand built by following a human grasping taxonomy. The synergy space is\nconstructed through human demonstrations employing a exoskeleton that provides\nforce feedback, which provides the advantage of evaluating the quality of the\ngrasp. The shape descriptor is obtained by means of a categorical non-rigid\nregistration that encodes typical intra-class variations. This approach is\nespecially suitable for on-line scenarios where only a portion of the object's\nsurface is observable. This method is demonstrated through simulation and real\nrobot experiments by grasping objects never seen before by the robot.\n", "title": "Learning Postural Synergies for Categorical Grasping through Shape Space Registration" }
null
null
null
null
true
null
18654
null
Default
null
null
null
{ "abstract": " In the paper, we prove an analogue of the Kato-Rosenblum theorem in a\nsemifinite von Neumann algebra. Let $\\mathcal{M}$ be a countably decomposable,\nproperly infinite, semifinite von Neumann algebra acting on a Hilbert space\n$\\mathcal{H}$ and let $\\tau$ be a faithful normal semifinite tracial weight of\n$\\mathcal M$. Suppose that $H$ and $H_1$ are self-adjoint operators affiliated\nwith $\\mathcal{M}$. We show that if $H-H_1$ is in $\\mathcal{M}\\cap\nL^{1}\\left(\\mathcal{M},\\tau\\right)$, then the ${norm}$ absolutely continuous\nparts of $H$ and $H_1$ are unitarily equivalent. This implies that the real\npart of a non-normal hyponormal operator in $\\mathcal M$ is not a perturbation\nby $\\mathcal{M}\\cap L^{1}\\left(\\mathcal{M},\\tau\\right)$ of a diagonal operator.\nMeanwhile, for $n\\ge 2$ and $1\\leq p<n$, by modifying Voiculescu's invariant we\ngive examples of commuting $n$-tuples of self-adjoint operators in\n$\\mathcal{M}$ that are not arbitrarily small perturbations of commuting\ndiagonal operators modulo $\\mathcal{M}\\cap L^{p}\\left(\\mathcal{M},\\tau\\right)$.\n", "title": "Perturbations of self-adjoint operators in semifinite von Neumann algebras: Kato-Rosenblum theorem" }
null
null
null
null
true
null
18655
null
Default
null
null
null
{ "abstract": " A peculiar infrared ring-like structure was discovered by {\\em Spitzer}\naround the strongly magnetised neutron star SGR 1900$+$14. This infrared\nstructure was suggested to be due to a dust-free cavity, produced by the SGR\nGiant Flare occurred in 1998, and kept illuminated by surrounding stars. Using\na 3D dust radiative transfer code, we aimed at reproducing the emission\nmorphology and the integrated emission flux of this structure assuming\ndifferent spatial distributions and densities for the dust, and different\npositions for the illuminating stars. We found that a dust-free ellipsoidal\ncavity can reproduce the shape, flux, and spectrum of the ring-like infrared\nemission, provided that the illuminating stars are inside the cavity and that\nthe interstellar medium has high gas density ($n_H\\sim$1000 cm$^{-3}$). We\nfurther constrain the emitting region to have a sharp inner boundary and to be\nsignificantly extended in the radial direction, possibly even just a cavity in\na smooth molecular cloud. We discuss possible scenarios for the formation of\nthe dustless cavity and the particular geometry that allows it to be IR-bright.\n", "title": "Dust radiative transfer modelling of the infrared ring around the magnetar SGR 1900$+$14" }
null
null
null
null
true
null
18656
null
Default
null
null
null
{ "abstract": " The question of suitability of transfer matrix description of electrons\ntraversing grating-type dielectric laser acceleration (DLA) structures is\naddressed. It is shown that although matrix considerations lead to interesting\ninsights, the basic transfer properties of DLA cells cannot be described by a\nmatrix. A more general notion of a transfer function is shown to be a simple\nand useful tool for formulating problems of particle dynamics in DLA. As an\nexample, a focusing structure is proposed which works simultaneously for all\nelectron phases.\n", "title": "Application of transfer matrix and transfer function analysis to grating-type dielectric laser accelerators: ponderomotive focusing of electrons" }
null
null
null
null
true
null
18657
null
Default
null
null
null
{ "abstract": " This paper studies Bayesian ranking and selection (R&S) problems with\ncorrelated prior beliefs and continuous domains, i.e. Bayesian optimization\n(BO). Knowledge gradient methods [Frazier et al., 2008, 2009] have been widely\nstudied for discrete R&S problems, which sample the one-step Bayes-optimal\npoint. When used over continuous domains, previous work on the knowledge\ngradient [Scott et al., 2011, Wu and Frazier, 2016, Wu et al., 2017] often rely\non a discretized finite approximation. However, the discretization introduces\nerror and scales poorly as the dimension of domain grows. In this paper, we\ndevelop a fast discretization-free knowledge gradient method for Bayesian\noptimization. Our method is not restricted to the fully sequential setting, but\nuseful in all settings where knowledge gradient can be used over continuous\ndomains. We show how our method can be generalized to handle (i) batch of\npoints suggestion (parallel knowledge gradient); (ii) the setting where\nderivative information is available in the optimization process\n(derivative-enabled knowledge gradient). In numerical experiments, we\ndemonstrate that the discretization-free knowledge gradient method finds global\noptima significantly faster than previous Bayesian optimization algorithms on\nboth synthetic test functions and real-world applications, especially when\nfunction evaluations are noisy; and derivative-enabled knowledge gradient can\nfurther improve the performances, even outperforming the gradient-based\noptimizer such as BFGS when derivative information is available.\n", "title": "Discretization-free Knowledge Gradient Methods for Bayesian Optimization" }
null
null
null
null
true
null
18658
null
Default
null
null
null
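The knowledge-gradient acquisition described in the record above (id 18658) can be illustrated with a small Monte Carlo estimate over a Gaussian-process posterior. The sketch below is a minimal illustration, not the paper's algorithm: it uses a plain NumPy RBF-kernel GP and evaluates the post-observation maximum over a coarse grid, which is precisely the discretization the paper's method avoids; the kernel length-scale, noise level, toy objective, and grid are all illustrative assumptions.

```python
# Minimal Monte Carlo sketch of the knowledge-gradient (KG) acquisition for
# Bayesian optimization.  The GP, kernel settings, and the coarse evaluation
# grid are illustrative assumptions (the paper's method is discretization-free).
import numpy as np

def rbf(a, b, ls=0.2, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """Posterior mean/cov of a zero-mean GP at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kss = rbf(Xs, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    cov = Kss - Ks.T @ sol
    return mu, cov

def knowledge_gradient(x_cand, X, y, grid, noise=1e-4, n_mc=2000, rng=None):
    """MC estimate of E[max_z mu_{n+1}(z)] - max_z mu_n(z) if x_cand is sampled."""
    rng = np.random.default_rng(rng)
    mu_n, _ = gp_posterior(X, y, grid, noise)
    best_now = mu_n.max()
    # Predictive distribution of the fantasized observation at x_cand.
    m_c, c_c = gp_posterior(X, y, np.array([x_cand]), noise)
    sd_c = np.sqrt(max(c_c[0, 0] + noise, 1e-12))
    gains = np.empty(n_mc)
    for i in range(n_mc):
        y_f = m_c[0] + sd_c * rng.standard_normal()       # fantasized outcome
        mu_next, _ = gp_posterior(np.append(X, x_cand), np.append(y, y_f), grid, noise)
        gains[i] = mu_next.max()
    return gains.mean() - best_now

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f = lambda x: np.sin(6 * x)                 # toy objective
    X = rng.uniform(0, 1, 5)
    y = f(X) + 0.01 * rng.standard_normal(5)
    grid = np.linspace(0, 1, 101)
    cands = np.linspace(0, 1, 21)
    kg = [knowledge_gradient(x, X, y, grid, rng=1) for x in cands]
    print("next point by KG:", cands[int(np.argmax(kg))])
```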
{ "abstract": " In this paper, we propose a perturbation framework to measure the robustness\nof graph properties. Although there are already perturbation methods proposed\nto tackle this problem, they are limited by the fact that the strength of the\nperturbation cannot be well controlled. We firstly provide a perturbation\nframework on graphs by introducing weights on the nodes, of which the magnitude\nof perturbation can be easily controlled through the variance of the weights.\nMeanwhile, the topology of the graphs are also preserved to avoid\nuncontrollable strength in the perturbation. We then extend the measure of\nrobustness in the robust statistics literature to the graph properties.\n", "title": "Measuring the Robustness of Graph Properties" }
null
null
null
null
true
null
18659
null
Default
null
null
null
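A minimal sketch of the node-weight perturbation idea in the record above (id 18659), assuming a specific reweighting rule: i.i.d. Gaussian weights near 1 are placed on the nodes, each edge is rescaled by the product of its endpoints' weights (so the topology is untouched), and the spread of a chosen graph property is recorded as the weight variance grows. The reweighting rule and the example property (spectral radius of the weighted adjacency matrix) are assumptions made for illustration, not details taken from the paper.

```python
# Sketch of a node-weight perturbation scheme for probing the robustness of a
# graph property.  The reweighting rule (edge weight = product of the incident
# node weights) and the example property are illustrative assumptions.
import numpy as np

def perturbed_property(A, sigma, n_trials=200, rng=None):
    """Samples of a graph property under node-weight perturbations.

    A     : (n, n) symmetric 0/1 adjacency matrix (topology is never altered)
    sigma : standard deviation of the node weights around 1
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    vals = np.empty(n_trials)
    for t in range(n_trials):
        w = 1.0 + sigma * rng.standard_normal(n)     # node weights
        Aw = A * np.outer(w, w)                      # reweighted edges, same topology
        vals[t] = np.max(np.linalg.eigvalsh(Aw))     # example property: spectral radius
    return vals

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 30, 0.15
    A = (rng.uniform(size=(n, n)) < p).astype(float)
    A = np.triu(A, 1); A = A + A.T                   # simple undirected graph
    for sigma in (0.05, 0.2, 0.5):
        samples = perturbed_property(A, sigma, rng=1)
        print(f"sigma={sigma}: mean={samples.mean():.3f}  std={samples.std():.3f}")
```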
{ "abstract": " This paper claims that a new field of empirical software engineering research\nand practice is emerging: data mining using/used-by optimizers for empirical\nstudies, or DUO. For example, data miners can generate the models that are\nexplored by optimizers.Also, optimizers can advise how to best adjust the\ncontrol parameters of a data miner. This combined approach acts like an agent\nleaning over the shoulder of an analyst that advises \"ask this question next\"\nor \"ignore that problem, it is not relevant to your goals\". Further, those\nagents can help us build \"better\" predictive models, where \"better\" can be\neither greater predictive accuracy, or faster modeling time (which, in turn,\nenables the exploration of a wider range of options). We also caution that the\nera of papers that just use data miners is coming to an end. Results obtained\nfrom an unoptimized data miner can be quickly refuted, just by applying an\noptimizer to produce a different (and better performing) model. Our conclusion,\nhence, is that for software analytics it is possible, useful and necessary to\ncombine data mining and optimization using DUO.\n", "title": "Better Software Analytics via \"DUO\": Data Mining Algorithms Using/Used-by Optimizers" }
null
null
null
null
true
null
18660
null
Default
null
null
null
{ "abstract": " We propose an automatic diabetic retinopathy (DR) analysis algorithm based on\ntwo-stages deep convolutional neural networks (DCNN). Compared to existing\nDCNN-based DR detection methods, the proposed algorithm have the following\nadvantages: (1) Our method can point out the location and type of lesions in\nthe fundus images, as well as giving the severity grades of DR. Moreover, since\nretina lesions and DR severity appear with different scales in fundus images,\nthe integration of both local and global networks learn more complete and\nspecific features for DR analysis. (2) By introducing imbalanced weighting map,\nmore attentions will be given to lesion patches for DR grading, which\nsignificantly improve the performance of the proposed algorithm. In this study,\nwe label 12,206 lesion patches and re-annotate the DR grades of 23,595 fundus\nimages from Kaggle competition dataset. Under the guidance of clinical\nophthalmologists, the experimental results show that our local lesion detection\nnet achieve comparable performance with trained human observers, and the\nproposed imbalanced weighted scheme also be proved to significantly improve the\ncapability of our DCNN-based DR grading algorithm.\n", "title": "Lesion detection and Grading of Diabetic Retinopathy via Two-stages Deep Convolutional Neural Networks" }
null
null
null
null
true
null
18661
null
Default
null
null
null
{ "abstract": " We present the DRYVR framework for verifying hybrid control systems that are\ndescribed by a combination of a black-box simulator for trajectories and a\nwhite-box transition graph specifying mode switches. The framework includes (a)\na probabilistic algorithm for learning sensitivity of the continuous\ntrajectories from simulation data, (b) a bounded reachability analysis\nalgorithm that uses the learned sensitivity, and (c) reasoning techniques based\non simulation relations and sequential composition, that enable verification of\ncomplex systems under long switching sequences, from the reachability analysis\nof a simpler system under shorter sequences. We demonstrate the utility of the\nframework by verifying a suite of automotive benchmarks that include powertrain\ncontrol, automatic transmission, and several autonomous and ADAS features like\nautomatic emergency braking, lane-merge, and auto-passing controllers.\n", "title": "DRYVR:Data-driven verification and compositional reasoning for automotive systems" }
null
null
[ "Computer Science" ]
null
true
null
18662
null
Validated
null
null
null
{ "abstract": " Novice programmers often struggle with the formal syntax of programming\nlanguages. To assist them, we design a novel programming language correction\nframework amenable to reinforcement learning. The framework allows an agent to\nmimic human actions for text navigation and editing. We demonstrate that the\nagent can be trained through self-exploration directly from the raw input, that\nis, program text itself, without any knowledge of the formal syntax of the\nprogramming language. We leverage expert demonstrations for one tenth of the\ntraining data to accelerate training. The proposed technique is evaluated on\n6975 erroneous C programs with typographic errors, written by students during\nan introductory programming course. Our technique fixes 14% more programs and\n29% more compiler error messages relative to those fixed by a state-of-the-art\ntool, DeepFix, which uses a fully supervised neural machine translation\napproach.\n", "title": "Deep Reinforcement Learning for Programming Language Correction" }
null
null
null
null
true
null
18663
null
Default
null
null
null
{ "abstract": " In this article, we will construct the additional perturbative quantum torus\nsymmetry of the dispersionless BKP hierarchy basing on the $W_{\\infty}$\ninfinite dimensional Lie symmetry. These results show that the complete quantum\ntorus symmetry is broken from the BKP hierarchy to its dispersionless\nhierarchy. Further a series of additional flows of the multicomponent BKP\nhierarchy will be defined and these flows constitute an $N$-folds direct\nproduct of the positive half of the quantum torus symmetries.\n", "title": "Dispersionless and multicomponent BKP hierarchies with quantum torus symmetries" }
null
null
null
null
true
null
18664
null
Default
null
null
null
{ "abstract": " Partial differential equations are central to describing many physical\nphenomena. In many applications these phenomena are observed through a sensor\nnetwork, with the aim of inferring their underlying properties. Leveraging from\ncertain results in sampling and approximation theory, we present a new\nframework for solving a class of inverse source problems for physical fields\ngoverned by linear partial differential equations. Specifically, we demonstrate\nthat the unknown field sources can be recovered from a sequence of, so called,\ngeneralised measurements by using multidimensional frequency estimation\ntechniques. Next we show that---for physics-driven fields---this sequence of\ngeneralised measurements can be estimated by computing a linear weighted-sum of\nthe sensor measurements; whereby the exact weights (of the sums) correspond to\nthose that reproduce multidimensional exponentials, when used to linearly\ncombine translates of a particular prototype function related to the Green's\nfunction of our underlying field. Explicit formulae are then derived for the\nsequence of weights, that map sensor samples to the exact sequence of\ngeneralised measurements when the Green's function satisfies the generalised\nStrang-Fix condition. Otherwise, the same mapping yields a close approximation\nof the generalised measurements. Based on this new framework we develop\npractical, noise robust, sensor network strategies for solving the inverse\nsource problem, and then present numerical simulation results to verify their\nperformance.\n", "title": "A Sampling Framework for Solving Physics-driven Inverse Source Problems" }
null
null
null
null
true
null
18665
null
Default
null
null
null
{ "abstract": " Manifold calculus is a form of functor calculus that analyzes contravariant\nfunctors from some categories of manifolds to topological spaces by providing\nanalytic approximations to them. In this paper we apply the theory of\nh-principle to construct several examples of analytic functors in this sense.\nWe prove that the analytic approximation of the Lagrangian embeddings functor\n$\\mathrm{emb}_{\\mathrm{Lag}}(-,N)$ is the totally real embeddings functor\n$\\mathrm{emb}_{\\mathrm{TR}}(-,N)$. Under certain conditions we provide a\ngeometric construction for the homotopy fiber of $ \\mathrm{emb}(M,N)\n\\rightarrow \\mathrm{imm}(M,N)$. This construction also provides an example of a\nfunctor which is itself empty when evaluated on most manifolds but it's\nanalytic approximation is almost always non-empty.\n", "title": "An Application of $h$-principle to Manifold Calculus" }
null
null
null
null
true
null
18666
null
Default
null
null
null
{ "abstract": " In this paper, a quick and efficient method is presented for grasping unknown\nobjects in clutter. The grasping method relies on real-time superquadric (SQ)\nrepresentation of partial view objects and incomplete object modelling, well\nsuited for unknown symmetric objects in cluttered scenarios which is followed\nby optimized antipodal grasping. The incomplete object models are processed\nthrough a mirroring algorithm that assumes symmetry to first create an\napproximate complete model and then fit for SQ representation. The grasping\nalgorithm is designed for maximum force balance and stability, taking advantage\nof the quick retrieval of dimension and surface curvature information from the\nSQ parameters. The pose of the SQs with respect to the direction of gravity is\ncalculated and used together with the parameters of the SQs and specification\nof the gripper, to select the best direction of approach and contact points.\nThe SQ fitting method has been tested on custom datasets containing objects in\nisolation as well as in clutter. The grasping algorithm is evaluated on a PR2\nand real time results are presented. Initial results indicate that though the\nmethod is based on simplistic shape information, it outperforms other learning\nbased grasping algorithms that also work in clutter in terms of time-efficiency\nand accuracy.\n", "title": "Grasping Unknown Objects in Clutter by Superquadric Representation" }
null
null
null
null
true
null
18667
null
Default
null
null
null
{ "abstract": " Infants are experts at playing, with an amazing ability to generate novel\nstructured behaviors in unstructured environments that lack clear extrinsic\nreward signals. We seek to mathematically formalize these abilities using a\nneural network that implements curiosity-driven intrinsic motivation. Using a\nsimple but ecologically naturalistic simulated environment in which an agent\ncan move and interact with objects it sees, we propose a \"world-model\" network\nthat learns to predict the dynamic consequences of the agent's actions.\nSimultaneously, we train a separate explicit \"self-model\" that allows the agent\nto track the error map of its own world-model, and then uses the self-model to\nadversarially challenge the developing world-model. We demonstrate that this\npolicy causes the agent to explore novel and informative interactions with its\nenvironment, leading to the generation of a spectrum of complex behaviors,\nincluding ego-motion prediction, object attention, and object gathering.\nMoreover, the world-model that the agent learns supports improved performance\non object dynamics prediction, detection, localization and recognition tasks.\nTaken together, our results are initial steps toward creating flexible\nautonomous agents that self-supervise in complex novel physical environments.\n", "title": "Learning to Play with Intrinsically-Motivated Self-Aware Agents" }
null
null
[ "Statistics" ]
null
true
null
18668
null
Validated
null
null
null
{ "abstract": " A computational method, based on $\\ell_1$-minimization, is proposed for the\nproblem of link flow correction, when the available traffic flow data on many\nlinks in a road network are inconsistent with respect to the flow conservation\nlaw. Without extra information, the problem is generally ill-posed when a large\nportion of the link sensors are unhealthy. It is possible, however, to correct\nthe corrupted link flows \\textit{accurately} with the proposed method under a\nrecoverability condition if there are only a few bad sensors which are located\nat certain links. We analytically identify the links that are robust to\nmiscounts and relate them to the geometric structure of the traffic network by\nintroducing the recoverability concept and an algorithm for computing it. The\nrecoverability condition for corrupted links is simply the associated\nrecoverability being greater than 1. In a more realistic setting, besides the\nunhealthy link sensors, small measurement noises may be present at the other\nsensors. Under the same recoverability condition, our method guarantees to give\nan estimated traffic flow fairly close to the ground-truth data and leads to a\nbound for the correction error. Both synthetic and real-world examples are\nprovided to demonstrate the effectiveness of the proposed method.\n", "title": "$\\ell_1$-minimization method for link flow correction" }
null
null
null
null
true
null
18669
null
Default
null
null
null
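The ℓ1 correction described in the record above (id 18669) can be written as a small linear program: minimize the ℓ1 distance to the observed link counts subject to flow conservation. The sketch below is an illustrative reformulation using scipy.optimize.linprog on a toy one-node network; the incidence matrix, the nonnegativity bounds, and the example flows are assumptions, and the paper's recoverability analysis is not reproduced.

```python
# Sketch of l1-based link-flow correction: find corrected flows x that satisfy
# flow conservation N @ x = 0 while staying as close as possible (in l1 norm)
# to the observed, possibly corrupted, link counts.
import numpy as np
from scipy.optimize import linprog

def correct_link_flows(N, f_obs):
    """min_x ||x - f_obs||_1  s.t.  N @ x = 0,  x >= 0  (standard LP split)."""
    m, L = N.shape                     # m conservation constraints, L links
    # Variables: [x (L), t (L)] with  x - f_obs <= t  and  f_obs - x <= t.
    c = np.concatenate([np.zeros(L), np.ones(L)])
    A_ub = np.block([[ np.eye(L), -np.eye(L)],
                     [-np.eye(L), -np.eye(L)]])
    b_ub = np.concatenate([f_obs, -f_obs])
    A_eq = np.hstack([N, np.zeros((m, L))])
    b_eq = np.zeros(m)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (2 * L), method="highs")
    return res.x[:L]

if __name__ == "__main__":
    # 4 links around a single interior node: inflow on links 0,1; outflow on 2,3.
    N = np.array([[1.0, 1.0, -1.0, -1.0]])        # conservation at the node
    f_true = np.array([30.0, 20.0, 25.0, 25.0])
    f_obs = f_true.copy(); f_obs[1] = 80.0        # one corrupted sensor
    print("corrected:", np.round(correct_link_flows(N, f_obs), 2))
```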
{ "abstract": " Current flow closeness centrality (CFCC) has a better discriminating ability\nthan the ordinary closeness centrality based on shortest paths. In this paper,\nwe extend this notion to a group of vertices in a weighted graph, and then\nstudy the problem of finding a subset $S$ of $k$ vertices to maximize its CFCC\n$C(S)$, both theoretically and experimentally. We show that the problem is\nNP-hard, but propose two greedy algorithms for minimizing the reciprocal of\n$C(S)$ with provable guarantees using the monotoncity and supermodularity. The\nfirst is a deterministic algorithm with an approximation factor\n$(1-\\frac{k}{k-1}\\cdot\\frac{1}{e})$ and cubic running time; while the second is\na randomized algorithm with a\n$(1-\\frac{k}{k-1}\\cdot\\frac{1}{e}-\\epsilon)$-approximation and nearly-linear\nrunning time for any $\\epsilon > 0$. Extensive experiments on model and real\nnetworks demonstrate that our algorithms are effective and efficient, with the\nsecond algorithm being scalable to massive networks with more than a million\nvertices.\n", "title": "Current Flow Group Closeness Centrality for Complex Networks" }
null
null
null
null
true
null
18670
null
Default
null
null
null
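A brute-force sketch of the greedy selection described in the record above (id 18670). It evaluates the group quantity via a grounded Laplacian, taking C(S) = n / trace(L_{-S}^{-1}) as an assumed form of the current-flow group closeness, and adds the vertex with the largest marginal gain at each step; the dense matrix inverse is purely illustrative, whereas the paper's algorithms rely on fast approximate solvers.

```python
# Greedy sketch for choosing a group S of k vertices with large current-flow
# group closeness, using C(S) = n / trace(L_{-S}^{-1}) with L_{-S} the graph
# Laplacian restricted to V \ S (an assumed, illustrative formulation).
import numpy as np

def group_cfcc(L, S):
    """Current-flow closeness of vertex set S under the grounded-Laplacian form."""
    n = L.shape[0]
    rest = [v for v in range(n) if v not in S]
    R = np.linalg.inv(L[np.ix_(rest, rest)])      # effective resistances to S
    return n / np.trace(R)

def greedy_group(L, k):
    """Pick k vertices one at a time, each maximizing the marginal gain in C(S)."""
    n = L.shape[0]
    S, best_val = [], -np.inf
    for _ in range(k):
        best_v, best_val = None, -np.inf
        for v in range(n):
            if v in S:
                continue
            val = group_cfcc(L, S + [v])
            if val > best_val:
                best_v, best_val = v, val
        S.append(best_v)
    return S, best_val

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 25
    A = (rng.uniform(size=(n, n)) < 0.15).astype(float)
    A = np.triu(A, 1); A = A + A.T
    for i in range(n):                            # add a ring to keep the graph connected
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A                # graph Laplacian
    S, val = greedy_group(L, k=3)
    print("greedy group:", sorted(S), " C(S) =", round(val, 3))
```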
{ "abstract": " The work in the paper presents an animation extension ($CHR^{vis}$) to\nConstraint Handling Rules (CHR). Visualizations have always helped programmers\nunderstand data and debug programs. A picture is worth a thousand words. It can\nhelp identify where a problem is or show how something works. It can even\nillustrate a relation that was not clear otherwise. $CHR^{vis}$ aims at\nembedding animation and visualization features into CHR programs. It thus\nenables users, while executing programs, to have such executions animated. The\npaper aims at providing the operational semantics for $CHR^{vis}$. The\ncorrectness of $CHR^{vis}$ programs is also discussed. Some applications of the\nnew extension are also introduced.\n", "title": "Visualization of Constraint Handling Rules: Semantics and Applications" }
null
null
null
null
true
null
18671
null
Default
null
null
null
{ "abstract": " Networks capture pairwise interactions between entities and are frequently\nused in applications such as social networks, food networks, and protein\ninteraction networks, to name a few. Communities, cohesive groups of nodes,\noften form in these applications, and identifying them gives insight into the\noverall organization of the network. One common quality function used to\nidentify community structure is modularity. In Hu et al. [SIAM J. App. Math.,\n73(6), 2013], it was shown that modularity optimization is equivalent to\nminimizing a particular nonconvex total variation (TV) based functional over a\ndiscrete domain. They solve this problem, assuming the number of communities is\nknown, using a Merriman, Bence, Osher (MBO) scheme.\nWe show that modularity optimization is equivalent to minimizing a convex\nTV-based functional over a discrete domain, again, assuming the number of\ncommunities is known. Furthermore, we show that modularity has no convex\nrelaxation satisfying certain natural conditions. We therefore, find a\nmanageable non-convex approximation using a Ginzburg Landau functional, which\nprovably converges to the correct energy in the limit of a certain parameter.\nWe then derive an MBO algorithm with fewer hand-tuned parameters than in Hu et\nal. and which is 7 times faster at solving the associated diffusion equation\ndue to the fact that the underlying discretization is unconditionally stable.\nOur numerical tests include a hyperspectral video whose associated graph has\n2.9x10^7 edges, which is roughly 37 times larger than was handled in the paper\nof Hu et al.\n", "title": "Simplified Energy Landscape for Modularity Using Total Variation" }
null
null
null
null
true
null
18672
null
Default
null
null
null
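For reference alongside the record above (id 18672), the following sketch evaluates the Newman-Girvan modularity Q of a given partition, i.e. the objective whose optimization is recast there as a total-variation problem; the convex reformulation and the MBO scheme themselves are not reproduced, and the two-clique example is only illustrative.

```python
# Sketch that evaluates the modularity Q of a partition -- the quantity whose
# optimization is reformulated in TV terms in the record above.
import numpy as np

def modularity(A, labels):
    """Q = (1/2m) * sum_ij [A_ij - k_i k_j / (2m)] * 1{labels_i == labels_j}."""
    k = A.sum(axis=1)
    two_m = k.sum()
    same = (labels[:, None] == labels[None, :])
    B = A - np.outer(k, k) / two_m                # modularity matrix
    return float((B * same).sum() / two_m)

if __name__ == "__main__":
    # Two 5-node cliques joined by a single edge.
    n = 10
    A = np.zeros((n, n))
    A[:5, :5] = 1; A[5:, 5:] = 1
    np.fill_diagonal(A, 0)
    A[4, 5] = A[5, 4] = 1
    good = np.array([0] * 5 + [1] * 5)
    poor = np.arange(n) % 2
    print("Q(two cliques) =", round(modularity(A, good), 3))
    print("Q(alternating) =", round(modularity(A, poor), 3))
```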
{ "abstract": " We present $\\emph{NuSTAR}$ observations of neutron star (NS) low-mass X-ray\nbinaries: 4U 1636-53, GX 17+2, and 4U 1705-44. We observed 4U 1636-53 in the\nhard state, with an Eddington fraction, $F_{\\mathrm{Edd}}$, of 0.01; GX 17+2\nand 4U 1705-44 were in the soft state with fractions of 0.57 and 0.10,\nrespectively. Each spectrum shows evidence for a relativistically broadened Fe\nK$_{\\alpha}$ line. Through accretion disk reflection modeling, we constrain the\nradius of the inner disk in 4U 1636-53 to be $R_{in}=1.03\\pm0.03$ ISCO\n(innermost stable circular orbit) assuming a dimensionless spin parameter\n$a_{*}=cJ/GM^{2}=0.0$, and $R_{in}=1.08\\pm0.06$ ISCO for $a_{*}=0.3$ (errors\nquoted at 1 $\\sigma$). This value proves to be model independent. For\n$a_{*}=0.3$ and $M=1.4\\ M_{\\odot}$, for example, $1.08\\pm0.06$ ISCO translates\nto a physical radius of $R=10.8\\pm0.6$ km, and the neutron star would have to\nbe smaller than this radius (other outcomes are possible for allowed spin\nparameters and masses). For GX 17+2, $R_{in}=1.00-1.04$ ISCO for $a_{*}=0.0$\nand $R_{in}=1.03-1.30$ ISCO for $a_{*}=0.3$. For $a_{*}=0.3$ and $M=1.4\\\nM_{\\odot}$, $R_{in}=1.03-1.30$ ISCO translates to $R=10.3-13.0$ km. The inner\naccretion disk in 4U 1705-44 may be truncated just above the stellar surface,\nperhaps by a boundary layer or magnetosphere; reflection models give a radius\nof 1.46-1.64 ISCO for $a_{*}=0.0$ and 1.69-1.93 ISCO for $a_{*}=0.3$. We\ndiscuss the implications that our results may have on the equation of state of\nultradense, cold matter and our understanding of the innermost accretion flow\nonto neutron stars with low surface magnetic fields, and systematic errors\nrelated to the reflection models and spacetime metric around less idealized\nneutron stars.\n", "title": "A Hard Look at the Neutron Stars and Accretion Disks in 4U 1636-53, GX 17+2, and 4U 1705-44 with $\\emph{NuSTAR}$" }
null
null
null
null
true
null
18673
null
Default
null
null
null
{ "abstract": " Motivated by comparative genomics, Chen et al. [9] introduced the Maximum\nDuo-preservation String Mapping (MDSM) problem in which we are given two\nstrings $s_1$ and $s_2$ from the same alphabet and the goal is to find a\nmapping $\\pi$ between them so as to maximize the number of duos preserved. A\nduo is any two consecutive characters in a string and it is preserved in the\nmapping if its two consecutive characters in $s_1$ are mapped to same two\nconsecutive characters in $s_2$. The MDSM problem is known to be NP-hard and\nthere are approximation algorithms for this problem [3, 5, 13], but all of them\nconsider only the \"unweighted\" version of the problem in the sense that a duo\nfrom $s_1$ is preserved by mapping to any same duo in $s_2$ regardless of their\npositions in the respective strings. However, it is well-desired in comparative\ngenomics to find mappings that consider preserving duos that are \"closer\" to\neach other under some distance measure [19]. In this paper, we introduce a\ngeneralized version of the problem, called the Maximum-Weight Duo-preservation\nString Mapping (MWDSM) problem that captures both duos-preservation and\nduos-distance measures in the sense that mapping a duo from $s_1$ to each\npreserved duo in $s_2$ has a weight, indicating the \"closeness\" of the two\nduos. The objective of the MWDSM problem is to find a mapping so as to maximize\nthe total weight of preserved duos. In this paper, we give a polynomial-time\n6-approximation algorithm for this problem.\n", "title": "Approximating Weighted Duo-Preservation in Comparative Genomics" }
null
null
[ "Computer Science" ]
null
true
null
18674
null
Validated
null
null
null
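A small sketch of the objective defined in the record above (id 18674): the total weight of duos preserved by a mapping pi between two strings. The weight function used here, decaying with the positional offset between the matched duos, is an illustrative assumption; the paper leaves the weights general, and the 6-approximation algorithm itself is not reproduced.

```python
# Sketch of the MWDSM objective: total weight of preserved duos under a mapping
# pi from positions of s1 to positions of s2 over identical characters.  The
# particular weight function (decaying with positional offset) is an assumption.
def preserved_duo_weight(s1, s2, pi, weight=lambda i, j: 1.0 / (1 + abs(i - j))):
    """pi maps positions of s1 to positions of s2; requires s1[i] == s2[pi[i]]."""
    assert all(s1[i] == s2[j] for i, j in pi.items()), "not a valid mapping"
    total = 0.0
    for i in range(len(s1) - 1):
        if i in pi and i + 1 in pi and pi[i + 1] == pi[i] + 1:
            total += weight(i, pi[i])             # duo (i, i+1) is preserved
    return total

if __name__ == "__main__":
    s1, s2 = "abcab", "ababc"
    pi = {0: 2, 1: 3, 2: 4, 3: 0, 4: 1}           # one feasible mapping
    print("preserved weight:", round(preserved_duo_weight(s1, s2, pi), 3))
```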
{ "abstract": " We study the algebraic structures of the virtual singular braid monoid,\n$VSB_n$, and the virtual singular pure braid monoid, $VSP_n$. The monoid\n$VSB_n$ is the splittable extension of $VSP_n$ by the symmetric group $S_n$. We\nalso construct a representation of $VSB_n$.\n", "title": "On the virtual singular braid monoid" }
null
null
null
null
true
null
18675
null
Default
null
null
null
{ "abstract": " In games of friendship links and behaviors, I propose $k$-player Nash\nstability---a family of equilibria, indexed by a measure of robustness given by\nthe number of permitted link changes, which is (ordinally and cardinally)\nranked in a probabilistic sense. Application of the proposed framework to\nadolescents' tobacco smoking and friendship decisions suggests that: (a.)\nfriendship networks respond to increases of tobacco prices and this response\namplifies the intended policy effect on smoking, (b.) racially desegregating\nhigh-schools, via stimulating the social interactions of students with\ndifferent intrinsic propensity to smoke, decreases the overall smoking\nprevalence, (c.) adolescents are averse to sharing friends so that there is a\nrivalry for friendships, (d.) when data on individuals' friendship network is\nnot available, the importance of price centered policy tools is underestimated.\n", "title": "Discrete Games in Endogenous Networks: Equilibria and Policy" }
null
null
null
null
true
null
18676
null
Default
null
null
null
{ "abstract": " Proxies for regulatory reforms based on categorical variables are\nincreasingly used in empirical evaluation models. We surveyed 63 studies that\nrely on such indices to analyze the effects of entry liberalization,\nprivatization, unbundling, and independent regulation of the electricity,\nnatural gas, and telecommunications sectors. We highlight methodological issues\nrelated to the use of these proxies. Next, taking stock of the literature, we\nprovide practical advice for the design of the empirical strategy and discuss\nthe selection of control and instrumental variables to attenuate endogeneity\nproblems undermining identification of the effects of regulatory reforms.\n", "title": "Evaluating regulatory reform of network industries: a survey of empirical models based on categorical proxies" }
null
null
null
null
true
null
18677
null
Default
null
null
null
{ "abstract": " We apply the theory of ground states for classical, finite, Heisenberg spin\nsystems previously published to a couple of spin systems that can be considered\nas finite models $K_{12},\\,K_{15}$ and $K_{18}$ of the AF Kagome lattice. The\nmodel $K_{12}$ is isomorphic to the cuboctahedron. In particular, we find\nthree-dimensional ground states that cannot be viewed as resulting from the\nwell-known independent rotation of subsets of spin vectors. For a couple of\nground states with translational symmetry we calculate the corresponding wave\nnumbers. Finally we study the model $K_{12w}$ without boundary conditions which\nexhibits new phenomena as, e.~g., two-dimensional families of three-dimensional\nground states.\n", "title": "Theory of ground states for classical Heisenberg spin systems II" }
null
null
null
null
true
null
18678
null
Default
null
null
null
{ "abstract": " We show that relative Property (T) for the abelianization of a nilpotent\nnormal subgroup implies relative Property (T) for the subgroup itself. This and\nother results are a consequence of a theorem of independent interest, which\nstates that if $H$ is a closed subgroup of a locally compact group $G$, and $A$\nis a closed subgroup of the center of $H$, such that $A$ is normal in $G$, and\n$(G/A, H/A)$ has relative Property (T), then $(G, H^{(1)})$ has relative\nProperty (T), where $H^{(1)}$ is the closure of the commutator subgroup of $H$.\nIn fact, the assumption that $A$ is in the center of $H$ can be replaced with\nthe weaker assumption that $A$ is abelian and every $H$-invariant finite\nmeasure on the unitary dual of $A$ is supported on the set of fixed points.\n", "title": "Relative Property (T) for Nilpotent Subgroups" }
null
null
[ "Mathematics" ]
null
true
null
18679
null
Validated
null
null
null
{ "abstract": " We study the distributional properties of the linear discriminant function\nunder the assumption of normality by comparing two groups with the same\ncovariance matrix but different mean vectors. A stochastic representation for\nthe discriminant function coefficients is derived which is then used to obtain\ntheir asymptotic distribution under the high-dimensional asymptotic regime. We\ninvestigate the performance of the classification analysis based on the\ndiscriminant function in both small and large dimensions. A stochastic\nrepresentation is established which allows to compute the error rate in an\nefficient way. We further compare the calculated error rate with the optimal\none obtained under the assumption that the covariance matrix and the two mean\nvectors are known. Finally, we present an analytical expression of the error\nrate calculated in the high-dimensional asymptotic regime. The finite-sample\nproperties of the derived theoretical results are assessed via an extensive\nMonte Carlo study.\n", "title": "Discriminant analysis in small and large dimensions" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
18680
null
Validated
null
null
null
{ "abstract": " Despite impressive advances in simultaneous localization and mapping, dense\nrobotic mapping remains challenging due to its inherent nature of being a\nhigh-dimensional inference problem. In this paper, we propose a dense semantic\nrobotic mapping technique that exploits sparse Bayesian models, in particular,\nthe relevance vector machine, for high-dimensional sequential inference. The\ntechnique is based on the principle of automatic relevance determination and\nproduces sparse models that use a small subset of the original dense training\nset as the dominant basis. The resulting map posterior is continuous, and\nqueries can be made efficiently at any resolution. Moreover, the technique has\nprobabilistic outputs per semantic class through Bayesian inference. We\nevaluate the proposed relevance vector semantic map using publicly available\nbenchmark datasets, NYU Depth V2 and KITTI; and the results show promising\nimprovements over the state-of-the-art techniques.\n", "title": "Sparse Bayesian Inference for Dense Semantic Mapping" }
null
null
null
null
true
null
18681
null
Default
null
null
null
{ "abstract": " We overview our recent work defining and studying normal crossings varieties\nand subvarieties in symplectic topology. This work answers a question of Gromov\non the feasibility of introducing singular (sub)varieties into symplectic\ntopology in the case of normal crossings singularities. It also provides a\nnecessary and sufficient condition for smoothing normal crossings symplectic\nvarieties. In addition, we explain some connections with other areas of\nmathematics and discuss a few directions for further research.\n", "title": "Singularities and Semistable Degenerations for Symplectic Topology" }
null
null
[ "Mathematics" ]
null
true
null
18682
null
Validated
null
null
null
{ "abstract": " In this paper, robust nonparametric estimators, instead of local linear\nestimators, are adapted for infinitesimal coefficients associated with\nintegrated jump-diffusion models to avoid the impact of outliers on accuracy.\nFurthermore, consider the complexity of iteration of the solution for local\nM-estimator, we propose the one-step local M-estimators to release the\ncomputation burden. Under appropriate regularity conditions, we prove that\none-step local M-estimators and the fully iterative M-estimators have the same\nperformance in consistency and asymptotic normality. Through simulation, our\nmethod present advantages in bias reduction, robustness and reducing\ncomputation cost. In addition, the estimators are illustrated empirically\nthrough stock index under different sampling frequency.\n", "title": "One-step Local M-estimator for Integrated Jump-Diffusion Models" }
null
null
null
null
true
null
18683
null
Default
null
null
null
{ "abstract": " Numerical simulations of Einstein's field equations provide unique insights\ninto the physics of compact objects moving at relativistic speeds, and which\nare driven by strong gravitational interactions. Numerical relativity has\nplayed a key role to firmly establish gravitational wave astrophysics as a new\nfield of research, and it is now paving the way to establish whether\ngravitational wave radiation emitted from compact binary mergers is accompanied\nby electromagnetic and astro-particle counterparts. As numerical relativity\ncontinues to blend in with routine gravitational wave data analyses to validate\nthe discovery of gravitational wave events, it is essential to develop open\nsource tools to streamline these studies. Motivated by our own experience as\nusers and developers of the open source, community software, the Einstein\nToolkit, we present an open source, Python package that is ideally suited to\nmonitor and post-process the data products of numerical relativity simulations,\nand compute the gravitational wave strain at future null infinity in high\nperformance environments. We showcase the application of this new package to\npost-process a large numerical relativity catalog and extract higher-order\nwaveform modes from numerical relativity simulations of eccentric binary black\nhole mergers and neutron star mergers. This new software fills a critical void\nin the arsenal of tools provided by the Einstein Toolkit Consortium to the\nnumerical relativity community.\n", "title": "Python Open Source Waveform Extractor (POWER): An open source, Python package to monitor and post-process numerical relativity simulations" }
null
null
null
null
true
null
18684
null
Default
null
null
null
{ "abstract": " This paper develops a method to construct uniform confidence bands for a\nnonparametric regression function where a predictor variable is subject to a\nmeasurement error. We allow for the distribution of the measurement error to be\nunknown, but assume that there is an independent sample from the measurement\nerror distribution. The sample from the measurement error distribution need not\nbe independent from the sample on response and predictor variables. The\navailability of a sample from the measurement error distribution is satisfied\nif, for example, either 1) validation data or 2) repeated measurements (panel\ndata) on the latent predictor variable with measurement errors, one of which is\nsymmetrically distributed, are available. The proposed confidence band builds\non the deconvolution kernel estimation and a novel application of the\nmultiplier (or wild) bootstrap method. We establish asymptotic validity of the\nproposed confidence band under ordinary smooth measurement error densities,\nshowing that the proposed confidence band contains the true regression function\nwith probability approaching the nominal coverage probability. To the best of\nour knowledge, this is the first paper to derive asymptotically valid uniform\nconfidence bands for nonparametric errors-in-variables regression. We also\npropose a novel data-driven method to choose a bandwidth, and conduct\nsimulation studies to verify the finite sample performance of the proposed\nconfidence band. Applying our method to a combination of two empirical data\nsets, we draw confidence bands for nonparametric regressions of medical costs\non the body mass index (BMI), accounting for measurement errors in BMI.\nFinally, we discuss extensions of our results to specification testing, cases\nwith additional error-free regressors, and confidence bands for conditional\ndistribution functions.\n", "title": "Uniform confidence bands for nonparametric errors-in-variables regression" }
null
null
null
null
true
null
18685
null
Default
null
null
null
{ "abstract": " We investigate the superconducting-gap anisotropy in one of the recently\ndiscovered BiS$_2$-based superconductors, NdO$_{0.71}$F$_{0.29}$BiS$_2$ ($T_c$\n$\\sim$ 5 K), using laser-based angle-resolved photoemission spectroscopy.\nWhereas the previously discovered high-$T_c$ superconductors such as copper\noxides and iron-based superconductors, which are believed to have\nunconventional superconducting mechanisms, have $3d$ electrons in their\nconduction bands, the conduction band of BiS$_2$-based superconductors mainly\nconsists of Bi 6$p$ electrons, and hence the conventional superconducting\nmechanism might be expected. Contrary to this expectation, we observe a\nstrongly anisotropic superconducting gap. This result strongly suggests that\nthe pairing mechanism for NdO$_{0.71}$F$_{0.29}$BiS$_2$ is unconventional one\nand we attribute the observed anisotropy to competitive or cooperative multiple\nparing interactions.\n", "title": "Unconventional superconductivity in the BiS$_2$-based layered superconductor NdO$_{0.71}$F$_{0.29}$BiS$_2$" }
null
null
null
null
true
null
18686
null
Default
null
null
null
{ "abstract": " The class of Lq-regularized least squares (LQLS) are considered for\nestimating a p-dimensional vector \\b{eta} from its n noisy linear observations\ny = X\\b{eta}+w. The performance of these schemes are studied under the\nhigh-dimensional asymptotic setting in which p grows linearly with n. In this\nasymptotic setting, phase transition diagrams (PT) are often used for comparing\nthe performance of different estimators. Although phase transition analysis is\nshown to provide useful information for compressed sensing, the fact that it\nignores the measurement noise not only limits its applicability in many\napplication areas, but also may lead to misunderstandings. For instance,\nconsider a linear regression problem in which n > p and the signal is not\nexactly sparse. If the measurement noise is ignored in such systems,\nregularization techniques, such as LQLS, seem to be irrelevant since even the\nordinary least squares (OLS) returns the exact solution. However, it is\nwell-known that if n is not much larger than p then the regularization\ntechniques improve the performance of OLS. In response to this limitation of PT\nanalysis, we consider the low-noise sensitivity analysis. We show that this\nanalysis framework (i) reveals the advantage of LQLS over OLS, (ii) captures\nthe difference between different LQLS estimators even when n > p, and (iii)\nprovides a fair comparison among different estimators in high signal-to-noise\nratios. As an application of this framework, we will show that under mild\nconditions LASSO outperforms other LQLS even when the signal is dense. Finally,\nby a simple transformation we connect our low-noise sensitivity framework to\nthe classical asymptotic regime in which n/p goes to infinity and characterize\nhow and when regularization techniques offer improvements over ordinary least\nsquares, and which regularizer gives the most improvement when the sample size\nis large.\n", "title": "Low noise sensitivity analysis of Lq-minimization in oversampled systems" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
18687
null
Validated
null
null
null
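To make the estimators in the record above (id 18687) concrete, the sketch below fits LQLS for q = 1 (LASSO) and q = 2 (ridge) next to ordinary least squares on synthetic data with n only moderately larger than p, the regime where the record argues regularization still pays off. The synthetic signal, the scikit-learn estimators, and the regularization strengths are illustrative assumptions, not the paper's asymptotic analysis.

```python
# Sketch comparing Lq-regularized least squares (q = 1, 2) with OLS on a noisy
# linear model where n is only moderately larger than p.  Data, estimators, and
# regularization strengths are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso, Ridge

rng = np.random.default_rng(0)
n, p = 120, 80
beta = rng.standard_normal(p) * (rng.uniform(size=p) < 0.3)   # mostly-sparse signal
X = rng.standard_normal((n, p))
y = X @ beta + 0.5 * rng.standard_normal(n)

models = {
    "OLS":       LinearRegression(fit_intercept=False),
    "LQLS, q=1": Lasso(alpha=0.05, fit_intercept=False),
    "LQLS, q=2": Ridge(alpha=5.0, fit_intercept=False),
}
for name, model in models.items():
    est = model.fit(X, y).coef_
    print(f"{name:>10}:  ||beta_hat - beta||_2 = {np.linalg.norm(est - beta):.3f}")
```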
{ "abstract": " In this paper, we firstly exploit the inter-user interference (IUI) and\ninter-cell interference (ICI) as useful references to develop a robust\ntransceiver design based on interference alignment for a downlink multi-user\nmulti-cell multiple-input multiple-output (MIMO) interference network under\nchannel estimation error. At transmitters, we propose a two-tier transmit\nbeamforming strategy, we first achieve the inner beamforming direction and\nallocated power by minimizing the interference leakage as well as maximizing\nthe system energy efficiency, respectively. Then, for the outer beamformer\ndesign, we develop an efficient conjugate gradient Grassmann manifold subspace\ntracking algorithm to minimize the distances between the subspace spanned by\ninterference and the interference subspace in the time varying channel. At\nreceivers, we propose a practical interference alignment based on fast and\nrobust fast data projection method (FDPM) subspace tracking algorithm, to\nachieve the receive beamformer under channel uncertainty. Numerical results\nshow that our proposed robust transceiver design achieves better performance\ncompared with some existing methods in terms of the sum rate and the energy\nefficiency.\n", "title": "Robust Transceiver Design Based on Interference Alignment for Multi-User Multi-Cell MIMO Networks with Channel Uncertainty" }
null
null
[ "Computer Science" ]
null
true
null
18688
null
Validated
null
null
null
{ "abstract": " Inspired by the recent work of Carleo and Troyer[1], we apply machine\nlearning methods to quantum mechanics in this article. The radial basis\nfunction network in a discrete basis is used as the variational wavefunction\nfor the ground state of a quantum system. Variational Monte Carlo(VMC)\ncalculations are carried out for some simple Hamiltonians. The results are in\ngood agreements with theoretical values. The smallest eigenvalue of a Hermitian\nmatrix can also be acquired using VMC calculations. Our results demonstrate\nthat machine learning techniques are capable of solving quantum mechanical\nproblems.\n", "title": "Machine learning quantum mechanics: solving quantum mechanics problems using radial basis function networks" }
null
null
null
null
true
null
18689
null
Default
null
null
null
{ "abstract": " A central claim in modern network science is that real-world networks are\ntypically \"scale free,\" meaning that the fraction of nodes with degree $k$\nfollows a power law, decaying like $k^{-\\alpha}$, often with $2 < \\alpha < 3$.\nHowever, empirical evidence for this belief derives from a relatively small\nnumber of real-world networks. We test the universality of scale-free structure\nby applying state-of-the-art statistical tools to a large corpus of nearly 1000\nnetwork data sets drawn from social, biological, technological, and\ninformational sources. We fit the power-law model to each degree distribution,\ntest its statistical plausibility, and compare it via a likelihood ratio test\nto alternative, non-scale-free models, e.g., the log-normal. Across domains, we\nfind that scale-free networks are rare, with only 4% exhibiting the\nstrongest-possible evidence of scale-free structure and 52% exhibiting the\nweakest-possible evidence. Furthermore, evidence of scale-free structure is not\nuniformly distributed across sources: social networks are at best weakly scale\nfree, while a handful of technological and biological networks can be called\nstrongly scale free. These results undermine the universality of scale-free\nnetworks and reveal that real-world networks exhibit a rich structural\ndiversity that will likely require new ideas and mechanisms to explain.\n", "title": "Scale-free networks are rare" }
null
null
[ "Computer Science", "Statistics", "Quantitative Biology" ]
null
true
null
18690
null
Validated
null
null
null
{ "abstract": " We extend the deep and important results of Lichnerowicz, Connes, and\nGromov-Lawson which relate geometry and characteristic numbers to the existence\nand non-existence of metrics of positive scalar curvature (PSC). In particular,\nwe show: that a spin foliation with Hausdorff homotopy groupoid of an\nenlargeable manifold admits no PSC metric; that any metric of PSC on such a\nfoliation is bounded by a multiple of the reciprocal of the foliation K-area of\nthe ambient manifold; and that Connes' vanishing theorem for characteristic\nnumbers of PSC foliations extends to a vanishing theorem for Haefliger\ncohomology classes.\n", "title": "Enlargeability, foliations, and positive scalar curvature" }
null
null
null
null
true
null
18691
null
Default
null
null
null
{ "abstract": " We propose a simple mathematical model for unemployment. Despite its\nsimpleness, we claim that the model is more realistic and useful than recent\nmodels available in the literature. A case study with real data from Portugal\nsupports our claim. An optimal control problem is formulated and solved, which\nprovides some non-trivial and interesting conclusions.\n", "title": "A simple mathematical model for unemployment: a case study in Portugal with optimal control" }
null
null
null
null
true
null
18692
null
Default
null
null
null
{ "abstract": " While the success of deep neural networks (DNNs) is well-established across a\nvariety of domains, our ability to explain and interpret these methods is\nlimited. Unlike previously proposed local methods which try to explain\nparticular classification decisions, we focus on global interpretability and\nask a universally applicable question: given a trained model, which features\nare the most important? In the context of neural networks, a feature is rarely\nimportant on its own, so our strategy is specifically designed to leverage\npartial covariance structures and incorporate variable dependence into feature\nranking. Our methodological contributions in this paper are two-fold. First, we\npropose an effect size analogue for DNNs that is appropriate for applications\nwith highly collinear predictors (ubiquitous in computer vision). Second, we\nextend the recently proposed \"RelATive cEntrality\" (RATE) measure (Crawford et\nal., 2019) to the Bayesian deep learning setting. RATE applies an information\ntheoretic criterion to the posterior distribution of effect sizes to assess\nfeature significance. We apply our framework to three broad application areas:\ncomputer vision, natural language processing, and social science.\n", "title": "Interpreting Deep Neural Networks Through Variable Importance" }
null
null
null
null
true
null
18693
null
Default
null
null
null
{ "abstract": " The Sachdev-Ye--Kitaev is a quantum mechanical model of $N$ Majorana fermions\nwhich displays a number of appealing features -- solvability in the strong\ncoupling regime, near-conformal invariance and maximal chaos -- which make it a\nsuitable model for black holes in the context of the AdS/CFT holography. In\nthis paper we show for the colored SYK model and several of its tensor model\ncousins that the next-to-leading order in the $N$ expansion preserves the\nconformal invariance of the $2$-point function in the strong coupling regime,\nup to the contribution of the Goldstone bosons leading to the spontaneous\nbreaking of the symmetry and which are already seen in the leading order\n$4$-point function. We also comment on the composite field approach for\ncomputing correlation functions in colored tensor models.\n", "title": "Conformality of $1/N$ corrections in SYK-like models" }
null
null
null
null
true
null
18694
null
Default
null
null
null
{ "abstract": " Higher category theory is an exceedingly active area of research, whose rapid\ngrowth has been driven by its penetration into a diverse range of scientific\nfields. Its influence extends through key mathematical disciplines, notably\nhomotopy theory, algebraic geometry and algebra, mathematical physics, to\nencompass important applications in logic, computer science and beyond. Higher\ncategories provide a unifying language whose greatest strength lies in its\nability to bridge between diverse areas and uncover novel applications.\nIn this foundational work we introduce a new approach to higher categories.\nIt builds upon the theory of iterated internal categories, one of the simplest\npossible higher categorical structures available, by adopting a novel and\nremarkably simple \"weak globularity\" postulate and demonstrating that the\nresulting model provides a fully general theory of weak n-categories. The\nlatter are among the most complex of the higher structures, and are crucial for\napplications. We show that this new model of \"weakly globular n-fold\ncategories\" is suitably equivalent to the well studied model of weak\nn-categories due to Tamsamani and Simpson.\n", "title": "Segal-type models of higher categories" }
null
null
[ "Mathematics" ]
null
true
null
18695
null
Validated
null
null
null
{ "abstract": " The SuperCDMS experiment is designed to directly detect weakly interacting\nmassive particles (WIMPs) that may constitute the dark matter in our Galaxy.\nDuring its operation at the Soudan Underground Laboratory, germanium detectors\nwere run in the CDMSlite mode to gather data sets with sensitivity specifically\nfor WIMPs with masses ${<}$10 GeV/$c^2$. In this mode, a higher detector-bias\nvoltage is applied to amplify the phonon signals produced by drifting charges.\nThis paper presents studies of the experimental noise and its effect on the\nachievable energy threshold, which is demonstrated to be as low as 56\neV$_{\\text{ee}}$ (electron equivalent energy). The detector-biasing\nconfiguration is described in detail, with analysis corrections for voltage\nvariations to the level of a few percent. Detailed studies of the\nelectric-field geometry, and the resulting successful development of a fiducial\nparameter, eliminate poorly measured events, yielding an energy resolution\nranging from ${\\sim}$9 eV$_{\\text{ee}}$ at 0 keV to 101 eV$_{\\text{ee}}$ at\n${\\sim}$10 eV$_{\\text{ee}}$. New results are derived for astrophysical\nuncertainties relevant to the WIMP-search limits, specifically examining how\nthey are affected by variations in the most probable WIMP velocity and the\nGalactic escape velocity. These variations become more important for WIMP\nmasses below 10 GeV/$c^2$. Finally, new limits on spin-dependent low-mass\nWIMP-nucleon interactions are derived, with new parameter space excluded for\nWIMP masses $\\lesssim$3 GeV/$c^2$\n", "title": "Low-Mass Dark Matter Search with CDMSlite" }
null
null
null
null
true
null
18696
null
Default
null
null
null
{ "abstract": " In standard general relativity the universe cannot be started with arbitrary\ninitial conditions, because four of the ten components of the Einstein's field\nequations (EFE) are constraints on initial conditions. In the previous work it\nwas proposed to extend the gravity theory to allow free initial conditions,\nwith a motivation to solve the cosmological constant problem. This was done by\nsetting four constraints on metric variations in the action principle, which is\nreasonable because the gravity's physical degrees of freedom are at most six.\nHowever, there are two problems about this theory; the three constraints in\naddition to the unimodular condition were introduced without clear physical\nmeanings, and the flat Minkowski spacetime is unstable against perturbations.\nHere a new set of gravitational field equations is derived by replacing the\nthree constraints with new ones requiring that geodesic paths remain geodesic\nagainst metric variations. The instability problem is then naturally solved.\nImplications for the cosmological constant $\\Lambda$ are unchanged; the theory\nconverges into EFE with nonzero $\\Lambda$ by inflation, but $\\Lambda$ varies on\nscales much larger than the present Hubble horizon. Then galaxies are formed\nonly in small $\\Lambda$ regions, and the cosmological constant problem is\nsolved by the anthropic argument. Because of the increased degrees of freedom\nin metric dynamics, the theory predicts new non-oscillatory modes of metric\nanisotropy generated by quantum fluctuation during inflation, and CMB B-mode\npolarization would be observed differently from the standard predictions by\ngeneral relativity.\n", "title": "Gravity with free initial conditions: a solution to the cosmological constant problem testable by CMB B-mode polarization" }
null
null
null
null
true
null
18697
null
Default
null
null
null
{ "abstract": " Recurrent Neural Networks (RNNs) achieve state-of-the-art results in many\nsequence-to-sequence modeling tasks. However, RNNs are difficult to train and\ntend to suffer from overfitting. Motivated by the Data Processing Inequality\n(DPI), we formulate the multi-layered network as a Markov chain, introducing a\ntraining method that comprises training the network gradually and using\nlayer-wise gradient clipping. We found that applying our methods, combined with\npreviously introduced regularization and optimization methods, resulted in\nimprovements in state-of-the-art architectures operating in language modeling\ntasks.\n", "title": "Gradual Learning of Recurrent Neural Networks" }
null
null
null
null
true
null
18698
null
Default
null
null
null
{ "abstract": " We study classes of Borel subsets of the real line $\\mathbb{R}$ such as\nlevels of the Borel hierarchy and the class of sets that are reducible to the\nset $\\mathbb{Q}$ of rationals, endowed with the Wadge quasi-order of\nreducibility with respect to continuous functions on $\\mathbb{R}$. Notably, we\nexplore several structural properties of Borel subsets of $\\mathbb{R}$ that\ndiverge from those of Polish spaces with dimension zero. Our first main result\nis on the existence of embeddings of several posets into the restriction of\nthis quasi-order to any Borel class that is strictly above the classes of open\nand closed sets, for instance the linear order $\\omega_1$, its reverse\n$\\omega_1^\\star$ and the poset $\\mathcal{P}(\\omega)/\\mathsf{fin}$ of inclusion\nmodulo finite error. As a consequence of its proof, it is shown that there are\nno complete sets for these classes. We further extend the previous theorem to\ntargets that are reducible to $\\mathbb{Q}$. These non-structure results\nmotivate the study of further restrictions of the Wadge quasi-order. In our\nsecond main theorem, we introduce a combinatorial property that is shown to\ncharacterize those $F_\\sigma$ sets that are reducible to $\\mathbb{Q}$. This is\napplied to construct a minimal set below $\\mathbb{Q}$ and prove its uniqueness\nup to Wadge equivalence. We finally prove several results concerning gaps and\ncardinal characteristics of the Wadge quasi-order and thereby answer questions\nof Brendle and Geschke.\n", "title": "Borel subsets of the real line and continuous reducibility" }
null
null
null
null
true
null
18699
null
Default
null
null
null
{ "abstract": " Recently, Prakash et. al. have discovered bulk superconductivity in single\ncrystals of bismuth, which is a semi metal with extremely low carrier density.\nAt such low density, we argue that conventional electron-phonon coupling is too\nweak to be responsible for the binding of electrons into Cooper pairs. We study\na dynamically screened Coulomb interaction with effective attraction generated\non the scale of the collective plasma modes. We model the electronic states in\nbismuth to include three Dirac pockets with high velocity and one hole pocket\nwith a significantly smaller velocity. We find a weak coupling instability,\nwhich is greatly enhanced by the presence of the hole pocket. Therefore, we\nargue that bismuth is the first material to exhibit superconductivity driven by\nretardation effects of Coulomb repulsion alone. By using realistic parameters\nfor bismuth we find that the acoustic plasma mode does not play the central\nrole in pairing. We also discuss a matrix element effect, resulting from the\nDirac nature of the conduction band, which may affect $T_c$ in the $s$-wave\nchannel without breaking time-reversal symmetry.\n", "title": "Pairing from dynamically screened Coulomb repulsion in bismuth" }
null
null
[ "Physics" ]
null
true
null
18700
null
Validated
null
null