Column types: text (null), inputs (dict), prediction (null), prediction_agent (null), annotation (list), annotation_agent (null), multi_label (bool, 1 class), explanation (null), id (string, length 1-5), metadata (null), status (string, 2 classes), event_timestamp (null), metrics (null).

| text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| null | { "abstract": "This paper introduces a framework for speeding up Bayesian inference conducted in the presence of large datasets. We design a Markov chain whose transition kernel uses an (unknown) fraction (of fixed size) of the available data that is randomly refreshed throughout the algorithm. Inspired by the Approximate Bayesian Computation (ABC) literature, the subsampling process is guided by the fidelity to the observed data, as measured by summary statistics. The resulting algorithm, Informed Sub-Sampling MCMC (ISS-MCMC), is a generic and flexible approach which, contrary to existing scalable methodologies, preserves the simplicity of the Metropolis-Hastings algorithm. Even though exactness is lost, i.e. the chain distribution approximates the posterior, we theoretically study and quantify this bias and show on a diverse set of examples that it yields excellent performance when the computational budget is limited. We also show that setting the summary statistics as the maximum likelihood estimator, if available and cheap to compute, is supported by theoretical arguments.", "title": "Informed Sub-Sampling MCMC: Approximate Bayesian Inference for Large Datasets" } | null | null | null | null | true | null | 14001 | null | Default | null | null |
| null | { "abstract": "Distributions of anthropogenic signatures (impacts and activities) are mathematically analysed. The aim is to understand the Anthropocene and to see whether anthropogenic signatures could be used to determine its beginning. A total of 23 signatures were analysed and the results are presented in 31 diagrams. Some of these signatures contain indistinguishable natural components but most of them are of purely anthropogenic origin. Great care was taken to identify abrupt accelerations, which could be used to determine the beginning of the Anthropocene. The results of the analysis can be summarised in three conclusions. 1. Anthropogenic signatures cannot be used to determine the beginning of the Anthropocene. 2. There was no abrupt Great Acceleration around 1950 or around any other time. 3. Anthropogenic signatures are characterised by the Great Deceleration in the second half of the 20th century. The second half of the 20th century does not mark the beginning of the Anthropocene but most likely the beginning of the end of the strong anthropogenic impacts, maybe even the beginning of a transition to a sustainable future. The Anthropocene is a unique stage in human experience but it has no clearly marked beginning and it is probably not a new geological epoch.", "title": "Mathematical Analysis of Anthropogenic Signatures: The Great Deceleration" } | null | null | null | null | true | null | 14002 | null | Default | null | null |
| null | { "abstract": "We consider regret minimization in repeated games with non-convex loss functions. Minimizing the standard notion of regret is computationally intractable. Thus, we define a natural notion of regret which permits efficient optimization and generalizes offline guarantees for convergence to an approximate local optimum. We give gradient-based methods that achieve optimal regret, which in turn guarantee convergence to equilibrium in this framework.", "title": "Efficient Regret Minimization in Non-Convex Games" } | null | null | null | null | true | null | 14003 | null | Default | null | null |
| null | { "abstract": "Given a network of nodes, minimizing the spread of a contagion using a limited budget is a well-studied problem with applications in network security, viral marketing, social networks, and public health. In real graphs, a virus may infect a node, which in turn infects its neighboring nodes, and this may trigger an epidemic in the whole graph. The goal thus is to select the best k nodes (budget constraint) to be immunized (vaccinated, screened, filtered) so that the remaining graph is less prone to the epidemic. It is known that the problem is, in all practical models, computationally intractable even for moderate-sized graphs. In this paper we employ ideas from spectral graph theory to define the relevance and importance of nodes. Using novel graph-theoretic techniques, we then design an efficient approximation algorithm to immunize the graph. Theoretical guarantees on the running time of our algorithm show that it is more efficient than any other known solution in the literature. We test the performance of our algorithm on several real-world graphs. Experiments show that our algorithm scales well for large graphs and outperforms state-of-the-art algorithms both in quality (containment of the epidemic) and efficiency (runtime and space complexity).", "title": "Spectral Methods for Immunization of Large Networks" } | null | null | null | null | true | null | 14004 | null | Default | null | null |
| null | { "abstract": "In this paper, we consider the nonlinear inhomogeneous compressible elastic waves in three spatial dimensions when the density is a small disturbance around a constant state. In the homogeneous case, almost global existence was established by Klainerman-Sideris [1996_CPAM], and global existence was established independently by Agemi [2000_Invent. Math.] and Sideris [1996_Invent. Math., 2000_Ann. Math.]. Here we establish the corresponding almost global and global existence theory in the inhomogeneous case.", "title": "Long-time existence of nonlinear inhomogeneous compressible elastic waves" } | null | null | null | null | true | null | 14005 | null | Default | null | null |
| null | { "abstract": "The Poisson distribution is used for modeling noise in photon-limited imaging. While canonical examples include relatively exotic types of sensing like spectral imaging or astronomy, the problem is relevant to regular photography now more than ever due to the booming market for mobile cameras. The restricted form factor limits the amount of absorbed light, so computational post-processing is called for. In this paper, we make use of the powerful framework of deep convolutional neural networks for Poisson denoising. We demonstrate how, by training the same network with images having a specific peak value, our denoiser outperforms the previous state-of-the-art by a large margin both visually and quantitatively. Being flexible and data-driven, our solution avoids the heavy ad hoc engineering used in previous methods and is an order of magnitude faster. We further show that by adding a reasonable prior on the class of the image being processed, another significant boost in performance is achieved.", "title": "Deep Convolutional Denoising of Low-Light Images" } | null | null | null | null | true | null | 14006 | null | Default | null | null |
| null | { "abstract": "Emil Artin defined a zeta function for algebraic curves over finite fields and made a conjecture about them analogous to the famous Riemann hypothesis. This and other conjectures about these zeta functions would come to be called the Weil conjectures, which were proved by Weil for curves and later by Deligne for varieties over finite fields. Much work was done in the search for a proof of these conjectures, including the development in algebraic geometry of a Weil cohomology theory for these varieties, which uses the Frobenius operator on a finite field. The zeta function is then expressed as a determinant, allowing the properties of the function to relate to those of the operator. The search for a suitable cohomology theory and associated operator to prove the Riemann hypothesis is still on. In this paper, we study the properties of the derivative operator $D = \frac{d}{dz}$ on a particular weighted Bergman space of entire functions. The operator $D$ can be naturally viewed as the `infinitesimal shift of the complex plane'. Furthermore, this operator is meant to be the replacement for the Frobenius operator in the general case and is used to construct an operator associated to any suitable meromorphic function. We then show that the meromorphic function can be recovered by using a regularized determinant involving the above operator. This is illustrated in some important special cases: rational functions, zeta functions of curves over finite fields, the Riemann zeta function, and culminating in a quantized version of the Hadamard factorization theorem that applies to any entire function of finite order. Our construction is motivated in part by [23] on the infinitesimal shift of the real line, as well as by earlier work of Deninger [10] on cohomology in number theory and a conjectural `fractal cohomology theory' envisioned in [25] and [28].", "title": "Towards a fractal cohomology: Spectra of Polya--Hilbert operators, regularized determinants and Riemann zeros" } | null | null | null | null | true | null | 14007 | null | Default | null | null |
| null | { "abstract": "The forgotten topological index or F-index of a graph is defined as the sum of the cubes of the degrees of all the vertices of the graph. In this paper we study the F-index of four operations related to the lexicographic product on graphs, which were introduced by Sarala et al. [D. Sarala, H. Deng, S.K. Ayyaswamy and S. Balachandran, The Zagreb indices of graphs based on four new operations related to the lexicographic product, \textit{Applied Mathematics and Computation}, 309 (2017) 156--169.].", "title": "F-index of graphs based on four operations related to the lexicographic product" } | null | null | null | null | true | null | 14008 | null | Default | null | null |
| null | { "abstract": "We show that the problem of deleting a minimum number of vertices from a graph to obtain a graph embeddable on a surface of a given Euler genus is solvable in time $2^{C_g \cdot k^2 \log k} n^{O(1)}$, where $k$ is the size of the deletion set, $C_g$ is a constant depending on the Euler genus $g$ of the target surface, and $n$ is the size of the input graph. On the way to this result, we develop an algorithm solving the problem in question in time $2^{O((t+g) \log (t+g))} n$, given a tree decomposition of the input graph of width $t$. The results generalize previous algorithms for the surface being a sphere by Marx and Schlotter [Algorithmica 2012], Kawarabayashi [FOCS 2009], and Jansen, Lokshtanov, and Saurabh [SODA 2014].", "title": "Deleting vertices to graphs of bounded genus" } | null | null | null | null | true | null | 14009 | null | Default | null | null |
| null | { "abstract": "We propose using the storage ring EDM method to search for the axion dark matter induced EDM oscillation in nucleons. The method uses a combination of B and E fields to produce a resonance between the $g-2$ spin precession frequency and the background axion field oscillation, greatly enhancing the sensitivity to it. An axion frequency range from $10^{-9}$ Hz to 100 MHz can in principle be scanned with high sensitivity, corresponding to an $f_a$ range of $10^{13} $ GeV $\leq f_a \leq 10^{30}$ GeV, the breakdown scale of the global symmetry generating the axion or axion-like particles (ALPs).", "title": "Axion dark matter search using the storage ring EDM method" } | null | null | null | null | true | null | 14010 | null | Default | null | null |
| null | { "abstract": "Time crystals are quantum many-body systems which, due to interactions between particles, are able to spontaneously self-organize their motion in a periodic way in time, by analogy with the formation of crystalline structures in space in condensed matter physics. In solid state physics, properties of space crystals are often investigated with the help of external potentials that are spatially periodic and reflect various crystalline structures. A similar approach can be applied for time crystals, as periodically driven systems constitute counterparts of spatially periodic systems, but in the time domain. Here we show that condensed matter problems ranging from single particles in potentials of quasi-crystal structure to many-body systems with exotic long-range interactions can be realized in the time domain with an appropriate periodic driving. Moreover, it is possible to create molecules where atoms are bound together due to destructive interference if the atomic scattering length is modulated in time.", "title": "Time crystal platform: from quasi-crystal structures in time to systems with exotic interactions" } | null | null | null | null | true | null | 14011 | null | Default | null | null |
| null | { "abstract": "Echocardiography is essential to modern cardiology. However, human interpretation limits high-throughput analysis, preventing echocardiography from reaching its full clinical and research potential for precision medicine. Deep learning is a cutting-edge machine-learning technique that has been useful in analyzing medical images but has not yet been widely applied to echocardiography, partly due to the complexity of echocardiograms' multi-view, multi-modality format. The essential first step toward comprehensive computer-assisted echocardiographic interpretation is determining whether computers can learn to recognize standard views. To this end, we anonymized 834,267 transthoracic echocardiogram (TTE) images from 267 patients (20 to 96 years, 51 percent female, 26 percent obese) seen between 2000 and 2017 and labeled them according to standard views. Images covered a range of real-world clinical variation. We built a multilayer convolutional neural network and used supervised learning to simultaneously classify 15 standard views. Eighty percent of the data was randomly chosen for training and 20 percent reserved for validation and testing on never-seen echocardiograms. Using multiple images from each clip, the model classified among 12 video views with 97.8 percent overall test accuracy without overfitting. Even on single low-resolution images, test accuracy among 15 views was 91.7 percent, versus 70.2 to 83.5 percent for board-certified echocardiographers. Confusion matrices, occlusion experiments, and saliency mapping showed that the model finds recognizable similarities among related views and classifies using clinically relevant image features. In conclusion, deep neural networks can classify essential echocardiographic views simultaneously and with high accuracy. Our results provide a foundation for more complex deep-learning-assisted echocardiographic interpretation.", "title": "Fast and accurate classification of echocardiograms using deep learning" } | null | null | ["Computer Science"] | null | true | null | 14012 | null | Validated | null | null |
| null | { "abstract": "Anomaly detection in database management systems (DBMSs) is difficult because of the increasing number of statistic (stat) and event metrics in big data systems. In this paper, I propose an automatic DBMS diagnosis system that detects anomaly periods with abnormal DB stat metrics and finds causal events in those periods. Reconstruction error from a deep autoencoder and a statistical process control approach are applied to detect time periods with anomalies. Related events are found using time series similarity measures between events and abnormal stat metrics. After training the deep autoencoder with DBMS metric data, the efficacy of anomaly detection is investigated on other DBMSs containing anomalies. Experimental results show the effectiveness of the proposed model, especially the batch temporal normalization layer. The proposed model is used for publishing automatic DBMS diagnosis reports in order to determine DBMS configuration and SQL tuning.", "title": "Anomaly Detection in Multivariate Non-stationary Time Series for Automatic DBMS Diagnosis" } | null | null | null | null | true | null | 14013 | null | Default | null | null |
| null | { "abstract": "In this paper we deal with Seifert fibre spaces, which are compact 3-manifolds admitting a foliation by circles. We give a combinatorial description for these manifolds in all the possible cases: orientable, non-orientable, closed, with boundary. Moreover, we compute a potentially sharp upper bound for their complexity in terms of the invariants of the combinatorial description, extending to the non-orientable case results by Fominykh and Wiest for the orientable case with boundary and by Martelli and Petronio for the closed orientable case.", "title": "On the complexity of non-orientable Seifert fibre spaces" } | null | null | null | null | true | null | 14014 | null | Default | null | null |
| null | { "abstract": "UFMC modulation is among the most widely considered solutions for the realization of beyond-OFDM air interfaces for future wireless networks. This paper focuses on the design and analysis of a UFMC transceiver equipped with multiple antennas and operating at millimeter wave carrier frequencies. The paper provides the full mathematical model of a MIMO-UFMC transceiver, taking into account the presence of hybrid analog/digital beamformers at both ends of the communication links. Then, several detection structures are proposed, both for the case of single-packet isolated transmission and for the case of multiple-packet continuous transmission. In the latter situation, the paper also considers the case in which no guard time between adjacent packets is inserted, trading an increased level of interference for higher spectral efficiency. At the analysis stage, the several considered detection structures and transmission schemes are compared in terms of bit error rate, root mean square error, and system throughput. The numerical results show that the proposed transceiver algorithms are effective and that the linear MMSE data detector is capable of well managing the increased interference brought by the removal of guard times among consecutive packets, thus yielding throughput gains of about 10-13 $\%$. The effect of phase noise at the receiver is also numerically assessed, and it is shown that the recursive implementation of the linear MMSE detector exhibits some degree of robustness against this disturbance.", "title": "MIMO-UFMC Transceiver Schemes for Millimeter Wave Wireless Communications" } | null | null | null | null | true | null | 14015 | null | Default | null | null |
| null | { "abstract": "Humans can easily describe, imagine, and, crucially, predict a wide variety of behaviors of liquids--splashing, squirting, gushing, sloshing, soaking, dripping, draining, trickling, pooling, and pouring--despite tremendous variability in their material and dynamical properties. Here we propose and test a computational model of how people perceive and predict these liquid dynamics, based on coarse approximate simulations of fluids as collections of interacting particles. Our model is analogous to a "game engine in the head", drawing on techniques for interactive simulations (as in video games) that optimize for efficiency and natural appearance rather than physical accuracy. In two behavioral experiments, we found that the model accurately captured people's predictions about how liquids flow among complex solid obstacles, and was significantly better than two alternatives based on simple heuristics and deep neural networks. Our model was also able to explain how people's predictions varied as a function of the liquids' properties (e.g., viscosity and stickiness). Together, the model and empirical results extend the recent proposal that human physical scene understanding for the dynamics of rigid, solid objects can be supported by approximate probabilistic simulation, to the more complex and unexplored domain of fluid dynamics.", "title": "Modeling human intuitions about liquid flow with particle-based simulation" } | null | null | null | null | true | null | 14016 | null | Default | null | null |
| null | { "abstract": "Recently, a paper by Klimovskikh et al. was published presenting an experimental and theoretical analysis of the graphene/Pb/Pt(111) system. The authors investigate the crystallographic and electronic structure of this graphene-based system by means of LEED, ARPES, and spin-resolved PES of the graphene $\pi$ states in the vicinity of the Dirac point of graphene. The authors demonstrate that an energy gap of approx. 200 meV is opened in the spectral function of graphene directly at the Dirac point and that a spin splitting of 100 meV is detected for the upper part of the Dirac cone. On the basis of spin-resolved photoelectron spectroscopy measurements of the region around the gap, the authors claim that these splittings are of spin-orbit nature and that the observed spin structure confirms the observation of the quantum spin Hall state in graphene proposed in earlier theoretical works. Here we show that a careful, systematic analysis of the experimental data presented in that manuscript is needed and that their interpretation requires more critical consideration before such conclusions can be made. Our analysis demonstrates that the proposed effects and interpretations are questionable and require further, more careful experiments.", "title": "Comment on \"Spin-Orbit Coupling Induced Gap in Graphene on Pt(111) with Intercalated Pb Monolayer\"" } | null | null | null | null | true | null | 14017 | null | Default | null | null |
| null | { "abstract": "Learning representations from relative similarity comparisons, often called ordinal embedding, has gained rising attention in recent years. Most of the existing methods are batch methods designed mainly around convex optimization, e.g., the projected gradient descent method. However, they are generally time-consuming because the singular value decomposition (SVD) is commonly adopted during the update, especially when the data size is very large. To overcome this challenge, we propose a stochastic algorithm called SVRG-SBB, which has the following features: (a) it is SVD-free via dropping convexity, with good scalability through the use of a stochastic algorithm, i.e., stochastic variance reduced gradient (SVRG), and (b) it has an adaptive step size choice via a new stabilized Barzilai-Borwein (SBB) method, as the original version for convex problems might fail for the considered stochastic \textit{non-convex} optimization problem. Moreover, we show that the proposed algorithm converges to a stationary point at a rate $\mathcal{O}(\frac{1}{T})$ in our setting, where $T$ is the number of total iterations. Numerous simulations and real-world data experiments are conducted to show the effectiveness of the proposed algorithm by comparison with state-of-the-art methods; in particular, it achieves much lower computational cost with good prediction performance.", "title": "Stochastic Non-convex Ordinal Embedding with Stabilized Barzilai-Borwein Step Size" } | null | null | ["Computer Science", "Statistics"] | null | true | null | 14018 | null | Validated | null | null |
| null | { "abstract": "We present an updated version of the mass--metallicity relation (MZR) using integral field spectroscopy data obtained from 734 galaxies observed by the CALIFA survey. These unparalleled spatially resolved spectroscopic data allow us to determine the metallicity at the same physical scale ($\mathrm{R_{e}}$) for different calibrators. We obtain MZ relations with similar shapes for all calibrators, once the scale factors among them are taken into account. We do not find any significant secondary relation of the MZR with either the star formation rate (SFR) or the specific SFR for any of the calibrators used in this study, based on the analysis of the residuals of the best fitted relation. However, we do see a hint for a (s)SFR-dependent deviation of the MZ relation at low masses (M$<$10$^{9.5}$M$_\odot$), where our sample is not complete. We are thus unable to confirm the results by Mannucci et al. (2010), although we cannot exclude that this result is due to the differences in the analysed datasets. In contrast, our results are inconsistent with the results by Lara-Lopez et al. (2010), and we can exclude the presence of a SFR-Mass-Oxygen abundance Fundamental Plane. These results agree with previous findings suggesting that either (1) the secondary relation with the SFR could be induced by an aperture effect in single fiber/aperture spectroscopic surveys, (2) it could be related to a local effect confined to the central regions of galaxies, or (3) it is just restricted to the low-mass regime, or a combination of the three effects.", "title": "The Mass-Metallicity Relation revisited with CALIFA" } | null | null | null | null | true | null | 14019 | null | Default | null | null |
| null | { "abstract": "Instanton partition functions of $\mathcal{N}=1$ 5d Super Yang-Mills reduced on $S^1$ can be engineered in type IIB string theory from the $(p,q)$-branes web diagram. To this diagram is superimposed a web of representations of the Ding-Iohara-Miki (DIM) algebra that acts on the partition function. In this correspondence, each segment is associated to a representation, and the (topological string) vertex is identified with the intertwiner operator constructed by Awata, Feigin and Shiraishi. We define a new intertwiner acting on the representation spaces of levels $(1,n)\otimes(0,m)\to(1,n+m)$, thereby generalizing to higher rank $m$ the original construction. It allows us to use a folded version of the usual $(p,q)$-web diagram, bringing great simplifications to actual computations. As a result, the characterization of Gaiotto states and vertical intertwiners, previously obtained by some of the authors, is uplifted to operator relations acting in the Fock space of horizontal representations. We further develop a method to build qq-characters of linear quivers based on the horizontal action of DIM elements. While fundamental qq-characters can be built using the coproduct, higher ones require the introduction of a (quantum) Weyl reflection acting on tensor products of DIM generators.", "title": "(p,q)-webs of DIM representations, 5d N=1 instanton partition functions and qq-characters" } | null | null | ["Mathematics"] | null | true | null | 14020 | null | Validated | null | null |
| null | { "abstract": "This paper presents the kinematic analysis of the 3-PPPS parallel robot with an equilateral mobile platform and an equilateral-shaped base. Like the other 3-PPPS robots studied in the literature, it is proved that the parallel singularities depend only on the orientation of the end-effector. Quaternion parameters are used to represent the singularity surfaces. The study of the direct kinematic model shows that this robot admits a self-motion of the Cardanic type, which explains why the direct kinematic model admits an infinite number of solutions at the "home" position in the center of the workspace; this self-motion had never been studied until now.", "title": "Self-Motion of the 3-PPPS Parallel Robot with Delta-Shaped Base" } | null | null | null | null | true | null | 14021 | null | Default | null | null |
| null | { "abstract": "BigDatalog is an extension of Datalog that achieves performance and scalability on both Apache Spark and multicore systems, to the point that its graph analytics outperform those written in GraphX. Looking back, we see how this realizes the ambitious goal pursued by deductive database researchers beginning forty years ago: the goal of combining the rigor and power of logic in expressing queries and reasoning with the performance and scalability by which relational databases managed Big Data. This goal led to Datalog, which is based on Horn Clauses like Prolog but employs implementation techniques, such as Semi-naive Fixpoint and Magic Sets, that extend the bottom-up computation model of relational systems, and thus obtain the performance and scalability that relational systems had achieved, as far back as the 80s, using data-parallelization on shared-nothing architectures. But this goal proved difficult to achieve because of major issues at (i) the language level and (ii) the system level. The paper describes how (i) was addressed by simple rules under which the fixpoint semantics extends to programs using count, sum and extrema in recursion, and (ii) was tamed by parallel compilation techniques that achieve scalability on multicore systems and Apache Spark. This paper is under consideration for acceptance in Theory and Practice of Logic Programming (TPLP).", "title": "Scaling-Up Reasoning and Advanced Analytics on BigData" } | null | null | null | null | true | null | 14022 | null | Default | null | null |
| null | { "abstract": "We present long-baseline ALMA observations of the strong gravitational lens H-ATLAS J090740.0-004200 (SDP.9), which consists of an elliptical galaxy at $z_{\mathrm{L}}=0.6129$ lensing a background submillimeter galaxy into two extended arcs. The data include Band 6 continuum observations, as well as CO $J$=6$-$5 molecular line observations, from which we measure an updated source redshift of $z_{\mathrm{S}}=1.5747$. The image morphology in the ALMA data is different from that of the HST data, indicating a spatial offset between the stellar, gas, and dust components of the source galaxy. We model the lens as an elliptical power law density profile with external shear, using a combination of archival HST data and conjugate points identified in the ALMA data. Our best model has an Einstein radius of $\theta_{\mathrm{E}}=0.66\pm0.01$ and a slightly steeper than isothermal mass profile slope. We search for the central image of the lens, which can be used to constrain the inner mass distribution of the lens galaxy, including the central supermassive black hole, but do not detect it in the integrated CO image at a 3$\sigma$ rms level of 0.0471 Jy km s$^{-1}$.", "title": "ALMA Observations of the Gravitational Lens SDP.9" } | null | null | null | null | true | null | 14023 | null | Default | null | null |
| null | { "abstract": "The computation of the Noether numbers of all groups of order less than thirty-two is completed. It turns out that for these groups in non-modular characteristic the Noether number is attained on a multiplicity free representation, it is strictly monotone on subgroups and factor groups, and it does not depend on the characteristic. Algorithms are developed and used to determine the small and large Davenport constants of these groups. For each of these groups the Noether number is greater than the small Davenport constant, whereas the first example of a group whose Noether number exceeds the large Davenport constant is found, answering partially a question posed by Geroldinger and Grynkiewicz.", "title": "The Noether numbers and the Davenport constants of the groups of order less than 32" } | null | null | null | null | true | null | 14024 | null | Default | null | null |
| null | { "abstract": "We report that the Lugiato-Lefever equation describing frequency comb generation in ring resonators with localized pump and loss terms also describes the simultaneous nonlinear resonances leading to the multistability of nonlinear modes and coexisting solitons that are associated with spectrally distinct frequency combs.", "title": "Multistability and coexisting soliton combs in ring resonators: the Lugiato-Lefever approach" } | null | null | null | null | true | null | 14025 | null | Default | null | null |
| null | { "abstract": "Models of percolation processes on networks currently assume locally tree-like structures at low densities, and are derived exactly only in the thermodynamic limit. Finite size effects and the presence of short loops in real systems, however, cause a deviation between the empirical percolation threshold $p_c$ and its model-predicted value $\pi_c$. Here we show the existence of an empirical linear relation between $p_c$ and $\pi_c$ across a large number of real and model networks. Such a putatively universal relation can then be used to correct the estimated value of $\pi_c$. We further show how to obtain a more precise relation using the concept of the complement graph, by investigating the connection between the percolation threshold of a network, $p_c$, and that of its complement, $\bar{p}_c$.", "title": "Numerical assessment of the percolation threshold using complement networks" } | null | null | null | null | true | null | 14026 | null | Default | null | null |
null |
{
"abstract": " The L-intersection graphs are the graphs that have a representation as\nintersection graphs of axis parallel shapes in the plane. A subfamily of these\ngraphs are {L, |, --}-contact graphs which are the contact graphs of axis\nparallel L, |, and -- shapes in the plane. We prove here two results that were\nconjectured by Chaplick and Ueckerdt in 2013. We show that planar graphs are\nL-intersection graphs, and that triangle-free planar graphs are {L, |,\n--}-contact graphs. These results are obtained by a new and simple\ndecomposition technique for 4-connected triangulations. Our results also\nprovide a much simpler proof of the known fact that planar graphs are segment\nintersection graphs.\n",
"title": "Planar graphs as L-intersection or L-contact graphs"
}
| null | null | null | null | true | null |
14027
| null |
Default
| null | null |
null |
{
"abstract": " In the classical problem of scheduling on unrelated parallel machines, a set\nof jobs has to be assigned to a set of machines. The jobs have a processing\ntime depending on the machine and the goal is to minimize the makespan, that is\nthe maximum machine load. It is well known that this problem is NP-hard and\ndoes not allow polynomial time approximation algorithms with approximation\nguarantees smaller than $1.5$ unless P$=$NP. We consider the case that there\nare only a constant number $K$ of machine types. Two machines have the same\ntype if all jobs have the same processing time for them. This variant of the\nproblem is strongly NP-hard already for $K=1$. We present an efficient\npolynomial time approximation scheme (EPTAS) for the problem, that is, for any\n$\\varepsilon > 0$ an assignment with makespan of length at most\n$(1+\\varepsilon)$ times the optimum can be found in polynomial time in the\ninput length and the exponent is independent of $1/\\varepsilon$. In particular\nwe achieve a running time of $2^{\\mathcal{O}(K\\log(K)\n\\frac{1}{\\varepsilon}\\log^4 \\frac{1}{\\varepsilon})}+\\mathrm{poly}(|I|)$, where\n$|I|$ denotes the input length. Furthermore, we study three other problem\nvariants and present an EPTAS for each of them: The Santa Claus problem, where\nthe minimum machine load has to be maximized; the case of scheduling on\nunrelated parallel machines with a constant number of uniform types, where\nmachines of the same type behave like uniformly related machines; and the\nmultidimensional vector scheduling variant of the problem where both the\ndimension and the number of machine types are constant. For the Santa Claus\nproblem we achieve the same running time. The results are achieved, using mixed\ninteger linear programming and rounding techniques.\n",
"title": "An EPTAS for Scheduling on Unrelated Machines of Few Different Types"
}
| null | null | null | null | true | null |
14028
| null |
Default
| null | null |
null |
{
"abstract": " Determination of the pairing symmetry in monolayer FeSe films on SrTiO3 is a\nrequisite for understanding the high superconducting transition temperature in\nthis system, which has attracted intense theoretical and experimental studies\nbut remains controversial. Here, by introducing several types of point defects\nin FeSe monolayer films, we conduct a systematic investigation on the\nimpurity-induced electronic states by spatially resolved scanning tunneling\nspectroscopy. Ranging from surface adsorption, chemical substitution to\nintrinsic structural modification, these defects generate a variety of\nscattering strength, which renders new insights on the pairing symmetry.\n",
"title": "An extensive impurity-scattering study on the pairing symmetry of monolayer FeSe films on SrTiO3"
}
| null | null | null | null | true | null |
14029
| null |
Default
| null | null |
null |
{
"abstract": " This paper proposes a novel method to filter out false alarms in a LiDAR\nsystem by using the temporal correlation of target-reflected photons. Because\nof the inevitable noise, which is due to background light and dark counts of\nthe detector, the depth imaging of a LiDAR system suffers from a large estimation\nerror. Our method combines the Poisson statistical model with the different\ndistributions of signal and noise along the time axis. By selecting a\nproper threshold, our method can effectively filter out the false alarms of the\nsystem and use the ToFs of detected signal photons to rebuild the depth image\nof the scene. The experimental results reveal that our method can quickly\ndistinguish the distance between two close objects, which is otherwise confused by\nthe high background noise, and acquire an accurate depth image of the scene.\nOur method does not increase the complexity of the system and is useful in\npower-limited depth imaging.\n",
"title": "Fast Depth Imaging Denoising with the Temporal Correlation of Photons"
}
| null | null |
[
"Physics"
] | null | true | null |
14030
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the question of extending propositional logic to a logic of\nplausible reasoning, and posit four requirements that any such extension should\nsatisfy. Each is a requirement that some property of classical propositional\nlogic be preserved in the extended logic; as such, the requirements are simpler\nand less problematic than those used in Cox's Theorem and its variants. As with\nCox's Theorem, our requirements imply that the extended logic must be\nisomorphic to (finite-set) probability theory. We also obtain specific\nnumerical values for the probabilities, recovering the classical definition of\nprobability as a theorem, with truth assignments that satisfy the premise\nplaying the role of the \"possible cases.\"\n",
"title": "From Propositional Logic to Plausible Reasoning: A Uniqueness Theorem"
}
| null | null | null | null | true | null |
14031
| null |
Default
| null | null |
null |
{
"abstract": " Our world can be succinctly and compactly described as structured scenes of\nobjects and relations. A typical room, for example, contains salient objects\nsuch as tables, chairs and books, and these objects typically relate to each\nother by their underlying causes and semantics. This gives rise to correlated\nfeatures, such as position, function and shape. Humans exploit knowledge of\nobjects and their relations for learning a wide spectrum of tasks, and more\ngenerally when learning the structure underlying observed data. In this work,\nwe introduce relation networks (RNs) - a general purpose neural network\narchitecture for object-relation reasoning. We show that RNs are capable of\nlearning object relations from scene description data. Furthermore, we show\nthat RNs can act as a bottleneck that induces the factorization of objects from\nentangled scene description inputs, and from distributed deep representations\nof scene images provided by a variational autoencoder. The model can also be\nused in conjunction with differentiable memory mechanisms for implicit relation\ndiscovery in one-shot learning tasks. Our results suggest that relation\nnetworks are a potentially powerful architecture for solving a variety of\nproblems that require object relation reasoning.\n",
"title": "Discovering objects and their relations from entangled scene representations"
}
| null | null | null | null | true | null |
14032
| null |
Default
| null | null |
null |
{
"abstract": " We present the concept of magnetic gas detection by the Extraordinary Hall\neffect (EHE). The technique is compatible with the existing conductometric gas\ndetection technologies and allows simultaneous measurement of two independent\nparameters: resistivity and magnetization affected by the target gas.\nFeasibility of the approach is demonstrated by detecting low concentration\nhydrogen using thin CoPd films as the sensor material. The Hall effect\nsensitivity of the optimized samples exceeds 240% per 10^4 ppm at hydrogen\nconcentrations below 0.5% in the hydrogen/nitrogen atmosphere, which is more\nthan two orders of magnitude higher than the sensitivity of the conductance\ndetection.\n",
"title": "Hall effect spintronics for gas detection"
}
| null | null |
[
"Physics"
] | null | true | null |
14033
| null |
Validated
| null | null |
null |
{
"abstract": " In this article we present an automatic method for charge and mass\nidentification of charged nuclear fragments produced in heavy ion collisions at\nintermediate energies. The algorithm combines a generative model of DeltaE - E\nrelation and a Covariance Matrix Adaptation Evolutionary Strategy (CMA-ES). The\nCMA-ES is a stochastic and derivative-free method employed to search parameter\nspace of the model by means of a fitness function. The article describes\ndetails of the method along with results of an application on simulated labeled\ndata.\n",
"title": "An evolutionary strategy for DeltaE - E identification"
}
| null | null | null | null | true | null |
14034
| null |
Default
| null | null |
null |
{
"abstract": " This paper proposes a model of information cascades as directed spanning\ntrees (DSTs) over observed documents. In addition, we propose a contrastive\ntraining procedure that exploits partial temporal ordering of node infections\nin lieu of labeled training links. This combination of model and unsupervised\ntraining makes it possible to improve on models that use infection times alone\nand to exploit arbitrary features of the nodes and of the text content of\nmessages in information cascades. With only basic node and time lag features\nsimilar to previous models, the DST model achieves performance with\nunsupervised training comparable to strong baselines on a blog network\ninference task. Unsupervised training with additional content features achieves\nsignificantly better results, reaching half the accuracy of a fully supervised\nmodel.\n",
"title": "Contrastive Training for Models of Information Cascades"
}
| null | null |
[
"Computer Science"
] | null | true | null |
14035
| null |
Validated
| null | null |
null |
{
"abstract": " We present inferences on the geometry and kinematics of the broad-Hbeta\nline-emitting region in four active galactic nuclei monitored as a part of the\nfall 2010 reverberation mapping campaign at MDM Observatory led by the Ohio\nState University. From modeling the continuum variability and response in\nemission-line profile changes as a function of time, we infer the geometry of\nthe Hbeta- emitting broad line regions to be thick disks that are close to\nface-on to the observer with kinematics that are well-described by either\nelliptical orbits or inflowing gas. We measure the black hole mass to be log\n(MBH) = 7.25 (+/-0.10) for Mrk 335, 7.86 (+0.20, -0.17) for Mrk 1501, 7.84\n(+0.14, -0.19) for 3C 120, and 6.92 (+0.24, -0.23) for PG 2130+099. These black\nhole mass measurements are not based on a particular assumed value of the\nvirial scale factor f, allowing us to compute individual f factors for each\ntarget. Our results nearly double the number of targets that have been modeled\nin this manner, and investigate the properties of a more diverse sample by\nincluding previously modeled objects. We measure an average scale factor f in\nthe entire sample to be log10(f) = 0.54 +/- 0.17 when the line dispersion is\nused to characterize the line width, which is consistent with values derived\nusing the normalization of the MBH-sigma relation. We find that the scale\nfactor f for individual targets is likely correlated with the black hole mass,\ninclination angle, and opening angle of the broad line region but we do not\nfind any correlation with the luminosity.\n",
"title": "The Structure of the Broad-Line Region In Active Galactic Nuclei. II. Dynamical Modeling of Data from the AGN10 Reverberation Mapping Campaign"
}
| null | null | null | null | true | null |
14036
| null |
Default
| null | null |
null |
{
"abstract": " Providing long-range forecasts is a fundamental challenge in time series\nmodeling, which is only compounded by the challenge of having to form such\nforecasts when a time series has never previously been observed. The latter\nchallenge is the time series version of the cold-start problem seen in\nrecommender systems which, to our knowledge, has not been addressed in previous\nwork. A similar problem occurs when a long range forecast is required after\nonly observing a small number of time points --- a warm start forecast. With\nthese aims in mind, we focus on forecasting seasonal profiles---or baseline\ndemand---for periods on the order of a year in three cases: the long range case\nwith multiple previously observed seasonal profiles, the cold start case with\nno previous observed seasonal profiles, and the warm start case with only a\nsingle partially observed profile. Classical time series approaches that\nperform iterated step-ahead forecasts based on previous observations struggle\nto provide accurate long range predictions; in settings with little to no\nobserved data, such approaches are simply not applicable. Instead, we present a\nstraightforward framework which combines ideas from high-dimensional regression\nand matrix factorization on a carefully constructed data matrix. Key to our\nformulation and resulting performance is leveraging (1) repeated patterns over\nfixed periods of time and across series, and (2) metadata associated with the\nindividual series; without this additional data, the cold-start/warm-start\nproblems are nearly impossible to solve. We demonstrate that our framework can\naccurately forecast an array of seasonal profiles on multiple large scale\ndatasets.\n",
"title": "A Unified Framework for Long Range and Cold Start Forecasting of Seasonal Profiles in Time Series"
}
| null | null | null | null | true | null |
14037
| null |
Default
| null | null |
null |
{
"abstract": " AA Tau is the archetype for a class of stars with a peculiar periodic\nphotometric variability thought to be related to a warped inner disk structure\nwith a nearly edge-on viewing geometry. We present high resolution ($\\sim$0.2\")\nALMA observations of the 0.87 and 1.3~mm dust continuum emission from the disk\naround AA Tau. These data reveal an evenly spaced three-ringed emission\nstructure, with distinct peaks at 0.34\", 0.66\", and 0.99\", all viewed at a\nmodest inclination of 59.1$^{\\circ}\\pm$0.3$^{\\circ}$ (decidedly not edge-on).\nIn addition to this ringed substructure, we find non-axisymmetric features\nincluding a `bridge' of emission that connects opposite sides of the innermost\nring. We speculate on the nature of this `bridge' in light of accompanying\nobservations of HCO$^+$ and $^{13}$CO (J=3--2) line emission. The HCO$^+$\nemission is bright interior to the innermost dust ring, with a projected\nvelocity field that appears rotated with respect to the resolved disk geometry,\nindicating the presence of a warp or inward radial flow. We suggest that the\ncontinuum bridge and HCO$^+$ line kinematics could originate from gap-crossing\naccretion streams, which may be responsible for the long-duration dimming of\noptical light from AA Tau.\n",
"title": "A Multi-Ringed, Modestly-Inclined Protoplanetary Disk around AA Tau"
}
| null | null | null | null | true | null |
14038
| null |
Default
| null | null |
null |
{
"abstract": " The paper describes methods for verifying the medical specialty from the user\nprofile of an online community for health-related advice. To avoid critical\nsituations arising from the proliferation of unverified and inaccurate information in\nmedical online communities, it is necessary to develop a comprehensive software\nsolution for verifying the medical specialty of users of an online community for\nhealth-related advice. An algorithm for forming the information profile of a\nmedical online community user is designed. A scheme for forming indicators of\nuser professional specialization based on a training sample is presented. A\nmethod of forming the user information profile of an online community for\nhealth-related advice by computer-linguistic analysis of the information content\nis suggested. A system of indicators based on a training sample of users in\nmedical online communities is formed. A matrix of medical-specialty indicators\nand a method for determining the weight coefficients of these indicators are\ninvestigated. The proposed method of verifying the medical specialty from a user\nprofile is tested in an online medical community.\n",
"title": "Verifying the Medical Specialty from User Profile of Online Community for Health-Related Advices"
}
| null | null | null | null | true | null |
14039
| null |
Default
| null | null |
null |
{
"abstract": " Objects moving in fluids experience patterns of stress on their surfaces\ndetermined by their motion and the geometry of nearby boundaries. Fish and\nunderwater robots can use these patterns for navigation. This paper extends\nthis stress-based navigation to microscopic robots in tiny vessels, where\nrobots can exploit the physics of fluids at low Reynolds number. This applies,\nfor instance, in vessels with sizes and flow speeds comparable to those of\ncapillaries in biological tissues. We describe how a robot can use simple\ncomputations to estimate its motion, orientation and distance to nearby vessel\nwalls from fluid-induced stresses on its surface. Numerically evaluating these\nestimates for a variety of vessel sizes and robot positions shows they are most\naccurate when robots are close to vessel walls.\n",
"title": "Stress-Based Navigation for Microscopic Robots in Viscous Fluids"
}
| null | null | null | null | true | null |
14040
| null |
Default
| null | null |
null |
{
"abstract": " Given a real number $ \\beta > 1$, we study the associated $ (-\\beta)$-shift\nintroduced by S. Ito and T. Sadahiro. We compare some aspects of the\n$(-\\beta)$-shift to the $\\beta$-shift. When the expansion in base $ -\\beta $ of\n$ -\\frac{\\beta}{\\beta+1} $ is periodic with odd period or when $ \\beta $ is\nstrictly less than the golden ratio, the $ (-\\beta)$-shift, as defined by S.\nIto and T. Sadahiro, cannot be coded because its language is not transitive.\nThis intransitivity of words explains the existence of gaps in the interval. We\nobserve that an intransitive word appears in the $(-\\beta)$-expansion of a real\nnumber taken in a gap. Furthermore, we determine the zeta function\n$\\zeta_{-\\beta}$ of the $(-\\beta)$-transformation and the associated\nlap-counting function $L_{T_{-\\beta}}$. These two functions are related by $\n\\zeta_{-\\beta}=(1-z^2)L_{T_{-\\beta}}$. We observe some similarities with the\nzeta function of the $\\beta$-transformation. The function $\\zeta_{-\\beta}$ is\nmeromorphic in the unit disk, is holomorphic in the open disk $ \\{z : |z| <\n\\frac{1}{\\beta} \\}$, has a simple pole at $ \\frac{1}{\\beta}$ and no other\nsingularities $ z $ such that $|z| = \\frac{1}{\\beta}$. We also note an\ninfluence of the gaps ($\\beta$ less than the golden ratio) on the zeta function. In\nthe factors of the denominator of $\\zeta_{-\\beta}$, the coefficients count the\nwords generating gaps.\n",
"title": "The $(-β)$-shift and associated Zeta Function"
}
| null | null | null | null | true | null |
14041
| null |
Default
| null | null |
null |
{
"abstract": " We derive the expressions for configurational forces in Kohn-Sham density\nfunctional theory, which correspond to the generalized variational force\ncomputed as the derivative of the Kohn-Sham energy functional with respect to\nthe position of a material point $\\textbf{x}$. These configurational forces\nthat result from the inner variations of the Kohn-Sham energy functional\nprovide a unified framework to compute atomic forces as well as stress tensor\nfor geometry optimization. Importantly, owing to the variational nature of the\nformulation, these configurational forces inherently account for the Pulay\ncorrections. The formulation presented in this work treats both pseudopotential\nand all-electron calculations in single framework, and employs a local\nvariational real-space formulation of Kohn-Sham DFT expressed in terms of the\nnon-orthogonal wavefunctions that is amenable to reduced-order scaling\ntechniques. We demonstrate the accuracy and performance of the proposed\nconfigurational force approach on benchmark all-electron and pseudopotential\ncalculations conducted using higher-order finite-element discretization. To\nthis end, we examine the rates of convergence of the finite-element\ndiscretization in the computed forces and stresses for various materials\nsystems, and, further, verify the accuracy from finite-differencing the energy.\nWherever applicable, we also compare the forces and stresses with those\nobtained from Kohn-Sham DFT calculations employing plane-wave basis\n(pseudopotential calculations) and Gaussian basis (all-electron calculations).\nFinally, we verify the accuracy of the forces on large materials systems\ninvolving a metallic aluminum nanocluster containing 666 atoms and an alkane\nchain containing 902 atoms, where the Kohn-Sham electronic ground state is\ncomputed using a reduced-order scaling subspace projection technique (P.\nMotamarri and V. Gavini, Phys. Rev. B 90, 115127).\n",
"title": "Configurational forces in electronic structure calculations using Kohn-Sham density functional theory"
}
| null | null | null | null | true | null |
14042
| null |
Default
| null | null |
null |
{
"abstract": " Real-time safety analysis has become a hot research topic as it can more\naccurately reveal the relationships between real-time traffic characteristics\nand crash occurrence, and these results could be applied to improve active\ntraffic management systems and enhance safety performance. Most of the previous\nstudies have been applied to freeways and seldom to arterials. This study\nattempts to examine the relationship between crash occurrence and real-time\ntraffic and weather characteristics based on four urban arterials in Central\nFlorida. Considering the substantial difference between the interrupted urban\narterials and the access controlled freeways, the adaptive signal phasing data\nwas introduced in addition to the traditional traffic data. Bayesian\nconditional logistic models were developed by incorporating the Bluetooth,\nadaptive signal control, and weather data, which were extracted for a period of\n20 minutes (four 5-minute intervals) before the time of crash occurrence. Model\ncomparison results indicated that the model based on 5-10 minute interval\ndataset performs the best. It revealed that the average speed, upstream\nleft-turn volume, downstream green ratio, and rainy indicator were found to\nhave significant effects on crash occurrence. Furthermore, both Bayesian random\nparameters logistic and Bayesian random parameters conditional logistic models\nwere developed to compare with the Bayesian conditional logistic model, and the\nBayesian random parameters conditional logistic model was found to have the\nbest model performance in terms of the AUC and DIC values. These results are\nimportant in real-time safety applications in the context of Integrated Active\nTraffic Management.\n",
"title": "Utilizing Bluetooth and Adaptive Signal Control Data for Urban Arterials Safety Analysis"
}
| null | null | null | null | true | null |
14043
| null |
Default
| null | null |
null |
{
"abstract": " We show that a conformal anomaly in Weyl/Dirac semimetals generates a bulk\nelectric current perpendicular to a temperature gradient and the direction of a\nbackground magnetic field. The associated conductivity of this novel\ncontribution to the Nernst effect is fixed by a beta function associated with\nthe electric charge renormalization in the material.\n",
"title": "A Nernst current from the conformal anomaly in Dirac and Weyl semimetals"
}
| null | null | null | null | true | null |
14044
| null |
Default
| null | null |
null |
{
"abstract": " Speaker recognition performance in emotional talking environments is not as\nhigh as it is in neutral talking environments. This work focuses on proposing,\nimplementing, and evaluating a new approach to enhance the performance in\nemotional talking environments. The new proposed approach is based on\nidentifying the unknown speaker using both his/her gender and emotion cues.\nBoth Hidden Markov Models (HMMs) and Suprasegmental Hidden Markov Models\n(SPHMMs) have been used as classifiers in this work. This approach has been\ntested on our collected emotional speech database which is composed of six\nemotions. The results of this work show that speaker identification performance\nbased on using both gender and emotion cues is higher than that based on using\ngender cues only, emotion cues only, and neither gender nor emotion cues by\n7.22%, 4.45%, and 19.56%, respectively. This work also shows that the optimum\nspeaker identification performance takes place when the classifiers are\ncompletely biased towards suprasegmental models and no impact of acoustic\nmodels in the emotional talking environments. The achieved average speaker\nidentification performance based on the new proposed approach falls within\n2.35% of that obtained in subjective evaluation by human judges.\n",
"title": "Employing both Gender and Emotion Cues to Enhance Speaker Identification Performance in Emotional Talking Environments"
}
| null | null | null | null | true | null |
14045
| null |
Default
| null | null |
null |
{
"abstract": " UV absorption studies with FUSE have observed H2 molecular gas in translucent\nand diffuse clouds. Observations of the 158 micron [C II] fine structure line\nwith Herschel also trace the same H2 molecular gas in emission. We present [C\nII] observations along 27 lines of sight (LOSs) towards target stars, of which\n25 have FUSE H2 UV absorption. We detect [C II] emission features in all but\none target LOS. For three target LOSs, which are close to the Galactic plane,\nwe also present position-velocity maps of [C II] emission observed by HIFI in\non-the-fly spectral line mapping. We use the velocity resolved [C II] spectra\ntowards the target LOSs observed by FUSE to identify [C II] velocity components\nassociated with the H2 clouds. We analyze the observed velocity integrated [C\nII] spectral line intensities in terms of the densities and thermal pressures\nin the H2 gas using the H2 column densities and temperatures measured by the UV\nabsorption data. We present the H2 gas densities and thermal pressures for 26\ntarget LOSs and from the [C II] intensities derive a mean thermal pressure in\nthe range 6100 to 7700 K cm^-3 in diffuse H2 clouds. We discuss the thermal\npressures and densities towards 14 targets, comparing them to results obtained\nusing the UV absorption data for two other tracers, CI and CO.\n",
"title": "Thermal Pressure in Diffuse H2 Gas Measured by Herschel [C II] Emission and FUSE UV H2 Absorption"
}
| null | null |
[
"Physics"
] | null | true | null |
14046
| null |
Validated
| null | null |
null |
{
"abstract": " As radio telescopes become more sensitive, the damaging effects of radio\nfrequency interference (RFI) become more apparent. Near radio telescope arrays,\nRFI sources are often easily removed or replaced; the challenge lies in\nidentifying them. Transient (impulsive) RFI is particularly difficult to\nidentify. We propose a novel dictionary-based approach to transient RFI\nidentification. RFI events are treated as sequences of sub-events, drawn from\nparticular labelled classes. We demonstrate an automated method of extracting\nand labelling sub-events using a dataset of transient RFI. A dictionary of\nlabels may be used in conjunction with hidden Markov models to identify the\nsources of RFI events reliably. We attain improved classification accuracy over\ntraditional approaches such as SVMs or a naïve kNN classifier. Finally, we\ninvestigate why transient RFI is difficult to classify. We show that cluster\nseparation in the principal components domain is influenced by the mains supply\nphase for certain sources.\n",
"title": "A Dictionary Approach to Identifying Transient RFI"
}
| null | null |
[
"Physics"
] | null | true | null |
14047
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the eternal inflation scenario of the slow-roll/chaotic type with\nthe additional element of an objective collapse of the wave function. The\nincorporation of this new agent to the traditional inflationary setting might\nrepresent a possible solution to the quantum measurement problem during\ninflation, a subject that has not reached a consensus among the community.\nSpecifically, it could provide an explanation for the generation of the\nprimordial anisotropies and inhomogeneities, starting from a perfectly\nsymmetric background and invoking symmetric dynamics. We adopt the continuous\nspontaneous localization model, in the context of inflation, as the dynamical\nreduction mechanism that generates the primordial inhomogeneities. Furthermore,\nwhen enforcing the objective reduction mechanism, the condition for eternal\ninflation can be bypassed. In particular, the collapse mechanism incites the\nwave function, corresponding to the inflaton, to localize itself around the\nzero mode of the field. Then the zero mode will evolve essentially unperturbed,\ndriving inflation to an end in any region of the Universe where inflation\noccurred. Also, our approach achieves a primordial spectrum with an amplitude\nand shape consistent with the one that best fits the observational data.\n",
"title": "Eternal inflation and the quantum birth of cosmic structure"
}
| null | null | null | null | true | null |
14048
| null |
Default
| null | null |
null |
{
"abstract": " We present the first exact calculations of the time dependence of causal\ncorrelations in driven nonequilibrium states in (2+1)-dimensional systems using\nholography. Comparing exact results with those obtained from simple prototype\ngeometries that are parametrized only by a time dependent temperature, we find\nthat the universal slowly varying features are controlled just by the pump\nduration and the initial and final temperatures only. We provide numerical\nevidence that the locations of the event and apparent horizons in the dual\ngeometries can be deduced from the nonequilibrium causal correlations without\nany prior knowledge of the dual gravity theory.\n",
"title": "Exact time dependence of causal correlations and nonequilibrium density matrices in holographic systems"
}
| null | null | null | null | true | null |
14049
| null |
Default
| null | null |
null |
{
"abstract": " Rigorous nonequilibrium actions for the many-body problem are usually derived\nby means of path integrals combined with a discrete temporal mesh on the\nSchwinger-Keldysh time contour. The latter suffers from a fundamental\nlimitation: the initial state on this contour cannot be arbitrary, but\nnecessarily needs to be described by a non-interacting density matrix, while\ninteractions are switched on adiabatically. The Kostantinov-Perel' contour\novercomes these and other limitations, allowing generic initial-state\npreparations. In this Article, we apply the technique of the discrete temporal\nmesh to rigorously build the nonequilibrium path integral on the\nKostantinov-Perel' time contour.\n",
"title": "Discrete-time construction of nonequilibrium path integrals on the Kostantinov-Perel' time contour"
}
| null | null |
[
"Physics"
] | null | true | null |
14050
| null |
Validated
| null | null |
null |
{
"abstract": " The first step in statistical reliability studies of coherent systems is the\nestimation of the reliability of each system component. For the cases of\nparallel and series systems the literature is abundant. It seems that the\npresent paper is the first that presents the general case of component\ninferences in coherent systems. The failure time model considered here is the\nthree-parameter Weibull distribution. Furthermore, neither independence nor\nidentically distributed failure times are required restrictions. The proposed\nmodel is general in the sense that it can be used for any coherent system, from\nthe simplest to the most complex structures. It can be considered for all kinds\nof censored data, including interval-censored data. An important property\nobtained for the Weibull model is the fact that the posterior distributions are\nproper, even for non-informative priors. Using several simulations, the\nexcellent performance of the model is illustrated. As a real example, boys'\nfirst use of marijuana is considered to show the efficiency of the solution\neven when censored data occur.\n",
"title": "Estimation of Component Reliability in Coherent Systems"
}
| null | null | null | null | true | null |
14051
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we introduce an online model for communication complexity.\nAnalogous to how online algorithms receive their input piece-by-piece, our\nmodel presents one of the players, Bob, his input piece-by-piece, and has the\nplayers Alice and Bob cooperate to compute a result each time before the next\npiece is revealed to Bob. This model has a closer and more natural\ncorrespondence to dynamic data structures than classic communication models do,\nand hence presents a new perspective on data structures.\nWe first present a tight lower bound for the online set intersection problem\nin the online communication model, demonstrating a general approach for proving\nonline communication lower bounds. The online communication model prevents a\nbatching trick that classic communication complexity allows, and yields a\nstronger lower bound. We then apply the online communication model to prove\ndata structure lower bounds for two dynamic data structure problems: the Group\nRange problem and the Dynamic Connectivity problem for forests. Both of the\nproblems admit a worst case $O(\\log n)$-time data structure. Using online\ncommunication complexity, we prove a tight cell-probe lower bound for each:\nspending $o(\\log n)$ (even amortized) time per operation results in at best an\n$\\exp(-\\delta^2 n)$ probability of correctly answering a\n$(1/2+\\delta)$-fraction of the $n$ queries.\n",
"title": "Cell-Probe Lower Bounds from Online Communication Complexity"
}
| null | null | null | null | true | null |
14052
| null |
Default
| null | null |
null |
{
"abstract": " Due to recent advances - compute, data, models - the role of learning in\nautonomous systems has expanded significantly, rendering new applications\npossible for the first time. While some of the most significant benefits are\nobtained in the perception modules of the software stack, other aspects\ncontinue to rely on known manual procedures based on prior knowledge on\ngeometry, dynamics, kinematics etc. Nonetheless, learning gains relevance in\nthese modules when data collection and curation become easier than manual rule\ndesign. Building on this coarse and broad survey of current research, the final\nsections aim to provide insights into future potentials and challenges as well\nas the necessity of structure in current practical applications.\n",
"title": "On Machine Learning and Structure for Mobile Robots"
}
| null | null | null | null | true | null |
14053
| null |
Default
| null | null |
null |
{
"abstract": " In order to find a way of measuring the degree of incompleteness of an\nincomplete financial market, the rank of the vector price process of the traded\nassets and the dimension of the associated acceptance set are introduced. We\nshow that they are equal and state a variety of consequences.\n",
"title": "On the degree of incompleteness of an incomplete financial market"
}
| null | null | null | null | true | null |
14054
| null |
Default
| null | null |
null |
{
"abstract": " The horseshoe prior has proven to be a noteworthy alternative for sparse\nBayesian estimation, but has previously suffered from two problems. First,\nthere has been no systematic way of specifying a prior for the global shrinkage\nhyperparameter based on the prior information about the degree of sparsity in\nthe parameter vector. Second, the horseshoe prior has the undesired property\nthat there is no possibility of specifying separately information about\nsparsity and the amount of regularization for the largest coefficients, which\ncan be problematic with weakly identified parameters, such as the logistic\nregression coefficients in the case of data separation. This paper proposes\nsolutions to both of these problems. We introduce a concept of effective number\nof nonzero parameters, show an intuitive way of formulating the prior for the\nglobal hyperparameter based on the sparsity assumptions, and argue that the\nprevious default choices are dubious based on their tendency to favor solutions\nwith more unshrunk parameters than we typically expect a priori. Moreover, we\nintroduce a generalization to the horseshoe prior, called the regularized\nhorseshoe, that allows us to specify a minimum level of regularization to the\nlargest values. We show that the new prior can be considered as the continuous\ncounterpart of the spike-and-slab prior with a finite slab width, whereas the\noriginal horseshoe resembles the spike-and-slab with an infinitely wide slab.\nNumerical experiments on synthetic and real world data illustrate the benefit\nof both of these theoretical advances.\n",
"title": "Sparsity information and regularization in the horseshoe and other shrinkage priors"
}
| null | null | null | null | true | null |
14055
| null |
Default
| null | null |
null |
{
"abstract": " Current machine learning techniques proposed to automatically discover a\nrobot kinematics usually rely on a priori information about the robot's\nstructure, sensors properties or end-effector position. This paper proposes a\nmethod to estimate a certain aspect of the forward kinematics model with no\nsuch information. An internal representation of the end-effector configuration\nis generated from unstructured proprioceptive and exteroceptive data flow under\nvery limited assumptions. A mapping from the proprioceptive space to this\nrepresentational space can then be used to control the robot.\n",
"title": "Learning an internal representation of the end-effector configuration space"
}
| null | null | null | null | true | null |
14056
| null |
Default
| null | null |
null |
{
"abstract": " The Siberian Solar Radio Telescope is now being upgraded. The upgrading is\naimed at providing the aperture synthesis imaging in the 4-8 GHz frequency\nrange, instead of the single-frequency direct imaging due to the Earth\nrotation. The first phase of the upgrading is a 48-antenna array - the Siberian\nRadioheliograph. One type of radioheliograph data represents correlation plots.\nIn evaluating the covariance of two-level signals, these plots are sums of\ncomplex correlations, obtained for different antenna pairs. Bearing in mind\nthat correlation of signals from an antenna pair is related to a spatial\nfrequency, we can say that each value of the plot is an integral over a spatial\nspectrum. Limits of the integration are defined by the task. Only high spatial\nfrequencies are integrated to obtain dynamics of compact sources. The whole\nspectrum is integrated to reach maximum sensitivity. We show that the\ncovariance of two-level variables up to Van Vleck correction is a correlation\ncoefficient of these variables.\n",
"title": "Correlation plots of the Siberian radioheliograph"
}
| null | null | null | null | true | null |
14057
| null |
Default
| null | null |
null |
{
"abstract": " We present temperature dependent inelastic neutron scattering measurments,\naccompanied byab-initio calculations of phonon spectra and elastic properties\nas a function of pressure to understand anharmonicity of phonons and to study\nthe mechanism of negative thermal expansion and negative linear compressibility\nbehaviour of ZnAu2(CN)4. The mechanism is identified in terms of specific\nanharmonic modes that involve bending of the Zn(CN)4-Au- Zn(CN)4 linkage. The\nhigh-pressure phase transition at about 2 GPa is also investigated and found to\nbe related to softening of a phonon mode at the L-point at the Brillouin zone\nboundary and its coupling with a zone-centre phonon and an M-point phonon in\nthe ambient pressure phase. Although the phase transition is primarily driven\nby a L-point soft phonon mode, which usually leads to a second order transition\nwith a 2 x 2 x 2 supercell, in the present case the structure is close to an\nelastic instability that leads to a weakly first order transition.\n",
"title": "Anomalous Thermal Expansion, Negative Linear Compressibility and High-Pressure Phase Transition in ZnAu2(CN)4: Neutron Inelastic Scattering and Lattice Dynamics Studies"
}
| null | null | null | null | true | null |
14058
| null |
Default
| null | null |
null |
{
"abstract": " The goal of this study is to test two different computing platforms with\nrespect to their suitability for running deep networks as part of a humanoid\nrobot software system. One of the platforms is the CPU-centered Intel NUC7i7BNH\nand the other is a NVIDIA Jetson TX2 system that puts more emphasis on GPU\nprocessing. The experiments addressed a number of benchmarking tasks including\npedestrian detection using deep neural networks. Some of the results were\nunexpected but demonstrate that platforms exhibit both advantages and\ndisadvantages when taking computational performance and electrical power\nrequirements of such a system into account.\n",
"title": "Comparing Computing Platforms for Deep Learning on a Humanoid Robot"
}
| null | null |
[
"Computer Science"
] | null | true | null |
14059
| null |
Validated
| null | null |
null |
{
"abstract": " Super-resolution fluorescence microscopy, with a resolution beyond the\ndiffraction limit of light, has become an indispensable tool to directly\nvisualize biological structures in living cells at a nanometer-scale\nresolution. Despite advances in high-density super-resolution fluorescent\ntechniques, existing methods still have bottlenecks, including extremely long\nexecution time, artificial thinning and thickening of structures, and lack of\nability to capture latent structures. Here we propose a novel deep learning\nguided Bayesian inference approach, DLBI, for the time-series analysis of\nhigh-density fluorescent images. Our method combines the strength of deep\nlearning and statistical inference, where deep learning captures the underlying\ndistribution of the fluorophores that are consistent with the observed\ntime-series fluorescent images by exploring local features and correlation\nalong time-axis, and statistical inference further refines the ultrastructure\nextracted by deep learning and endues physical meaning to the final image.\nComprehensive experimental results on both real and simulated datasets\ndemonstrate that our method provides more accurate and realistic local patch\nand large-field reconstruction than the state-of-the-art method, the 3B\nanalysis, while our method is more than two orders of magnitude faster. The\nmain program is available at this https URL\n",
"title": "DLBI: Deep learning guided Bayesian inference for structure reconstruction of super-resolution fluorescence microscopy"
}
| null | null | null | null | true | null |
14060
| null |
Default
| null | null |
null |
{
"abstract": " An astonishing fact was established by Lee A. Rubel (1981): there exists a\nfixed non-trivial fourth-order polynomial differential algebraic equation (DAE)\nsuch that for any positive continuous function $\\varphi$ on the reals, and for\nany positive continuous function $\\epsilon(t)$, it has a $\\mathcal{C}^\\infty$\nsolution with $| y(t) - \\varphi(t) | < \\epsilon(t)$ for all $t$. Lee A. Rubel\nprovided an explicit example of such a polynomial DAE. Other examples of\nuniversal DAE have later been proposed by other authors. However, Rubel's DAE\n\\emph{never} has a unique solution, even with a finite number of conditions of\nthe form $y^{(k_i)}(a_i)=b_i$.\nThe question whether one can require the solution that approximates $\\varphi$\nto be the unique solution for a given initial data is a well known open problem\n[Rubel 1981, page 2], [Boshernitzan 1986, Conjecture 6.2]. In this article, we\nsolve it and show that Rubel's statement holds for polynomial ordinary\ndifferential equations (ODEs), and since polynomial ODEs have a unique solution\ngiven an initial data, this positively answers Rubel's open problem. More\nprecisely, we show that there exists a \\textbf{fixed} polynomial ODE such that\nfor any $\\varphi$ and $\\epsilon(t)$ there exists some initial condition that\nyields a solution that is $\\epsilon$-close to $\\varphi$ at all times.\nIn particular, the solution to the ODE is necessarily analytic, and we show\nthat the initial condition is computable from the target function and error\nfunction.\n",
"title": "A Universal Ordinary Differential Equation"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
14061
| null |
Validated
| null | null |
null |
{
"abstract": " Determination of the energy and flux of the gamma photons by Imaging\nAtmospheric Cherenkov Technique is strongly dependent on optical properties of\nthe atmosphere. Therefore, atmospheric monitoring during the future\nobservations of the Cherenkov Telescope Array (CTA) as well as anticipated\nlong-term monitoring in order to characterize overal properties and annual\nvariation of atmospheric conditions are very important. Several instruments are\nalready installed at the CTA sites in order to monitor atmospheric conditions\non long-term. One of them is a Sun/Moon photometer CE318-T, installed at the\nSouthern CTA site. Since the photometer is installed at a place with very\nstable atmospheric conditions, it can be also used for characterization of its\nperformance and testing of new methods of aerosol optical depth (AOD)\nretrieval, cloud-screening and calibration. In this work, we describe our\ncalibration method for nocturnal measurements and the modification of\ncloud-screening for purposes of nocturnal AOD retrieval. We applied these\nmethods on two months of observations and present the distribution of AODs in\nfour photometric passbands together with their uncertainties.\n",
"title": "Sun/Moon photometer for the Cherenkov Telescope Array - first results"
}
| null | null | null | null | true | null |
14062
| null |
Default
| null | null |
null |
{
"abstract": " What can we learn from a connectome? We constructed a simplified model of the\nfirst two stages of the fly visual system, the lamina and medulla. The\nresulting hexagonal lattice convolutional network was trained using\nbackpropagation through time to perform object tracking in natural scene\nvideos. Networks initialized with weights from connectome reconstructions\nautomatically discovered well-known orientation and direction selectivity\nproperties in T4 neurons and their inputs, while networks initialized at random\ndid not. Our work is the first demonstration, that knowledge of the connectome\ncan enable in silico predictions of the functional properties of individual\nneurons in a circuit, leading to an understanding of circuit function from\nstructure alone.\n",
"title": "A Connectome Based Hexagonal Lattice Convolutional Network Model of the Drosophila Visual System"
}
| null | null | null | null | true | null |
14063
| null |
Default
| null | null |
null |
{
"abstract": " In the {claw, diamond}-free edge deletion problem, we are given a graph $G$\nand an integer $k>0$, the question is whether there are at most $k$ edges whose\ndeletion results in a graph without claws and diamonds as induced graphs. Based\non some refined observations, we propose a kernel of $O(k^3)$ vertices and\n$O(k^4)$ edges, significantly improving the previous kernel of $O(k^{12})$\nvertices and $O(k^{24})$ edges. In addition, we derive an $O^*(3.792^k)$-time\nalgorithm for the {claw, diamond}-free edge deletion problem.\n",
"title": "Improved Kernels and Algorithms for Claw and Diamond Free Edge Deletion Based on Refined Observations"
}
| null | null | null | null | true | null |
14064
| null |
Default
| null | null |
null |
{
"abstract": " Caching popular contents at the edge of cellular networks has been proposed\nto reduce the load, and hence the cost of backhaul links. It is significant to\ndecide which files should be cached and where to cache them. In this paper, we\npropose a distributed caching scheme considering the tradeoff between the\ndiversity and redundancy of base stations' cached contents. Whether it is\nbetter to cache the same or different contents in different base stations? To\nfind out this, we formulate an optimal redundancy caching problem. Our goal is\nto minimize the total transmission cost of the network, including cost within\nthe radio access network (RAN) and cost incurred by transmission to the core\nnetwork via backhaul links. The optimal redundancy ratio under given system\nconfiguration is obtained with adapted particle swarm optimization (PSO)\nalgorithm. We analyze the impact of important system parameters through\nMonte-Carlo simulation. Results show that the optimal redundancy ratio is\nmainly influenced by two parameters, which are the backhaul to RAN unit cost\nratio and the steepness of file popularity distribution. The total cost can be\nreduced by up to 54% at given unit cost ratio of backhaul to RAN when the\noptimal redundancy ratio is selected. Under typical file request pattern, the\nreduction amount can be up to 57%.\n",
"title": "Distributed Edge Caching Scheme Considering the Tradeoff Between the Diversity and Redundancy of Cached Content"
}
| null | null | null | null | true | null |
14065
| null |
Default
| null | null |
null |
{
"abstract": " We introduce orbital graphs and discuss some of their basic properties. Then\nwe focus on their usefulness for search algorithms for permutation groups,\nincluding finding the intersection of groups and the stabilizer of sets in a\ngroup.\n",
"title": "Orbital Graphs"
}
| null | null | null | null | true | null |
14066
| null |
Default
| null | null |
null |
{
"abstract": " We prove the existence of singular harmonic ${\\bf Z}_2$ spinors on\n$3$-manifolds with $b_1 > 1$. The proof relies on a wall-crossing formula for\nsolutions to the Seiberg-Witten equation with two spinors. The existence of\nsingular harmonic ${\\bf Z}_2$ spinors and the shape of our wall-crossing\nformula shed new light on recent observations made by Joyce regarding Donaldson\nand Segal's proposal for counting $G_2$-instantons.\n",
"title": "On the existence of harmonic $\\mathbf{Z}_2$ spinors"
}
| null | null | null | null | true | null |
14067
| null |
Default
| null | null |
null |
{
"abstract": " Childhood obesity is associated with increased morbidity and mortality in\nadulthood, leading to substantial healthcare cost. There is an urgent need to\npromote early prevention and develop an accompanying surveillance system. In\nthis paper, we make use of electronic health records (EHRs) and construct a\npenalized multi-level generalized linear model. The model provides regular\ntrend and outlier information simultaneously, both of which may be useful to\nraise public awareness and facilitate targeted intervention. Our strategy is to\ndecompose the regional contribution in the model into smooth and sparse\nsignals, where the characteristics of the signals are encouraged by the\ncombination of fusion and sparse penalties imposed on the likelihood function.\nIn addition, we introduce a weighting scheme to account for the missingness and\npotential non-representativeness arising from the EHRs data. We propose a novel\nalternating minimization algorithm, which is computationally efficient, easy to\nimplement, and guarantees convergence. Simulation shows that the proposed\nmethod has a superior performance compared with traditional counterparts.\nFinally, we apply our method to the University of Wisconsin Population Health\nInformation Exchange database.\n",
"title": "Penalty-based spatial smoothing and outlier detection for childhood obesity surveillance from electronic health records"
}
| null | null | null | null | true | null |
14068
| null |
Default
| null | null |
null |
{
"abstract": " The basic reproduction number ($R_0$) is a threshold parameter for disease\nextinction or survival in isolated populations. However no human population is\nfully isolated from other human or animal populations. We use compartmental\nmodels to derive simple rules for the basic reproduction number for populations\nwith local person-to-person transmission and exposure from some other source:\neither a reservoir exposure or imported cases. We introduce the idea of a\nreservoir-driven or importation-driven disease: diseases that would become\nextinct in the population of interest without reservoir exposure or imported\ncases (since $R_0<1$), but nevertheless may be sufficiently transmissible that\nmany or most infections are acquired from humans in that population. We show\nthat in the simplest case, $R_0<1$ if and only if the proportion of infections\nacquired from the external source exceeds the disease prevalence and explore\nhow population heterogeneity and the interactions of multiple strains affect\nthis rule. We apply these rules in two cases studies of Clostridium difficile\ninfection and colonisation: C. difficile in the hospital setting accounting for\nimported cases, and C. difficile in the general human population accounting for\nexposure to animal reservoirs. We demonstrate that even the hospital-adapted,\nhighly-transmissible NAP1/RT027 strain of C. difficile had a reproduction\nnumber <1 in a landmark study of hospitalised patients and therefore was\nsustained by colonised and infected admissions to the study hospital. We argue\nthat C. difficile should be considered reservoir-driven if as little as 13.0%\nof transmission can be attributed to animal reservoirs.\n",
"title": "Some simple rules for estimating reproduction numbers in the presence of reservoir exposure or imported cases"
}
| null | null | null | null | true | null |
14069
| null |
Default
| null | null |
null |
{
"abstract": " We demonstrate that an applied electric field causes piezoelectric distortion\nacross single molecular monolayers of oligopeptides. We deposited\nself-assembled monolayers ~1.5 nm high onto smooth gold surfaces. These\nmonolayers exhibit strong piezoelectric response that varies linearly with\napplied bias (1-3V), measured using piezoresponse force microscopy (PFM). The\nresponse is markedly greater than control experiments with rigid alkanethiols\nand correlates with surface spectroscopy and theoretical predictions of\nconformational change from applied electric fields. Unlike existing\npiezoelectric oxides, our peptide monolayers are intrinsically flexible, easily\nfabricated, aligned and patterned without poling.\n",
"title": "Self-Assembled Monolayer Piezoelectrics: Electric-Field Driven Conformational Changes"
}
| null | null | null | null | true | null |
14070
| null |
Default
| null | null |
null |
{
"abstract": " Matrix divisors are introduced in the work by A.Weil (1938) which is\nconsidered as a starting point of the theory of holomorphic vector bundles on\nRiemann surfaces. In this theory matrix divisors play the role similar to the\nrole of usual divisors in the theory of line bundles. Moreover, they provide\nexplicit coordinates (Tyurin parameters) in an open subset of the moduli space\nof stable vector bundles. These coordinates turned out to be helpful in\nintegration of soliton equations.\nWe would like to gain attention to one more relationship between matrix\ndivisors of vector G-bundles (where G is a complex semi-simple Lie group) and\nthe theory of integrable systems, namely to the relationship with Lax operator\nalgebras. The result we obtain can be briefly formulated as follows: the moduli\nspace of matrix divisors with certain discrete invariants and fixed support is\na homogeneous space. Its tangent space at the unit is naturally isomorphic to\nthe quotient space of M-operators by L-operators, both spaces essentially\ndefined by the same invariants (the result goes back to Krichever, 2001). We\ngive one more description of the same space in terms of root systems.\n",
"title": "Matrix divisors on Riemann surfaces and Lax operator algebras"
}
| null | null |
[
"Mathematics"
] | null | true | null |
14071
| null |
Validated
| null | null |
null |
{
"abstract": " Pseudogap phase in superconductors continues to be an outstanding puzzle that\ndifferentiates unconventional superconductors from the conventional ones\n(BCS-superconductors). Employing high resolution photoemission spectroscopy on\na highly dense conventional superconductor, MgB2, we discover an interesting\nscenario. While the spectral evolution close to the Fermi energy is\ncommensurate to BCS descriptions as expected, the spectra in the wider energy\nrange reveal emergence of a pseudogap much above the superconducting transition\ntemperature indicating apparent departure from the BCS scenario. The energy\nscale of the pseudogap is comparable to the energy of E2g phonon mode\nresponsible for superconductivity in MgB2 and the pseudogap can be attributed\nto the effect of electron-phonon coupling on the electronic structure. These\nresults reveal a scenario of the emergence of the superconducting gap within an\nelectron-phonon coupling induced pseudogap.\n",
"title": "Observation of pseudogap in MgB2"
}
| null | null | null | null | true | null |
14072
| null |
Default
| null | null |
null |
{
"abstract": " Through a direct comparison of specific heat and magneto-resistance we\ncritically asses the nature of superconducting fluctuations in the same\nnano-gram crystal of SmFeAs(O, F). We show that although the superconducting\nfluctuation contribution to conductivity scales well within the 2D-LLL scheme\nits predictions contrast the inherently 3D nature of SmFeAs(O, F) in the\nvicinity T_{c}. Furthermore the transition seen in specific heat cannot be\nsatisfactory described either by the LLL or the XY scaling. Additionally we\nhave validated, through comparing Hc2 values obtained from the entropy\nconservation construction (Hab=-19.5 T/K and Hab=-2.9 T/K), the analysis of\nfluctuation contribution to conductivity as a reasonable method for estimating\nthe Hc2 slope.\n",
"title": "Critical fields and fluctuations determined from specific heat and magnetoresistance in the same nanogram SmFeAs(O,F) single crystal"
}
| null | null | null | null | true | null |
14073
| null |
Default
| null | null |
null |
{
"abstract": " It is of interest to determine the exit angle of a vortex from a\nsuperconducting surface, since this affects the intervortex interactions and\ntheir consequences. Two ways to determine this angle are to image the vortex\nmagnetic fields above the surface, or the vortex core shape at the surface. In\nthis work we evaluate the field h(x, y, z) above a flat superconducting surface\nx, y and the currents J(x,y) at that surface for a straight vortex tilted\nrelative to the normal to the surface, for both the isotropic and anisotropic\ncases. In principle, these results can be used to determine the vortex exit\ntilt angle from analyses of magnetic field imaging or density of states data.\n",
"title": "Determining the vortex tilt relative to a superconductor surface"
}
| null | null |
[
"Physics"
] | null | true | null |
14074
| null |
Validated
| null | null |
null |
{
"abstract": " A general methodology is proposed to differentiate the likelihood of\nenergetic-particle-driven instabilities to produce frequency chirping or\nfixed-frequency oscillations. The method employs numerically calculated\neigenstructures and multiple resonance surfaces of a given mode in the presence\nof energetic ion drag and stochasticity (due to collisions and\nmicro-turbulence). Toroidicity-induced, reversed-shear and beta-induced\nAlfven-acoustic eigenmodes are used as examples. Waves measured in experiments\nare characterized and compatibility is found between the proposed criterion\npredictions and the experimental observation or lack of observation of chirping\nbehavior of Alfvenic modes in different tokamaks. It is found that the\nstochastic diffusion due to micro-turbulence can be the dominant energetic\nparticle detuning mechanism near the resonances in many plasma experiments, and\nits strength is the key as to whether chirping solutions are likely to arise.\nThe proposed criterion constitutes a useful predictive tool in assessing\nwhether the nature of the transport for fast ion losses in fusion devices will\nbe dominated by convective or diffusive processes.\n",
"title": "Onset of nonlinear structures due to eigenmode destabilization in tokamak plasmas"
}
| null | null | null | null | true | null |
14075
| null |
Default
| null | null |
null |
{
"abstract": " We have developed a semi-analytic framework to model the large-scale\nevolution of the first Population III (Pop III) stars and the transition to\nmetal-enriched star formation. Our model follows dark matter halos from\ncosmological N-body simulations, utilizing their individual merger histories\nand three-dimensional positions, and applies physically motivated prescriptions\nfor star formation and feedback from Lyman-Werner (LW) radiation, hydrogen\nionizing radiation, and external metal enrichment due to supernovae winds. This\nmethod is intended to complement analytic studies, which do not include\nclustering or individual merger histories, and hydrodynamical cosmological\nsimulations, which include detailed physics, but are computationally expensive\nand have limited dynamic range. Utilizing this technique, we compute the\ncumulative Pop III and metal-enriched star formation rate density (SFRD) as a\nfunction of redshift at $z \\geq 20$. We find that varying the model parameters\nleads to significant qualitative changes in the global star formation history.\nThe Pop III star formation efficiency and the delay time between Pop III and\nsubsequent metal-enriched star formation are found to have the largest impact.\nThe effect of clustering (i.e. including the three-dimensional positions of\nindividual halos) on various feedback mechanisms is also investigated. The\nimpact of clustering on LW and ionization feedback is found to be relatively\nmild in our fiducial model, but can be larger if external metal enrichment can\npromote metal-enriched star formation over large distances.\n",
"title": "Self-consistent semi-analytic models of the first stars"
}
| null | null | null | null | true | null |
14076
| null |
Default
| null | null |
null |
{
"abstract": " Research on how hardware imperfections impact security has primarily focused\non side-channel leakage mechanisms produced by power consumption,\nelectromagnetic emanations, acoustic vibrations, and optical emissions.\nHowever, with the proliferation of sensors in security-critical devices, the\nimpact of attacks on sensor-to-microcontroller and microcontroller-to-actuator\ninterfaces using the same channels is starting to become more than an academic\ncuriosity. These out-of-band signal injection attacks target connections which\ntransform physical quantities to analog properties and fundamentally cannot be\nauthenticated, posing previously unexplored security risks. This paper contains\nthe first survey of such out-of-band signal injection attacks, with a focus on\nunifying their terminology, and identifying commonalities in their causes and\neffects. The taxonomy presented contains a chronological, evolutionary, and\nthematic view of out-of-band signal injection attacks which highlights the\ncross-influences that exist and underscores the need for a common language\nirrespective of the method of injection. By placing attack and defense\nmechanisms in the wider context of their dual counterparts of side-channel\nleakage and electromagnetic interference, our paper identifies common threads\nand gaps that can help guide and inform future research. Overall, the\never-increasing reliance on sensors embedded in everyday commodity devices\nnecessitates that a stronger focus be placed on improving the security of such\nsystems against out-of-band signal injection attacks.\n",
"title": "SoK: Taxonomy and Challenges of Out-of-Band Signal Injection Attacks and Defenses"
}
| null | null |
[
"Computer Science"
] | null | true | null |
14077
| null |
Validated
| null | null |
null |
{
"abstract": " Preventable medical errors are estimated to be among the leading causes of\ninjury and death in the United States. To prevent such errors, healthcare\nsystems have implemented patient safety and incident reporting systems. These\nsystems enable clinicians to report unsafe conditions and cases where patients\nhave been harmed due to errors in medical care. These reports are narratives in\nnatural language and while they provide detailed information about the\nsituation, it is non-trivial to perform large scale analysis for identifying\ncommon causes of errors and harm to the patients. In this work, we present a\nmethod based on attentive convolutional and recurrent networks for identifying\nharm events in patient care and categorize the harm based on its severity\nlevel. We demonstrate that our methods can significantly improve the\nperformance over existing methods in identifying harm in clinical care.\n",
"title": "Identifying Harm Events in Clinical Care through Medical Narratives"
}
| null | null |
[
"Computer Science"
] | null | true | null |
14078
| null |
Validated
| null | null |
null |
{
"abstract": " Person re-identification (Re-ID) usually suffers from noisy samples with\nbackground clutter and mutual occlusion, which makes it extremely difficult to\ndistinguish different individuals across the disjoint camera views. In this\npaper, we propose a novel deep self-paced learning (DSPL) algorithm to\nalleviate this problem, in which we apply a self-paced constraint and symmetric\nregularization to help the relative distance metric training the deep neural\nnetwork, so as to learn the stable and discriminative features for person\nRe-ID. Firstly, we propose a soft polynomial regularizer term which can derive\nthe adaptive weights to samples based on both the training loss and model age.\nAs a result, the high-confidence fidelity samples will be emphasized and the\nlow-confidence noisy samples will be suppressed at early stage of the whole\ntraining process. Such a learning regime is naturally implemented under a\nself-paced learning (SPL) framework, in which samples weights are adaptively\nupdated based on both model age and sample loss using an alternative\noptimization method. Secondly, we introduce a symmetric regularizer term to\nrevise the asymmetric gradient back-propagation derived by the relative\ndistance metric, so as to simultaneously minimize the intra-class distance and\nmaximize the inter-class distance in each triplet unit. Finally, we build a\npart-based deep neural network, in which the features of different body parts\nare first discriminately learned in the lower convolutional layers and then\nfused in the higher fully connected layers. Experiments on several benchmark\ndatasets have demonstrated the superior performance of our method as compared\nwith the state-of-the-art approaches.\n",
"title": "Deep Self-Paced Learning for Person Re-Identification"
}
| null | null | null | null | true | null |
14079
| null |
Default
| null | null |
null |
{
"abstract": " This paper illustrates how to calculate the moments and cumulants of the\ntwo-stage Mann-Whitney statistic. These results may be used to calculate the\nasymptotic critical values of the two-stage Mann-Whitney test. In this paper, a\nlarge amount of deductions will be showed.\n",
"title": "Moments and Cumulants of The Two-Stage Mann-Whitney Statistic"
}
| null | null | null | null | true | null |
14080
| null |
Default
| null | null |
null |
{
"abstract": " We present new large samples of Galactic Cepheids and RR Lyrae stars from the\nOGLE Galaxy Variability Survey.\n",
"title": "OGLE Cepheids and RR Lyrae Stars in the Milky Way"
}
| null | null | null | null | true | null |
14081
| null |
Default
| null | null |
null |
{
"abstract": " Kristensen and Mele (2011) developed a new approach to obtain closed-form\napproximations to continuous-time derivatives pricing models. The approach uses\na power series expansion of the pricing bias between an intractable model and\nsome known auxiliary model. Since the resulting approximation formula has\nclosed-form it is straightforward to obtain approximations of greeks. In this\nthesis I will introduce Kristensen and Mele's methods and apply it to a variety\nof stochastic volatility models of European style options as well as a model\nfor commodity futures. The focus of this thesis is the effect of different\nmodel choices and different model parameter values on the numerical stability\nof Kristensen and Mele's approximation.\n",
"title": "Closed-form approximations in derivatives pricing: The Kristensen-Mele approach"
}
| null | null | null | null | true | null |
14082
| null |
Default
| null | null |
null |
{
"abstract": " We study Leinster's notion of magnitude for a compact metric space. For a\nsmooth, compact domain $X\\subset \\mathbb{R}^{2m-1}$, we find geometric\nsignificance in the function $\\mathcal{M}_X(R) = \\mathrm{mag}(R\\cdot X)$. The\nfunction $\\mathcal{M}_X$ extends from the positive half-line to a meromorphic\nfunction in the complex plane. Its poles are generalized scattering resonances.\nIn the semiclassical limit $R \\to \\infty$, $\\mathcal{M}_X$ admits an asymptotic\nexpansion. The three leading terms of $\\mathcal{M}_X$ at $R=+\\infty$ are\nproportional to the volume, surface area and integral of the mean curvature. In\nparticular, for convex $X$ the leading terms are proportional to the intrinsic\nvolumes, and we obtain an asymptotic variant of the convex magnitude conjecture\nby Leinster and Willerton, with corrected coefficients.\n",
"title": "On the magnitude function of domains in Euclidean space"
}
| null | null | null | null | true | null |
14083
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we show that the edge connectivity of a distance-regular\ndigraph $\\Gamma$ with valency $k$ is $k$ and for $k>2$, any minimum edge cut of\n$\\Gamma$ is the set of all edges going into (or coming out of) a single vertex.\nMoreover we show that the same result holds for strongly regular digraphs.\nThese results extend the same known results for undirected case with quite\ndifferent proofs.\n",
"title": "Minimum edge cuts of distance-regular and strongly regular digraphs"
}
| null | null | null | null | true | null |
14084
| null |
Default
| null | null |
null |
{
"abstract": " Expertise of annotators has a major role in crowdsourcing based opinion\naggregation models. In such frameworks, accuracy and biasness of annotators are\noccasionally taken as important features and based on them priority of the\nannotators are assigned. But instead of relying on a single feature, multiple\nfeatures can be considered and separate rankings can be produced to judge the\nannotators properly. Finally, the aggregation of those rankings with perfect\nweightage can be done with an aim to produce better ground truth prediction.\nHere, we propose a novel weighted rank aggregation method and its efficacy with\nrespect to other existing approaches is shown on artificial dataset. The\neffectiveness of weighted rank aggregation to enhance quality prediction is\nalso shown by applying it on an Amazon Mechanical Turk (AMT) dataset.\n",
"title": "Quality Enhancement by Weighted Rank Aggregation of Crowd Opinion"
}
| null | null | null | null | true | null |
14085
| null |
Default
| null | null |
null |
{
"abstract": " We rigorously derive a Kirchhoff plate theory, via $\\Gamma$-convergence, from\na three-di\\-men\\-sio\\-nal model that describes the finite elasticity of an\nelastically heterogeneous, thin sheet. The heterogeneity in the elastic\nproperties of the material results in a spontaneous strain that depends on both\nthe thickness and the plane variables $x'$. At the same time, the spontaneous\nstrain is $h$-close to the identity, where $h$ is the small parameter\nquantifying the thickness. The 2D Kirchhoff limiting model is constrained to\nthe set of isometric immersions of the mid-plane of the plate into\n$\\mathbb{R}^3$, with a corresponding energy that penalizes deviations of the\ncurvature tensor associated with a deformation from a $x'$-dependent target\ncurvature tensor. A discussion on the 2D minimizers is provided in the case\nwhere the target curvature tensor is piecewise constant. Finally, we apply the\nderived plate theory to the modeling of swelling-induced shape changes in\nheterogeneous thin gel sheets.\n",
"title": "Heterogeneous elastic plates with in-plane modulation of the target curvature and applications to thin gel sheets"
}
| null | null | null | null | true | null |
14086
| null |
Default
| null | null |
null |
{
"abstract": " The advent of microcontrollers with enough CPU power and with analog and\ndigital peripherals makes possible to design a complete particle detector with\nrelative acquisition system around one microcontroller chip. The existence of a\nworld wide data infrastructure as internet allows for devising a distributed\nnetwork of cheap detectors capable to elaborate and send data or respond to\nsettings commands. The internet infrastructure enables to distribute the\nabsolute time (with precision of few milliseconds), to the simple devices far\napart, with few milliseconds precision, from a few meters to thousands of\nkilometres. So it is possible to create a crowdsourcing experiment of citizen\nscience that use small scintillation-based particle detectors to monitor the\nhigh energetic cosmic ray and the radiation environment.\n",
"title": "An educational distributed Cosmic Ray detector network based on ArduSiPM"
}
| null | null | null | null | true | null |
14087
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we prove that the growth of the Artin conductor is at most,\nexponential in the degree of the character.\n",
"title": "Irreducible characters with bounded root Artin conductor"
}
| null | null | null | null | true | null |
14088
| null |
Default
| null | null |
null |
{
"abstract": " Internet Protocol (IP) addresses are frequently used as a method of locating\nweb users by researchers in several different fields. However, there are\ncompeting reports concerning the accuracy of those locations, and little\nresearch has been done in manually comparing the IP geolocation databases and\nweb page geographic information. This paper categorized web page from the Yahoo\nsearch engine into twelve categories, ranging from 'Blog' and 'News' to\n'Education' and 'Governmental'. Then we manually compared the mailing or street\naddress of the web page's content creator with the geolocation results by the\ngiven IP address. We introduced a cartographic design method by creating kernel\ndensity maps for visualizing the information landscape of web pages associated\nwith specific keywords.\n",
"title": "Mapping Web Pages by Internet Protocol (IP) addresses: Analyzing Spatial and Temporal Characteristics of Web Search Engine Results"
}
| null | null | null | null | true | null |
14089
| null |
Default
| null | null |
null |
{
"abstract": " In this short paper we generalise a theorem due to Kani and Rosen on\ndecomposition of Jacobian varieties of Riemann surfaces with group action. This\ngeneralisation extends the set of Jacobians for which it is possible to obtain\nan isogeny decomposition where all the factors are Jacobians.\n",
"title": "A generalisation of Kani-Rosen decomposition theorem for Jacobian varieties"
}
| null | null |
[
"Mathematics"
] | null | true | null |
14090
| null |
Validated
| null | null |
null |
{
"abstract": " We construct optimal designs for group testing experiments where the goal is\nto estimate the prevalence of a trait by using a test with uncertain\nsensitivity and specificity. Using optimal design theory for approximate\ndesigns, we show that the most efficient design for simultaneously estimating\nthe prevalence, sensitivity and specificity requires three different group\nsizes with equal frequencies. However, if estimating prevalence as accurately\nas possible is the only focus, the optimal strategy is to have three group\nsizes with unequal frequencies. On the basis of a chlamydia study in the\nU.S.A., we compare performances of competing designs and provide insights into\nhow the unknown sensitivity and specificity of the test affect the performance\nof the prevalence estimator. We demonstrate that the locally D- and Ds-optimal\ndesigns proposed have high efficiencies even when the prespecified values of\nthe parameters are moderately misspecified.\n",
"title": "Optimal group testing designs for estimating prevalence with uncertain testing errors"
}
| null | null | null | null | true | null |
14091
| null |
Default
| null | null |
null |
{
"abstract": " Complex oxides exhibit many intriguing phenomena, including metal-insulator\ntransition, ferroelectricity/multiferroicity, colossal magnetoresistance and\nhigh transition temperature superconductivity. Advances in epitaxial thin film\ngrowth techniques enable us to combine different complex oxides with atomic\nprecision and form an oxide heterostructure. Recent theoretical and\nexperimental work has shown that charge transfer across oxide interfaces\ngenerally occurs and leads to a great diversity of emergent interfacial\nproperties which are not exhibited by bulk constituents. In this report, we\nreview mechanisms and physical consequence of charge transfer across interfaces\nin oxide heterostructures. Both theoretical proposals and experimental\nmeasurements of various oxide heterostructures are discussed and compared. We\nalso review the theoretical methods that are used to calculate charge transfer\nacross oxide interfaces and discuss the success and challenges in theory.\nFinally, we present a summary and perspectives for future research.\n",
"title": "Charge transfer driven emergent phenomena in oxide heterostructures"
}
| null | null | null | null | true | null |
14092
| null |
Default
| null | null |
null |
{
"abstract": " In this article we show the duality between tensor networks and undirected\ngraphical models with discrete variables. We study tensor networks on\nhypergraphs, which we call tensor hypernetworks. We show that the tensor\nhypernetwork on a hypergraph exactly corresponds to the graphical model given\nby the dual hypergraph. We translate various notions under duality. For\nexample, marginalization in a graphical model is dual to contraction in the\ntensor network. Algorithms also translate under duality. We show that belief\npropagation corresponds to a known algorithm for tensor network contraction.\nThis article is a reminder that the research areas of graphical models and\ntensor networks can benefit from interaction.\n",
"title": "Duality of Graphical Models and Tensor Networks"
}
| null | null | null | null | true | null |
14093
| null |
Default
| null | null |
null |
{
"abstract": " A generalization of the Emden-Fowler equation is presented and its solutions\nare investigated. This paper is devoted to asymptotic behavior of its\nsolutions. The procedure is entirely based on a previous paper by the author.\n",
"title": "A further generalization of the Emden-Fowler equation"
}
| null | null |
[
"Mathematics"
] | null | true | null |
14094
| null |
Validated
| null | null |
null |
{
"abstract": " Exploiting the deep generative model's remarkable ability of learning the\ndata-manifold structure, some recent researches proposed a geometric data\ninterpolation method based on the geodesic curves on the learned data-manifold.\nHowever, this interpolation method often gives poor results due to a\ntopological difference between the model and the dataset. The model defines a\nfamily of simply-connected manifolds, whereas the dataset generally contains\ndisconnected regions or holes that make them non-simply-connected. To\ncompensate this difference, we propose a novel density regularizer that make\nthe interpolation path circumvent the holes denoted by low probability density.\nWe confirm that our method gives consistently better interpolation results from\nthe experiments with real-world image datasets.\n",
"title": "Data Interpolations in Deep Generative Models under Non-Simply-Connected Manifold Topology"
}
| null | null | null | null | true | null |
14095
| null |
Default
| null | null |
null |
{
"abstract": " Using atomic force microscopy (AFM) we investigated the interaction of\namyloid beta (Ab) (1 42) peptide with chemically modified surfaces in order to\nbetter understand the mechanism of amyloid toxicity, which involves interaction\nof amyloid with cell membrane surfaces. We compared the structure and density\nof Ab fibrils on positively and negatively charged as well as hydrophobic\nchemically modified surfaces at physiologically relevant conditions.\n",
"title": "Effect of Surfaces on Amyloid Fibril Formation"
}
| null | null | null | null | true | null |
14096
| null |
Default
| null | null |
null |
{
"abstract": " In the article, we discuss the architecture of the polynomial neural network\nthat corresponds to the matrix representation of Lie transform. The matrix form\nof Lie transform is an approximation of general solution for the nonlinear\nsystem of ordinary differential equations. Thus, it can be used for simulation\nand modeling task. On the other hand, one can identify dynamical system from\ntime series data simply by optimization of the coefficient matrices of the Lie\ntransform. Representation of the approach by polynomial neural networks\nintegrates the strength of both neural networks and traditional model-based\nmethods for dynamical systems investigation. We provide a theoretical\nexplanation of learning dynamical systems from time series for the proposed\nmethod, as well as demonstrate it in several applications. Namely, we show\nresults of modeling and identification for both well-known systems like\nLotka-Volterra equation and more complicated examples from retail,\nbiochemistry, and accelerator physics.\n",
"title": "Lie Transform Based Polynomial Neural Networks for Dynamical Systems Simulation and Identification"
}
| null | null | null | null | true | null |
14097
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the left-tail asymptotics of deformed Tracy-Widom distribution\nfunctions describing the fluctuations of the largest eigenvalue in invariant\nrandom matrix ensembles after removing each soft edge eigenvalue independently\nwith probability $1-\\gamma\\in[0,1]$. As $\\gamma$ varies, a transition from\nTracy-Widom statistics ($\\gamma=1$) to classical Weibull statistics\n($\\gamma=0$) was observed in the physics literature by Bohigas, de Carvalho,\nand Pato \\cite{BohigasCP:2009}. We provide a description of this transition by\nrigorously computing the leading-order left-tail asymptotics of the thinned\nGOE, GUE and GSE Tracy-Widom distributions. In this paper, we obtain the\nasymptotic behavior in the non-oscillatory region with $\\gamma\\in[0,1)$ fixed\n(for the GOE, GUE, and GSE distributions) and $\\gamma\\uparrow 1$ at a\ncontrolled rate (for the GUE distribution). This is the first step in an\nongoing program to completely describe the transition between Tracy-Widom and\nWeibull statistics. As a corollary to our results, we obtain a new\ntotal-integral formula involving the Ablowitz-Segur solution to the second\nPainlevé equation.\n",
"title": "Large deformations of the Tracy-Widom distribution I. Non-oscillatory asymptotics"
}
| null | null | null | null | true | null |
14098
| null |
Default
| null | null |
null |
{
"abstract": " Background. Test resources are usually limited and therefore it is often not\npossible to completely test an application before a release. To cope with the\nproblem of scarce resources, development teams can apply defect prediction to\nidentify fault-prone code regions. However, defect prediction tends to low\nprecision in cross-project prediction scenarios.\nAims. We take an inverse view on defect prediction and aim to identify\nmethods that can be deferred when testing because they contain hardly any\nfaults due to their code being \"trivial\". We expect that characteristics of\nsuch methods might be project-independent, so that our approach could improve\ncross-project predictions.\nMethod. We compute code metrics and apply association rule mining to create\nrules for identifying methods with low fault risk. We conduct an empirical\nstudy to assess our approach with six Java open-source projects containing\nprecise fault data at the method level.\nResults. Our results show that inverse defect prediction can identify approx.\n32-44% of the methods of a project to have a low fault risk; on average, they\nare about six times less likely to contain a fault than other methods. In\ncross-project predictions with larger, more diversified training sets,\nidentified methods are even eleven times less likely to contain a fault.\nConclusions. Inverse defect prediction supports the efficient allocation of\ntest resources by identifying methods that can be treated with less priority in\ntesting activities and is well applicable in cross-project prediction\nscenarios.\n",
"title": "Too Trivial To Test? An Inverse View on Defect Prediction to Identify Methods with Low Fault Risk"
}
| null | null | null | null | true | null |
14099
| null |
Default
| null | null |
null |
{
"abstract": " We make the case for studying the complexity of approximately simulating\n(sampling) quantum systems for reasons beyond that of quantum computational\nsupremacy, such as diagnosing phase transitions. We consider the sampling\ncomplexity as a function of time $t$ due to evolution generated by spatially\nlocal quadratic bosonic Hamiltonians. We obtain an upper bound on the scaling\nof $t$ with the number of bosons $n$ for which approximate sampling is\nclassically efficient. We also obtain a lower bound on the scaling of $t$ with\n$n$ for which any instance of the boson sampling problem reduces to this\nproblem and hence implies that the problem is hard, assuming the conjectures of\nAaronson and Arkhipov [Proc. 43rd Annu. ACM Symp. Theory Comput. STOC '11].\nThis establishes a dynamical phase transition in sampling complexity. Further,\nwe show that systems in the Anderson-localized phase are always easy to sample\nfrom at arbitrarily long times. We view these results in the light of\nclassifying phases of physical systems based on parameters in the Hamiltonian.\nIn doing so, we combine ideas from mathematical physics and computational\ncomplexity to gain insight into the behavior of condensed matter, atomic,\nmolecular and optical systems.\n",
"title": "Dynamical phase transitions in sampling complexity"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
14100
| null |
Validated
| null | null |