Dataset schema (13 fields per record; the rows below list each record's values in this order):

text: null
inputs: dict ({ "abstract", "title" })
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1 to 5)
metadata: null
status: string (2 classes: "Default", "Validated")
event_timestamp: null
metrics: null
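The flat dump below can be folded back into keyed records by zipping each run of 13 values against the field names above. A minimal sketch in Python, assuming the dump has already been parsed into one Python value per line (`rows_to_record` is a hypothetical helper, and the example row is the first record of the dataset with its abstract shortened for readability):

```python
# Field names in record order, as listed in the schema above.
FIELDS = [
    "text", "inputs", "prediction", "prediction_agent", "annotation",
    "annotation_agent", "multi_label", "explanation", "id", "metadata",
    "status", "event_timestamp", "metrics",
]

def rows_to_record(values):
    """Zip one flat run of 13 values into a keyed record dict."""
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} values, got {len(values)}")
    return dict(zip(FIELDS, values))

# First row of the dump (abstract shortened here for readability).
raw = [
    None,                                   # text
    {"abstract": "We analyze theoretically the Schrodinger-Poisson equation ...",
     "title": "Spatial solitons in thermo-optical media from the nonlinear "
              "Schrodinger-Poisson equation and dark matter analogues"},
    None, None, None, None,                 # prediction .. annotation_agent
    True,                                   # multi_label
    None,                                   # explanation
    "6701",                                 # id
    None,                                   # metadata
    "Default",                              # status
    None, None,                             # event_timestamp, metrics
]
record = rows_to_record(raw)
print(record["id"], record["status"])  # -> 6701 Default
```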
text: null
{ "abstract": " We analyze theoretically the Schrodinger-Poisson equation in two transverse\ndimensions in the presence of a Kerr term. The model describes the nonlinear\npropagation of optical beams in thermooptical media and can be regarded as an\nanalogue system for a self-gravitating self-interacting wave. We compute\nnumerically the family of radially symmetric ground state bright stationary\nsolutions for focusing and defocusing local nonlinearity, keeping in both cases\na focusing nonlocal nonlinearity. We also analyze excited states and\noscillations induced by fixing the temperature at the borders of the material.\nWe provide simulations of soliton interactions, drawing analogies with the\ndynamics of galactic cores in the scalar field dark matter scenario.\n", "title": "Spatial solitons in thermo-optical media from the nonlinear Schrodinger-Poisson equation and dark matter analogues" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6701, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We extend the classic multi-armed bandit (MAB) model to the setting of\nnoncompliance, where the arm pull is a mere instrument and the treatment\napplied may differ from it, which gives rise to the instrument-armed bandit\n(IAB) problem. The IAB setting is relevant whenever the experimental units are\nhuman since free will, ethics, and the law may prohibit unrestricted or forced\napplication of treatment. In particular, the setting is relevant in bandit\nmodels of dynamic clinical trials and other controlled trials on human\ninterventions. Nonetheless, the setting has not been fully investigate in the\nbandit literature. We show that there are various and divergent notions of\nregret in this setting, all of which coincide only in the classic MAB setting.\nWe characterize the behavior of these regrets and analyze standard MAB\nalgorithms. We argue for a particular kind of regret that captures the causal\neffect of treatments but show that standard MAB algorithms cannot achieve\nsublinear control on this regret. Instead, we develop new algorithms for the\nIAB problem, prove new regret bounds for them, and compare them to standard MAB\nalgorithms in numerical examples.\n", "title": "Instrument-Armed Bandits" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6702, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, we propose a new method to tackle the mapping challenge from\ntime-series data to spatial image in the field of seismic exploration, i.e.,\nreconstructing the velocity model directly from seismic data by deep neural\nnetworks (DNNs). The conventional way to address this ill-posed seismic\ninversion problem is through iterative algorithms, which suffer from poor\nnonlinear mapping and strong non-uniqueness. Other attempts may either import\nhuman intervention errors or underuse seismic data. The challenge for DNNs\nmainly lies in the weak spatial correspondence, the uncertain\nreflection-reception relationship between seismic data and velocity model as\nwell as the time-varying property of seismic data. To approach these\nchallenges, we propose an end-to-end Seismic Inversion Networks (SeisInvNet for\nshort) with novel components to make the best use of all seismic data.\nSpecifically, we start with every seismic trace and enhance it with its\nneighborhood information, its observation setup and global context of its\ncorresponding seismic profile. Then from enhanced seismic traces, the spatially\naligned feature maps can be learned and further concatenated to reconstruct\nvelocity model. In general, we let every seismic trace contribute to the\nreconstruction of the whole velocity model by finding spatial correspondence.\nThe proposed SeisInvNet consistently produces improvements over the baselines\nand achieves promising performance on our proposed SeisInv dataset according to\nvarious evaluation metrics, and the inversion results are more consistent with\nthe target from the aspects of velocity value, subsurface structure and\ngeological interface. In addition to the superior performance, the mechanism is\nalso carefully discussed, and some potential problems are identified for\nfurther study.\n", "title": "Deep learning Inversion of Seismic Data" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 6703, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We propose NOPOL, an approach to automatic repair of buggy conditional\nstatements (i.e., if-then-else statements). This approach takes a buggy program\nas well as a test suite as input and generates a patch with a conditional\nexpression as output. The test suite is required to contain passing test cases\nto model the expected behavior of the program and at least one failing test\ncase that reveals the bug to be repaired. The process of NOPOL consists of\nthree major phases. First, NOPOL employs angelic fix localization to identify\nexpected values of a condition during the test execution. Second, runtime trace\ncollection is used to collect variables and their actual values, including\nprimitive data types and objected-oriented features (e.g., nullness checks), to\nserve as building blocks for patch generation. Third, NOPOL encodes these\ncollected data into an instance of a Satisfiability Modulo Theory (SMT)\nproblem, then a feasible solution to the SMT instance is translated back into a\ncode patch. We evaluate NOPOL on 22 real-world bugs (16 bugs with buggy IF\nconditions and 6 bugs with missing preconditions) on two large open-source\nprojects, namely Apache Commons Math and Apache Commons Lang. Empirical\nanalysis on these bugs shows that our approach can effectively fix bugs with\nbuggy IF conditions and missing preconditions. We illustrate the capabilities\nand limitations of NOPOL using case studies of real bug fixes.\n", "title": "Nopol: Automatic Repair of Conditional Statement Bugs in Java Programs" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6704, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Parametric geometry of numbers is a new theory, recently created by Schmidt\nand Summerer, which unifies and simplifies many aspects of classical\nDiophantine approximations, providing a handle on problems which previously\nseemed out of reach. Our goal is to transpose this theory to fields of rational\nfunctions in one variable and to analyze in that context the problem of\nsimultaneous approximation to exponential functions.\n", "title": "Parametric geometry of numbers in function fields" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true, explanation: null, id: 6705, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " A study of the intersection theory on the moduli space of Riemann surfaces\nwith boundary was recently initiated in a work of R. Pandharipande, J. P.\nSolomon and the third author, where they introduced open intersection numbers\nin genus 0. Their construction was later generalized to all genera by J. P.\nSolomon and the third author. In this paper we consider a refinement of the\nopen intersection numbers by distinguishing contributions from surfaces with\ndifferent numbers of boundary components, and we calculate all these numbers.\nWe then construct a matrix model for the generating series of the refined open\nintersection numbers and conjecture that it is equivalent to the\nKontsevich-Penner matrix model. An evidence for the conjecture is presented.\nAnother refinement of the open intersection numbers, which describes the\ndistribution of the boundary marked points on the boundary components, is also\ndiscussed.\n", "title": "Refined open intersection numbers and the Kontsevich-Penner matrix model" }
prediction: null, prediction_agent: null, annotation: [ "Mathematics" ], annotation_agent: null, multi_label: true, explanation: null, id: 6706, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " An SEIRS epidemic with disease fatalities is introduced in a growing\npopulation (modelled as a super-critical linear birth and death process). The\nstudy of the initial phase of the epidemic is stochastic, while the analysis of\nthe major outbreaks is deterministic. Depending on the values of the\nparameters, the following scenarios are possible. i) The disease dies out\nquickly, only infecting few; ii) the epidemic takes off, the \\textit{number} of\ninfected individuals grows exponentially, but the \\textit{fraction} of infected\nindividuals remains negligible; iii) the epidemic takes off, the\n\\textit{number} of infected grows initially quicker than the population, the\ndisease fatalities diminish the growth rate of the population, but it remains\nsuper critical, and the \\emph{fraction} of infected go to an endemic\nequilibrium; iv) the epidemic takes off, the \\textit{number} of infected\nindividuals grows initially quicker than the population, the diseases\nfatalities turn the exponential growth of the population to an exponential\ndecay.\n", "title": "SEIRS epidemics in growing populations" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 6707, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " This paper studies a new type of 3D bin packing problem (BPP), in which a\nnumber of cuboid-shaped items must be put into a bin one by one orthogonally.\nThe objective is to find a way to place these items that can minimize the\nsurface area of the bin. This problem is based on the fact that there is no\nfixed-sized bin in many real business scenarios and the cost of a bin is\nproportional to its surface area. Based on previous research on 3D BPP, the\nsurface area is determined by the sequence, spatial locations and orientations\nof items. It is a new NP-hard combinatorial optimization problem on\nunfixed-sized bin packing, for which we propose a multi-task framework based on\nSelected Learning, generating the sequence and orientations of items packed\ninto the bin simultaneously. During training steps, Selected Learning chooses\none of loss functions derived from Deep Reinforcement Learning and Supervised\nLearning corresponding to the training procedure. Numerical results show that\nthe method proposed significantly outperforms Lego baselines by a substantial\ngain of 7.52%. Moreover, we produce large scale 3D Bin Packing order data set\nfor studying bin packing problems and will release it to the research\ncommunity.\n", "title": "A Multi-task Selected Learning Approach for Solving New Type 3D Bin Packing Problem" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6708, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We describe preliminary investigations of using Docker for the deployment and\ntesting of astronomy software. Docker is a relatively new containerisation\ntechnology that is developing rapidly and being adopted across a range of\ndomains. It is based upon virtualization at operating system level, which\npresents many advantages in comparison to the more traditional hardware\nvirtualization that underpins most cloud computing infrastructure today. A\nparticular strength of Docker is its simple format for describing and managing\nsoftware containers, which has benefits for software developers, system\nadministrators and end users.\nWe report on our experiences from two projects -- a simple activity to\ndemonstrate how Docker works, and a more elaborate set of services that\ndemonstrates more of its capabilities and what they can achieve within an\nastronomical context -- and include an account of how we solved problems\nthrough interaction with Docker's very active open source development\ncommunity, which is currently the key to the most effective use of this\nrapidly-changing technology.\n", "title": "Use of Docker for deployment and testing of astronomy software" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6709, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We propose a method, called Label Embedding Network, which can learn label\nrepresentation (label embedding) during the training process of deep networks.\nWith the proposed method, the label embedding is adaptively and automatically\nlearned through back propagation. The original one-hot represented loss\nfunction is converted into a new loss function with soft distributions, such\nthat the originally unrelated labels have continuous interactions with each\nother during the training process. As a result, the trained model can achieve\nsubstantially higher accuracy and with faster convergence speed. Experimental\nresults based on competitive tasks demonstrate the effectiveness of the\nproposed method, and the learned label embedding is reasonable and\ninterpretable. The proposed method achieves comparable or even better results\nthan the state-of-the-art systems. The source code is available at\n\\url{this https URL}.\n", "title": "Label Embedding Network: Learning Label Representation for Soft Training of Deep Networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6710, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The current fleet of space-based solar observatories offers us a wealth of\nopportunities to study solar flares over a range of wavelengths. Significant\nadvances in our understanding of flare physics often come from coordinated\nobservations between multiple instruments. Consequently, considerable efforts\nhave been, and continue to be made to coordinate observations among instruments\n(e.g. through the Max Millennium Program of Solar Flare Research). However,\nthere has been no study to date that quantifies how many flares have been\nobserved by combinations of various instruments. Here we describe a technique\nthat retrospectively searches archival databases for flares jointly observed by\nRHESSI, SDO/EVE (MEGS-A and -B), Hinode/(EIS, SOT, and XRT), and IRIS. Out of\nthe 6953 flares of GOES magnitude C1 or greater that we consider over the 6.5\nyears after the launch of SDO, 40 have been observed by six or more instruments\nsimultaneously. Using each instrument's individual rate of success in observing\nflares, we show that the numbers of flares co-observed by three or more\ninstruments are higher than the number expected under the assumption that the\ninstruments operated independently of one another. In particular, the number of\nflares observed by larger numbers of instruments is much higher than expected.\nOur study illustrates that these missions often acted in cooperation, or at\nleast had aligned goals. We also provide details on an interactive widget now\navailable in SSWIDL that allows a user to search for flaring events that have\nbeen observed by a chosen set of instruments. This provides access to a broader\nrange of events in order to answer specific science questions. The difficulty\nin scheduling coordinated observations for solar-flare research is discussed\nwith respect to instruments projected to begin operations during Solar Cycle\n25, such as DKIST, Solar Orbiter, and Parker Solar Probe.\n", "title": "On the Performance of Multi-Instrument Solar Flare Observations During Solar Cycle 24" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6711, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " New features and enhancements for the SPIKE banded solver are presented.\nAmong all the SPIKE algorithm versions, we focus our attention on the recursive\nSPIKE technique which provides the best trade-off between generality and\nparallel efficiency, but was known for its lack of flexibility. Its application\nwas essentially limited to power of two number of cores/processors. This\nlimitation is successfully addressed in this paper. In addition, we present a\nnew transpose solve option, a standard feature of most numerical solver\nlibraries which has never been addressed by the SPIKE algorithm so far. A\npivoting recursive SPIKE strategy is finally presented as an alternative to\nnon-pivoting scheme for systems with large condition numbers. All these new\nenhancements participate to create a feature complete SPIKE algorithm and a new\nblack-box SPIKE-OpenMP package that significantly outperforms the performance\nand scalability obtained with other state-of-the-art banded solvers.\n", "title": "A Feature Complete SPIKE Banded Algorithm and Solver" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 6712, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " We describe the neutrino flavor (e = electron, u = muon, t = tau) masses as\nm(i=e;u;t)= m + [Delta]mi with |[Delta]mij|/m < 1 and probably |[Delta]mij|/m\n<< 1. The quantity m is the degenerate neutrino mass. Because neutrino flavor\nis not a quantum number, this degenerate mass appears in the neutrino equation\nof state. We apply a Monte Carlo computational physics technique to the Local\nGroup (LG) of galaxies to determine an approximate location for a Dark Matter\nembedding condensed neutrino object(CNO). The calculation is based on the\nrotational properties of the only spiral galaxies within the LG: M31, M33 and\nthe Milky Way. CNOs could be the Dark Matter everyone is looking for and we\nestimate the CNO embedding the LG to have a mass 5.17x10^15 Mo and a radius\n1.316 Mpc, with the estimated value of m ~= 0.8 eV/c2. The up-coming KATRIN\nexperiment will either be the definitive result or eliminate condensed\nneutrinos as a Dark Matter candidate.\n", "title": "Dark Matter in the Local Group of Galaxies" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6713, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " GPUs and other accelerators are popular devices for accelerating\ncompute-intensive, parallelizable applications. However, programming these\ndevices is a difficult task. Writing efficient device code is challenging, and\nis typically done in a low-level programming language. High-level languages are\nrarely supported, or do not integrate with the rest of the high-level language\necosystem. To overcome this, we propose compiler infrastructure to efficiently\nadd support for new hardware or environments to an existing programming\nlanguage.\nWe evaluate our approach by adding support for NVIDIA GPUs to the Julia\nprogramming language. By integrating with the existing compiler, we\nsignificantly lower the cost to implement and maintain the new compiler, and\nfacilitate reuse of existing application code. Moreover, use of the high-level\nJulia programming language enables new and dynamic approaches for GPU\nprogramming. This greatly improves programmer productivity, while maintaining\napplication performance similar to that of the official NVIDIA CUDA toolkit.\n", "title": "Effective Extensible Programming: Unleashing Julia on GPUs" }
prediction: null, prediction_agent: null, annotation: [ "Computer Science" ], annotation_agent: null, multi_label: true, explanation: null, id: 6714, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " Stabilization of linear systems with unknown dynamics is a canonical problem\nin adaptive control. Since the lack of knowledge of system parameters can cause\nit to become destabilized, an adaptive stabilization procedure is needed prior\nto regulation. Therefore, the adaptive stabilization needs to be completed in\nfinite time. In order to achieve this goal, asymptotic approaches are not very\nhelpful. There are only a few existing non-asymptotic results and a full\ntreatment of the problem is not currently available.\nIn this work, leveraging the novel method of random linear feedbacks, we\nestablish high probability guarantees for finite time stabilization. Our\nresults hold for remarkably general settings because we carefully choose a\nminimal set of assumptions. These include stabilizability of the underlying\nsystem and restricting the degree of heaviness of the noise distribution. To\nderive our results, we also introduce a number of new concepts and technical\ntools to address regularity and instability of the closed-loop matrix.\n", "title": "Finite Time Adaptive Stabilization of LQ Systems" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6715, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " We compute the second coefficient of the composition of two Berezin-Toeplitz\noperators associated with the $\\text{spin}^c$ Dirac operator on a symplectic\nmanifold, making use of the full-off diagonal expansion of the Bergman kernel.\n", "title": "On the composition of Berezin-Toeplitz operators on symplectic manifolds" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6716, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " For given convex integrands $\\gamma_{{}_{i}}: S^{n}\\to \\mathbb{R}_{+}$ (where\n$i=1, 2$), the functions $\\gamma_{{}_{max}}$ and $\\gamma_{{}_{min}}$ can be\ndefined as natural way. In this paper, we show that the Wulff shape of\n$\\gamma_{{}_{max}}$ (resp. the Wulff shape of $\\gamma_{{}_{min}}$) is exactly\nthe convex hull of $(\\mathcal{W}_{\\gamma_{{}_{1}}}\\cup\n\\mathcal{W}_{\\gamma_{{}_{2}}})$ (resp. $\\mathcal{W}_{\\gamma_{{}_{1}}}\\cap\n\\mathcal{W}_{\\gamma_{{}_{2}}}$).\n", "title": "Maximum and minimum operators of convex integrands" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6717, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Kernel methods are powerful learning methodologies that provide a simple way\nto construct nonlinear algorithms from linear ones. Despite their popularity,\nthey suffer from poor scalability in big data scenarios. Various approximation\nmethods, including random feature approximation have been proposed to alleviate\nthe problem. However, the statistical consistency of most of these approximate\nkernel methods is not well understood except for kernel ridge regression\nwherein it has been shown that the random feature approximation is not only\ncomputationally efficient but also statistically consistent with a minimax\noptimal rate of convergence. In this paper, we investigate the efficacy of\nrandom feature approximation in the context of kernel principal component\nanalysis (KPCA) by studying the trade-off between computational and statistical\nbehaviors of approximate KPCA. We show that the approximate KPCA is both\ncomputationally and statistically efficient compared to KPCA in terms of the\nerror associated with reconstructing a kernel function based on its projection\nonto the corresponding eigenspaces. Depending on the eigenvalue decay behavior\nof the covariance operator, we show that only $n^{2/3}$ features (polynomial\ndecay) or $\\sqrt{n}$ features (exponential decay) are needed to match the\nstatistical performance of KPCA. We also investigate their statistical\nbehaviors in terms of the convergence of corresponding eigenspaces wherein we\nshow that only $\\sqrt{n}$ features are required to match the performance of\nKPCA and if fewer than $\\sqrt{n}$ features are used, then approximate KPCA has\na worse statistical behavior than that of KPCA.\n", "title": "Approximate Kernel PCA Using Random Features: Computational vs. Statistical Trade-off" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6718, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " This paper presents a non-manual design engineering method based on heuristic\nsearch algorithm to search for candidate agents in the solution space which\nformed by artificial intelligence agents modeled on the base of\nbionics.Compared with the artificial design method represented by meta-learning\nand the bionics method represented by the neural architecture chip,this method\nis more feasible for realizing artificial general intelligence,and it has a\nmuch better interaction with cognitive neuroscience;at the same time,the\nengineering method is based on the theoretical hypothesis that the final\nlearning algorithm is stable in certain scenarios,and has generalization\nability in various scenarios.The paper discusses the theory preliminarily and\nproposes the possible correlation between the theory and the fixed-point\ntheorem in the field of mathematics.Limited by the author's knowledge\nlevel,this correlation is proposed only as a kind of conjecture.\n", "title": "A Heuristic Search Algorithm Using the Stability of Learning Algorithms in Certain Scenarios as the Fitness Function: An Artificial General Intelligence Engineering Approach" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6719, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The origin and life-cycle of molecular clouds are still poorly constrained,\ndespite their importance for understanding the evolution of the interstellar\nmedium. We have carried out a systematic, homogeneous, spectroscopic survey of\nthe inner Galactic plane, in order to complement the many continuum Galactic\nsurveys available with crucial distance and gas-kinematic information. Our aim\nis to combine this data set with recent infrared to sub-millimetre surveys at\nsimilar angular resolutions. The SEDIGISM survey covers 78 deg^2 of the inner\nGalaxy (-60 deg < l < +18 deg, |b| < 0.5 deg) in the J=2-1 rotational\ntransition of 13CO. This isotopologue of CO is less abundant than 12CO by\nfactors up to 100. Therefore, its emission has low to moderate optical depths,\nand higher critical density, making it an ideal tracer of the cold, dense\ninterstellar medium. The data have been observed with the SHFI single-pixel\ninstrument at APEX. The observational setup covers the 13CO(2-1) and C18O(2-1)\nlines, plus several transitions from other molecules. The observations have\nbeen completed. Data reduction is in progress, and the final data products will\nbe made available in the near future. Here we give a detailed description of\nthe survey and the dedicated data reduction pipeline. Preliminary results based\non a science demonstration field covering -20 deg < l < -18.5 deg are\npresented. Analysis of the 13CO(2-1) data in this field reveals compact clumps,\ndiffuse clouds, and filamentary structures at a range of heliocentric\ndistances. By combining our data with data in the (1-0) transition of CO\nisotopologues from the ThrUMMS survey, we are able to compute a 3D realization\nof the excitation temperature and optical depth in the interstellar medium.\nUltimately, this survey will provide a detailed, global view of the inner\nGalactic interstellar medium at an unprecedented angular resolution of ~30\".\n", "title": "SEDIGISM: Structure, excitation, and dynamics of the inner Galactic interstellar medium" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6720, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The ability to accurately predict and simulate human driving behavior is\ncritical for the development of intelligent transportation systems. Traditional\nmodeling methods have employed simple parametric models and behavioral cloning.\nThis paper adopts a method for overcoming the problem of cascading errors\ninherent in prior approaches, resulting in realistic behavior that is robust to\ntrajectory perturbations. We extend Generative Adversarial Imitation Learning\nto the training of recurrent policies, and we demonstrate that our model\noutperforms rule-based controllers and maximum likelihood models in realistic\nhighway simulations. Our model both reproduces emergent behavior of human\ndrivers, such as lane change rate, while maintaining realistic control over\nlong time horizons.\n", "title": "Imitating Driver Behavior with Generative Adversarial Networks" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6721, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Topological nodal line semimetals are characterized by the crossing of the\nconduction and valence bands along one or more closed loops in the Brillouin\nzone. Usually, these loops are either isolated or touch each other at some\nhighly symmetric points. Here, we introduce a new kind of nodal line semimetal,\nthat contains a pair of linked nodal loops. A concrete two-band model was\nconstructed, which supports a pair of nodal lines with a double-helix\nstructure, which can be further twisted into a Hopf link because of the\nperiodicity of the Brillouin zone. The nodal lines are stabilized by the\ncombined spatial inversion $\\mathcal{P}$ and time reversal $\\mathcal{T}$\nsymmetry; the individual $\\mathcal{P}$ and $\\mathcal{T}$ symmetries must be\nbroken. The band exhibits nontrivial topology that each nodal loop carries a\n$\\pi$ Berry flux. Surface flat bands emerge at the open boundary and are\nexactly encircled by the projection of the nodal lines on the surface Brillouin\nzone. The experimental implementation of our model using cold atoms in optical\nlattices is discussed.\n", "title": "Topological semimetals with double-helix nodal link" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6722, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " In this paper, an artificial intelligence based grid hardening model is\nproposed with the objective of improving power grid resilience in response to\nextreme weather events. At first, a machine learning model is proposed to\npredict the component states (either operational or outage) in response to the\nextreme event. Then, these predictions are fed into a hardening model, which\ndetermines strategic locations for placement of distributed generation (DG)\nunits. In contrast to existing literature in hardening and resilience\nenhancement, this paper co-optimizes grid economic and resilience objectives by\nconsidering the intricate dependencies of the two. The numerical simulations on\nthe standard IEEE 118-bus test system illustrate the merits and applicability\nof the proposed hardening model. The results indicate that the proposed\nhardening model through decentralized and distributed local energy resources\ncan produce a more robust solution that can protect the system significantly\nagainst multiple component outages due to an extreme event.\n", "title": "Artificial Intelligence Assisted Power Grid Hardening in Response to Extreme Weather Events" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6723, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " Let $\\mathbb{F}_q$ be a finite field. Given two irreducible polynomials $f,g$\nover $\\mathbb{F}_q$, with $\\mathrm{deg} f$ dividing $\\mathrm{deg} g$, the\nfinite field embedding problem asks to compute an explicit description of a\nfield embedding of $\\mathbb{F}_q[X]/f(X)$ into $\\mathbb{F}_q[Y]/g(Y)$. When\n$\\mathrm{deg} f = \\mathrm{deg} g$, this is also known as the isomorphism\nproblem.\nThis problem, a special instance of polynomial factorization, plays a central\nrole in computer algebra software. We review previous algorithms, due to\nLenstra, Allombert, Rains, and Narayanan, and propose improvements and\ngeneralizations. Our detailed complexity analysis shows that our newly proposed\nvariants are at least as efficient as previously known algorithms, and in many\ncases significantly better.\nWe also implement most of the presented algorithms, compare them with the\nstate of the art computer algebra software, and make the code available as open\nsource. Our experiments show that our new variants consistently outperform\navailable software.\n", "title": "Computing isomorphisms and embeddings of finite fields" }
prediction: null, prediction_agent: null, annotation: null, annotation_agent: null, multi_label: true, explanation: null, id: 6724, metadata: null, status: Default, event_timestamp: null, metrics: null

text: null
{ "abstract": " The current-driven domain wall motion in a ratchet memory due to spin-orbit\ntorques is studied from both full micromagnetic simulations and the one\ndimensional model. Within the framework of this model, the integration of the\nanisotropy energy contribution leads to a new term in the well known q-$\\Phi$\nequations, being this contribution responsible for driving the domain wall to\nan equilibrium position. The comparison between the results drawn by the one\ndimensional model and full micromagnetic simulations proves the utility of such\na model in order to predict the current-driven domain wall motion in the\nratchet memory. Additionally, since current pulses are applied, the paper shows\nhow the proper working of such a device requires the adequate balance of\nexcitation and relaxation times, being the latter longer than the former.\nFinally, the current-driven regime of a ratchet memory is compared to the\nfield-driven regime described elsewhere, then highlighting the advantages of\nthis current-driven regime.\n", "title": "Analysis of the current-driven domain wall motion in a ratchet ferromagnetic strip" }
prediction: null, prediction_agent: null, annotation: [ "Physics" ], annotation_agent: null, multi_label: true, explanation: null, id: 6725, metadata: null, status: Validated, event_timestamp: null, metrics: null

text: null
{ "abstract": " Let $(R,\\mathfrak{m})$ be a $d$-dimensional Cohen-Macaulay local ring, $I$ an\n$\\mathfrak{m}$-primary ideal of $R$ and $J=(x_1,...,x_d)$ a minimal reduction\nof $I$. We show that if $J_{d-1}=(x_1,...,x_{d-1})$ and\n$\\sum\\limits_{n=1}^\\infty\\lambda{({I^{n+1}\\cap J_{d-1}})/({J{I^n} \\cap\nJ_{d-1}})=i}$ where i=0,1, then depth $G(I)\\geq{d-i-1}$. Moreover, we prove\nthat if $e_2(I) = \\sum_{n=2}^\\infty (n-1) \\lambda (I^n/JI^{n-1})-2;$ or if $I$\nis integrally closed and $e_2(I) = \\sum_{n=2}^\\infty\n(n-1)\\lambda({I^{n}}/JI^{n-1})-i$ where $i=3,4$, then $e_1(I) =\n\\sum_{n=1}^\\infty \\lambda(I^n / JI^{n-1})-1.$ In addition, we show that $r(I)$\nis independent. Furthermore, we study the independence of $r(I)$ with some\nother conditions.\n", "title": "On the Hilbert coefficients, depth of associated graded rings and reduction numbers" }
null
null
null
null
true
null
6726
null
Default
null
null
null
{ "abstract": " End-to-end (E2E) systems have achieved competitive results compared to\nconventional hybrid hidden Markov model (HMM)-deep neural network based\nautomatic speech recognition (ASR) systems. Such E2E systems are attractive due\nto the lack of dependence on alignments between input acoustic and output\ngrapheme or HMM state sequence during training. This paper explores the design\nof an ASR-free end-to-end system for text query-based keyword search (KWS) from\nspeech trained with minimal supervision. Our E2E KWS system consists of three\nsub-systems. The first sub-system is a recurrent neural network (RNN)-based\nacoustic auto-encoder trained to reconstruct the audio through a\nfinite-dimensional representation. The second sub-system is a character-level\nRNN language model using embeddings learned from a convolutional neural\nnetwork. Since the acoustic and text query embeddings occupy different\nrepresentation spaces, they are input to a third feed-forward neural network\nthat predicts whether the query occurs in the acoustic utterance or not. This\nE2E ASR-free KWS system performs respectably despite lacking a conventional ASR\nsystem and trains much faster.\n", "title": "End-to-End ASR-free Keyword Search from Speech" }
null
null
null
null
true
null
6727
null
Default
null
null
null
{ "abstract": " In LHC Run 3, ALICE will increase the data taking rate significantly to\n50\\,kHz continuous read out of minimum bias Pb-Pb events. This challenges the\nonline and offline computing infrastructure, requiring to process 50 times as\nmany events per second as in Run 2, and increasing the data compression ratio\nfrom 5 to 20. Such high data compression is impossible by lossless ZIP-like\nalgorithms, but it must use results from online reconstruction, which in turn\nrequires online calibration. These important online processing steps are the\nmost computing-intense ones, and will use GPUs as hardware accelerators. The\nnew online features are already under test during Run 2 in the High Level\nTrigger (HLT) online processing farm. The TPC (Time Projection Chamber)\ntracking algorithm for Run 3 is derived from the current HLT online tracking\nand is based on the Cellular Automaton and Kalman Filter. HLT has deployed\nonline calibration for the TPC drift time, which needs to be extended to space\ncharge distortions calibration. This requires online reconstruction for\nadditional detectors like TRD (Transition Radiation Detector) and TOF (Time Of\nFlight). We present prototypes of these developments, in particular a data\ncompression algorithm that achieves a compression factor of~9 on Run 2 TPC\ndata, and the efficiency of online TRD tracking. We give an outlook to the\nchallenges of TPC tracking with continuous read out.\n", "title": "Tracking performance in high multiplicities environment at ALICE" }
null
null
null
null
true
null
6728
null
Default
null
null
null
{ "abstract": " Let $\\Omega$ be a $C^2$-smooth bounded pseudoconvex domain in $\\mathbb{C}^n$\nfor $n\\geq 2$ and let $\\varphi$ be a holomorphic function on $\\Omega$ that is\n$C^2$-smooth on the closure of $\\Omega$. We prove that if\n$H_{\\overline{\\varphi}}$ is in Schatten $p$-class for $p\\leq 2n$ then $\\varphi$\nis a constant function. As a corollary, we show that the\n$\\overline{\\partial}$-Neumann operator on $\\Omega$ is not Hilbert-Schmidt.\n", "title": "Schatten class Hankel and $\\overline{\\partial}$-Neumann operators on pseudoconvex domains in $\\mathbb{C}^n$" }
null
null
null
null
true
null
6729
null
Default
null
null
null
{ "abstract": " The composition of natural liquidity has been changing over time. An analysis\nof intraday volumes for the S&P500 constituent stocks illustrates that (i)\nvolume surprises, i.e., deviations from their respective forecasts, are\ncorrelated across stocks, and (ii) this correlation increases during the last\nfew hours of the trading session. These observations could be attributed, in\npart, to the prevalence of portfolio trading activity that is implicit in the\ngrowth of ETF, passive and systematic investment strategies; and, to the\nincreased trading intensity of such strategies towards the end of the trading\nsession, e.g., due to execution of mutual fund inflows/outflows that are\nbenchmarked to the closing price on each day. In this paper, we investigate the\nconsequences of such portfolio liquidity on price impact and portfolio\nexecution. We derive a linear cross-asset market impact from a stylized model\nthat explicitly captures the fact that a certain fraction of natural liquidity\nproviders only trade portfolios of stocks whenever they choose to execute. We\nfind that due to cross-impact and its intraday variation, it is optimal for a\nrisk-neutral, cost minimizing liquidator to execute a portfolio of orders in a\ncoupled manner, as opposed to a separable VWAP-like execution that is often\nassumed. The optimal schedule couples the execution of the various orders so as\nto be able to take advantage of increased portfolio liquidity towards the end\nof the day. A worst case analysis shows that the potential cost reduction from\nthis optimized execution schedule over the separable approach can be as high as\n6% for plausible model parameters. 
Finally, we discuss how to estimate\ncross-sectional price impact if one had a dataset of realized portfolio\ntransaction records that exploits the low-rank structure of its coefficient\nmatrix suggested by our analysis.\n", "title": "Cross-Sectional Variation of Intraday Liquidity, Cross-Impact, and their Effect on Portfolio Execution" }
null
null
[ "Quantitative Finance" ]
null
true
null
6730
null
Validated
null
null
null
{ "abstract": " Here we present an in-depth study of the behaviour of the Fast Folding\nAlgorithm, an alternative pulsar searching technique to the Fast Fourier\nTransform. Weaknesses in the Fast Fourier Transform, including a susceptibility\nto red noise, leave it insensitive to pulsars with long rotational periods (P >\n1 s). This sensitivity gap has the potential to bias our understanding of the\nperiod distribution of the pulsar population. The Fast Folding Algorithm, a\ntime-domain based pulsar searching technique, has the potential to overcome\nsome of these biases. Modern distributed-computing frameworks now allow for the\napplication of this algorithm to all-sky blind pulsar surveys for the first\ntime. However, many aspects of the behaviour of this search technique remain\npoorly understood, including its responsiveness to variations in pulse shape\nand the presence of red noise. Using a custom CPU-based implementation of the\nFast Folding Algorithm, ffancy, we have conducted an in-depth study into the\nbehaviour of the Fast Folding Algorithm in both an ideal, white noise regime as\nwell as a trial on observational data from the HTRU-S Low Latitude pulsar\nsurvey, including a comparison to the behaviour of the Fast Fourier Transform.\nWe are able to both confirm and expand upon earlier studies that demonstrate\nthe ability of the Fast Folding Algorithm to outperform the Fast Fourier\nTransform under ideal white noise conditions, and demonstrate a significant\nimprovement in sensitivity to long-period pulsars in real observational data\nthrough the use of the Fast Folding Algorithm.\n", "title": "An investigation of pulsar searching techniques with the Fast Folding Algorithm" }
null
null
null
null
true
null
6731
null
Default
null
null
null
{ "abstract": " A famous result of Jurgen Moser states that a symplectic form on a compact\nmanifold cannot be deformed within its cohomology class to an inequivalent\nsymplectic form. It is well known that this does not hold in general for\nnoncompact symplectic manifolds. The notion of Eliashberg-Gromov convex ends\nprovides a natural restricted setting for the study of analogs of Moser's\nsymplectic stability result in the noncompact case, and this has been\nsignificantly developed in work of Cieliebak-Eliashberg. Retaining the end\nstructure on the underlying smooth manifold, but dropping the convexity and\ncompleteness assumptions on the symplectic forms at infinity we show that\nsymplectic stability holds under a natural growth condition on the path of\nsymplectic forms. The result can be straightforwardly applied as we show\nthrough explicit examples.\n", "title": "Symplectic stability on manifolds with cylindrical ends" }
null
null
null
null
true
null
6732
null
Default
null
null
null
{ "abstract": " Brillouin light spectroscopy is a powerful and robust technique for measuring\nthe interfacial Dzyaloshinskii-Moriya interaction in thin films with broken\ninversion symmetry. Here we show that the magnon visibility, i.e. the intensity\nof the inelastically scattered light, strongly depends on the thickness of the\ndielectric seed material - SiO$_2$. By using both, analytical thin-film optics\nand numerical calculations, we reproduce the experimental data. We therefore\nprovide a guideline for the maximization of the signal by adapting the\nsubstrate properties to the geometry of the measurement. Such a boost-up of the\nsignal eases the magnon visualization in ultrathin magnetic films, speeds-up\nthe measurement and increases the reliability of the data.\n", "title": "Making the Dzyaloshinskii-Moriya interaction visible" }
null
null
null
null
true
null
6733
null
Default
null
null
null
{ "abstract": " We develop a framework for approximating collapsed Gibbs sampling in\ngenerative latent variable cluster models. Collapsed Gibbs is a popular MCMC\nmethod, which integrates out variables in the posterior to improve mixing.\nUnfortunately for many complex models, integrating out these variables is\neither analytically or computationally intractable. We efficiently approximate\nthe necessary collapsed Gibbs integrals by borrowing ideas from expectation\npropagation. We present two case studies where exact collapsed Gibbs sampling\nis intractable: mixtures of Student-t's and time series clustering. Our\nexperiments on real and synthetic data show that our approximate sampler\nenables a runtime-accuracy tradeoff in sampling these types of models,\nproviding results with competitive accuracy much more rapidly than the naive\nGibbs samplers one would otherwise rely on in these scenarios.\n", "title": "Approximate Collapsed Gibbs Clustering with Expectation Propagation" }
null
null
null
null
true
null
6734
null
Default
null
null
null
{ "abstract": " The key feature of a thermophotovoltaic (TPV) emitter is the enhancement of\nthermal emission corresponding to energies just above the bandgap of the\nabsorbing photovoltaic cell and simultaneous suppression of thermal emission\nbelow the bandgap. We show here that a single layer plasmonic coating can\nperform this task with high efficiency. Our key design principle involves\ntuning the epsilon-near-zero frequency (plasma frequency) of the metal acting\nas a thermal emitter to the electronic bandgap of the semiconducting cell. This\napproach utilizes the change in reflectivity of a metal near its plasma\nfrequency (epsilon-near-zero frequency) to lead to spectrally selective thermal\nemission and can be adapted to large area coatings using high temperature\nplasmonic materials. We provide a detailed analysis of the spectral and angular\nperformance of high temperature plasmonic coatings as TPV emitters. We show the\npotential of such high temperature plasmonic thermal emitter coatings (p-TECs)\nfor narrowband near-field thermal emission. We also show the enhancement of\nnear-surface energy density in graphene-multilayer thermal metamaterials due to\na topological transition at an effective epsilon-near-zero frequency. This\nopens up spectrally selective thermal emission from graphene multilayers in the\ninfrared frequency regime. Our design paves the way for the development of\nsingle layer p-TECs and graphene multilayers for spectrally selective radiative\nheat transfer applications.\n", "title": "Thermal graphene metamaterials and epsilon-near-zero high temperature plasmonics" }
null
null
[ "Physics" ]
null
true
null
6735
null
Validated
null
null
null
{ "abstract": " The electric coupling between surface ions and bulk ferroelectricity gives\nrise to a continuum of mixed states in ferroelectric thin films, exquisitely\nsensitive to temperature and external factors, such as applied voltage and\noxygen pressure. Here we develop the comprehensive analytical description of\nthese coupled ferroelectric and ionic (\"ferroionic\") states by combining the\nGinzburg-Landau-Devonshire description of the ferroelectric properties of the\nfilm with Langmuir adsorption model for the electrochemical reaction at the\nfilm surface. We explore the thermodynamic and kinetic characteristics of the\nferroionic states as a function of temperature, film thickness, and external\nelectric potential. These studies provide a new insight into mesoscopic\nproperties of ferroelectric thin films, whose surface is exposed to chemical\nenvironment as screening charges supplier.\n", "title": "Ferroionic states in ferroelectric thin films" }
null
null
null
null
true
null
6736
null
Default
null
null
null
{ "abstract": " In this paper we provide new quantum algorithms with polynomial speed-up for\na range of problems for which no such results were known, or we improve\nprevious algorithms. First, we consider the approximation of the frequency\nmoments $F_k$ of order $k \\geq 3$ in the multi-pass streaming model with\nupdates (turnstile model). We design a $P$-pass quantum streaming algorithm\nwith memory $M$ satisfying a tradeoff of $P^2 M = \\tilde{O}(n^{1-2/k})$,\nwhereas the best classical algorithm requires $P M = \\Theta(n^{1-2/k})$. Then,\nwe study the problem of estimating the number $m$ of edges and the number $t$\nof triangles given query access to an $n$-vertex graph. We describe optimal\nquantum algorithms that perform $\\tilde{O}(\\sqrt{n}/m^{1/4})$ and\n$\\tilde{O}(\\sqrt{n}/t^{1/6} + m^{3/4}/\\sqrt{t})$ queries respectively. This is\na quadratic speed-up compared to the classical complexity of these problems.\nFor this purpose we develop a new quantum paradigm that we call Quantum\nChebyshev's inequality. Namely we demonstrate that, in a certain model of\nquantum sampling, one can approximate with relative error the mean of any\nrandom variable with a number of quantum samples that is linear in the ratio of\nthe square root of the variance to the mean. Classically the dependency is\nquadratic. Our algorithm subsumes a previous result of Montanaro [Mon15]. This\nnew paradigm is based on a refinement of the Amplitude Estimation algorithm of\nBrassard et al. [BHMT02] and of previous quantum algorithms for the mean\nestimation problem. We show that this speed-up is optimal, and we identify\nanother common model of quantum sampling where it cannot be obtained. For our\napplications, we also adapt the variable-time amplitude amplification technique\nof Ambainis [Amb10] into a variable-time amplitude estimation algorithm.\n", "title": "Quantum Chebyshev's Inequality and Applications" }
null
null
null
null
true
null
6737
null
Default
null
null
null
{ "abstract": " We propose a data-driven algorithm for the maximum a posteriori (MAP)\nestimation of stochastic processes from noisy observations. The primary\nstatistical properties of the sought signal is specified by the penalty\nfunction (i.e., negative logarithm of the prior probability density function).\nOur alternating direction method of multipliers (ADMM)-based approach\ntranslates the estimation task into successive applications of the proximal\nmapping of the penalty function. Capitalizing on this direct link, we define\nthe proximal operator as a parametric spline curve and optimize the spline\ncoefficients by minimizing the average reconstruction error for a given\ntraining set. The key aspects of our learning method are that the associated\npenalty function is constrained to be convex and the convergence of the ADMM\niterations is proven. As a result of these theoretical guarantees, adaptation\nof the proposed framework to different levels of measurement noise is extremely\nsimple and does not require any retraining. We apply our method to estimation\nof both sparse and non-sparse models of Lévy processes for which the\nminimum mean square error (MMSE) estimators are available. We carry out a\nsingle training session and perform comparisons at various signal-to-noise\nratio (SNR) values. Simulations illustrate that the performance of our\nalgorithm is practically identical to the one of the MMSE estimator\nirrespective of the noise power.\n", "title": "Learning Convex Regularizers for Optimal Bayesian Denoising" }
null
null
null
null
true
null
6738
null
Default
null
null
null
{ "abstract": " Let $H=-\\Delta+V$ be a Schrödinger operator on $L^2(\\mathbb R^2)$ with\nreal-valued potential $V$, and let $H_0=-\\Delta$. If $V$ has sufficient\npointwise decay, the wave operators $W_{\\pm}=s-\\lim_{t\\to \\pm\\infty}\ne^{itH}e^{-itH_0}$ are known to be bounded on $L^p(\\mathbb R^2)$ for all $1< p<\n\\infty$ if zero is not an eigenvalue or resonance. We show that if there is an\ns-wave resonance or an eigenvalue only at zero, then the wave operators are\nbounded on $L^p(\\mathbb R^2)$ for $1 < p<\\infty$. This result stands in\ncontrast to results in higher dimensions, where the presence of zero energy\nobstructions is known to shrink the range of valid exponents $p$.\n", "title": "On the $L^p$ boundedness of wave operators for two-dimensional Schrödinger operators with threshold obstructions" }
null
null
null
null
true
null
6739
null
Default
null
null
null
{ "abstract": " As the bioinformatics field grows, it must keep pace not only with new data\nbut with new algorithms. Here we contribute a thorough analysis of 13\nstate-of-the-art, commonly used machine learning algorithms on a set of 165\npublicly available classification problems in order to provide data-driven\nalgorithm recommendations to current researchers. We present a number of\nstatistical and visual comparisons of algorithm performance and quantify the\neffect of model selection and algorithm tuning for each algorithm and dataset.\nThe analysis culminates in the recommendation of five algorithms with\nhyperparameters that maximize classifier performance across the tested\nproblems, as well as general guidelines for applying machine learning to\nsupervised classification problems.\n", "title": "Data-driven Advice for Applying Machine Learning to Bioinformatics Problems" }
null
null
null
null
true
null
6740
null
Default
null
null
null
{ "abstract": " The mainstream of research in genetics, epigenetics and imaging data analysis\nfocuses on statistical association or exploring statistical dependence between\nvariables. Despite their significant progresses in genetic research,\nunderstanding the etiology and mechanism of complex phenotypes remains elusive.\nUsing association analysis as a major analytical platform for the complex data\nanalysis is a key issue that hampers the theoretic development of genomic\nscience and its application in practice. Causal inference is an essential\ncomponent for the discovery of mechanical relationships among complex\nphenotypes. Many researchers suggest making the transition from association to\ncausation. Despite its fundamental role in science, engineering and\nbiomedicine, the traditional methods for causal inference require at least\nthree variables. However, quantitative genetic analysis such as QTL, eQTL,\nmQTL, and genomic-imaging data analysis requires exploring the causal\nrelationships between two variables. This paper will focus on bivariate causal\ndiscovery. We will introduce independence of cause and mechanism (ICM) as a\nbasic principle for causal inference, algorithmic information theory and\nadditive noise model (ANM) as major tools for bivariate causal discovery.\nLarge-scale simulations will be performed to evaluate the feasibility of the\nANM for bivariate causal discovery. To further evaluate their performance for\ncausal inference, the ANM will be applied to the construction of gene\nregulatory networks. Also, the ANM will be applied to trait-imaging data\nanalysis to illustrate three scenarios: presence of both causation and\nassociation, presence of association while absence of causation, and presence\nof causation, while lack of association between two variables.\n", "title": "Bivariate Causal Discovery and its Applications to Gene Expression and Imaging Data Analysis" }
null
null
null
null
true
null
6741
null
Default
null
null
null
{ "abstract": " One of the key differences between the learning mechanism of humans and\nArtificial Neural Networks (ANNs) is the ability of humans to learn one task at\na time. ANNs, on the other hand, can only learn multiple tasks simultaneously.\nAny attempts at learning new tasks incrementally cause them to completely\nforget about previous tasks. This lack of ability to learn incrementally,\ncalled Catastrophic Forgetting, is considered a major hurdle in building a true\nAI system. In this paper, our goal is to isolate the truly effective existing\nideas for incremental learning from those that only work under certain\nconditions. To this end, we first thoroughly analyze the current state of the\nart (iCaRL) method for incremental learning and demonstrate that the good\nperformance of the system is not because of the reasons presented in the\nexisting literature. We conclude that the success of iCaRL is primarily due to\nknowledge distillation and recognize a key limitation of knowledge\ndistillation, i.e, it often leads to bias in classifiers. Finally, we propose a\ndynamic threshold moving algorithm that is able to successfully remove this\nbias. We demonstrate the effectiveness of our algorithm on CIFAR100 and MNIST\ndatasets showing near-optimal results. Our implementation is available at\nthis https URL.\n", "title": "Revisiting Distillation and Incremental Classifier Learning" }
null
null
null
null
true
null
6742
null
Default
null
null
null
{ "abstract": " Human-in-the-loop manipulation is useful in when autonomous grasping is not\nable to deal sufficiently well with corner cases or cannot operate fast enough.\nUsing the teleoperator's hand as an input device can provide an intuitive\ncontrol method but requires mapping between pose spaces which may not be\nsimilar. We propose a low-dimensional and continuous teleoperation subspace\nwhich can be used as an intermediary for mapping between different hand pose\nspaces. We present an algorithm to project between pose space and teleoperation\nsubspace. We use a non-anthropomorphic robot to experimentally prove that it is\npossible for teleoperation subspaces to effectively and intuitively enable\nteleoperation. In experiments, novice users completed pick and place tasks\nsignificantly faster using teleoperation subspace mapping than they did using\nstate of the art teleoperation methods.\n", "title": "Intuitive Hand Teleoperation by Novice Operators Using a Continuous Teleoperation Subspace" }
null
null
null
null
true
null
6743
null
Default
null
null
null
{ "abstract": " American cities devote significant resources to the implementation of traffic\nsafety countermeasures that prevent pedestrian fatalities. However, the\nbefore-after comparisons typically used to evaluate the success of these\ncountermeasures often suffer from selection bias. This paper motivates the\ntendency for selection bias to overestimate the benefits of traffic safety\npolicy, using New York City's Vision Zero strategy as an example. The NASS\nGeneral Estimates System, Fatality Analysis Reporting System and other\ndatabases are combined into a Bayesian hierarchical model to calculate a more\nrealistic before-after comparison. The results confirm the before-after\nanalysis of New York City's Vision Zero policy did in fact overestimate the\neffect of the policy, and a more realistic estimate is roughly two-thirds the\nsize.\n", "title": "A Hierarchical Bayes Approach to Adjust for Selection Bias in Before-After Analyses of Vision Zero Policies" }
null
null
null
null
true
null
6744
null
Default
null
null
null
{ "abstract": " It is known that gas bubbles on the surface bounding a fluid flow can change\nthe coefficient of friction and affect the parameters of the boundary layer. In\nthis paper, we propose a method that allows us to create, in the near-wall\nregion, a thin layer of liquid filled with bubbles. It will be shown that if\nthere is an oscillating piezoelectric plate on the surface bounding a liquid,\nthen, under certain conditions, cavitation develops in the boundary layer. The\nrelationship between the parameters of cavitation and the characteristics of\nthe piezoelectric plate oscillations is obtained. Possible applications are\ndiscussed.\n", "title": "Cavitation near the oscillating piezoelectric plate in water" }
null
null
null
null
true
null
6745
null
Default
null
null
null
{ "abstract": " Inverse problems correspond to a certain type of optimization problems\nformulated over appropriate input distributions. Recently, there has been a\ngrowing interest in understanding the computational hardness of these\noptimization problems, not only in the worst case, but in an average-complexity\nsense under this same input distribution.\nIn this revised note, we are interested in studying another aspect of\nhardness, related to the ability to learn how to solve a problem by simply\nobserving a collection of previously solved instances. These 'planted\nsolutions' are used to supervise the training of an appropriate predictive\nmodel that parametrizes a broad class of algorithms, with the hope that the\nresulting model will provide good accuracy-complexity tradeoffs in the average\nsense.\nWe illustrate this setup on the Quadratic Assignment Problem, a fundamental\nproblem in Network Science. We observe that data-driven models based on Graph\nNeural Networks offer intriguingly good performance, even in regimes where\nstandard relaxation based techniques appear to suffer.\n", "title": "Revised Note on Learning Algorithms for Quadratic Assignment with Graph Neural Networks" }
null
null
null
null
true
null
6746
null
Default
null
null
null
{ "abstract": " The general completeness problem of Hoare logic relative to the standard\nmodel $N$ of Peano arithmetic has been studied by Cook, and it allows for the\nuse of arbitrary arithmetical formulas as assertions. In practice, the\nassertions would be simple arithmetical formulas, e.g. of a low level in the\narithmetical hierarchy. In addition, we find that, by restricting inputs to\n$N$, the complexity of the minimal assertion theory for the completeness of\nHoare logic to hold can be reduced. This paper further studies the completeness\nof Hoare Logic relative to $N$ by restricting assertions to subclasses of\narithmetical formulas (and by restricting inputs to $N$). Our completeness\nresults refine Cook's result by reducing the complexity of the assertion\ntheory.\n", "title": "On Completeness Results of Hoare Logic Relative to the Standard Model" }
null
null
null
null
true
null
6747
null
Default
null
null
null
{ "abstract": " In this paper, we present a result similar to the shift-coupling result of\nThorisson (1996) in the context of random graphs and networks. The result is\nthat a given random rooted network can be obtained by changing the root of\nanother given one if and only if the distributions of the two agree on the\ninvariant sigma-field. Several applications of the result are presented for the\ncase of unimodular networks. In particular, it is shown that the distribution\nof a unimodular network is uniquely determined by its restriction to the\ninvariant sigma-filed. Also, the theorem is applied to the existence of an\ninvariant transport kernel that balances between two given (discrete) measures\non the vertices. An application is the existence of a so called extra head\nscheme for the Bernoulli process on an infinite unimodular graph. Moreover, a\nconstruction is presented for balancing transport kernels that is a\ngeneralization of the Gale-Shapley stable matching algorithm in bipartite\ngraphs. Another application is on a general method that covers the situations\nwhere some vertices and edges are added to a unimodular network and then, to\nmake it unimodular, the probability measure is biased and then a new root is\nselected. It is proved that this method provides all possible\nunimodularizations in these situations. Finally, analogous existing results for\nstationary point processes and unimodular networks are discussed in detail.\n", "title": "Shift-Coupling of Random Rooted Graphs and Networks" }
null
null
null
null
true
null
6748
null
Default
null
null
null
{ "abstract": " Quantifying image distortions caused by strong gravitational lensing and\nestimating the corresponding matter distribution in lensing galaxies has been\nprimarily performed by maximum likelihood modeling of observations. This is\ntypically a time and resource-consuming procedure, requiring sophisticated\nlensing codes, several data preparation steps, and finding the maximum\nlikelihood model parameters in a computationally expensive process with\ndownhill optimizers. Accurate analysis of a single lens can take up to a few\nweeks and requires the attention of dedicated experts. Tens of thousands of new\nlenses are expected to be discovered with the upcoming generation of ground and\nspace surveys, the analysis of which can be a challenging task. Here we report\nthe use of deep convolutional neural networks to accurately estimate lensing\nparameters in an extremely fast and automated way, circumventing the\ndifficulties faced by maximum likelihood methods. We also show that lens\nremoval can be made fast and automated using Independent Component Analysis of\nmulti-filter imaging data. Our networks can recover the parameters of the\nSingular Isothermal Ellipsoid density profile, commonly used to model strong\nlensing systems, with an accuracy comparable to the uncertainties of\nsophisticated models, but about ten million times faster: 100 systems in\napproximately 1s on a single graphics processing unit. These networks can\nprovide a way for non-experts to obtain lensing parameter estimates for large\nsamples of data. Our results suggest that neural networks can be a powerful and\nfast alternative to maximum likelihood procedures commonly used in\nastrophysics, radically transforming the traditional methods of data reduction\nand analysis.\n", "title": "Fast Automated Analysis of Strong Gravitational Lenses with Convolutional Neural Networks" }
null
null
null
null
true
null
6749
null
Default
null
null
null
{ "abstract": " Deep neural networks (DNNs) have excellent representative power and are state\nof the art classifiers on many tasks. However, they often do not capture their\nown uncertainties well making them less robust in the real world as they\noverconfidently extrapolate and do not notice domain shift. Gaussian processes\n(GPs) with RBF kernels on the other hand have better calibrated uncertainties\nand do not overconfidently extrapolate far from data in their training set.\nHowever, GPs have poor representational power and do not perform as well as\nDNNs on complex domains. In this paper we show that GP hybrid deep networks,\nGPDNNs, (GPs on top of DNNs and trained end-to-end) inherit the nice properties\nof both GPs and DNNs and are much more robust to adversarial examples. When\nextrapolating to adversarial examples and testing in domain shift settings,\nGPDNNs frequently output high entropy class probabilities corresponding to\nessentially \"don't know\". GPDNNs are therefore promising as deep architectures\nthat know when they don't know.\n", "title": "Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks" }
null
null
[ "Statistics" ]
null
true
null
6750
null
Validated
null
null
null
{ "abstract": " We introduce an elliptic regularization of the PDE system representing the\nisometric immersion of a surface in $\\mathbb R^{3}$. The regularization is\ngeometric, and has a natural variational interpretation.\n", "title": "Elliptic regularization of the isometric immersion problem" }
null
null
null
null
true
null
6751
null
Default
null
null
null
{ "abstract": " This paper presents a system based on a Two-Way Particle-Tracking Model to\nanalyze possible crash positions of flight MH370. The particle simulator\nincludes a simple flow simulation of the debris based on a Lagrangian approach\nand a module to extract appropriated ocean current data from netCDF files. The\ninfluence of wind, waves, immersion depth and hydrodynamic behavior are not\nconsidered in the simulation.\n", "title": "A Debris Backwards Flow Simulation System for Malaysia Airlines Flight 370" }
null
null
null
null
true
null
6752
null
Default
null
null
null
{ "abstract": " One of the major drawbacks of modularized task-completion dialogue systems is\nthat each module is trained individually, which presents several challenges.\nFor example, downstream modules are affected by earlier modules, and the\nperformance of the entire system is not robust to the accumulated errors. This\npaper presents a novel end-to-end learning framework for task-completion\ndialogue systems to tackle such issues. Our neural dialogue system can directly\ninteract with a structured database to assist users in accessing information\nand accomplishing certain tasks. The reinforcement learning based dialogue\nmanager offers robust capabilities to handle noise caused by other components\nof the dialogue system. Our experiments in a movie-ticket booking domain show\nthat our end-to-end system not only outperforms modularized dialogue system\nbaselines for both objective and subjective evaluation, but also is robust to\nnoise, as demonstrated by several systematic experiments with different error\ngranularities and rates specific to the language understanding module.\n", "title": "End-to-End Task-Completion Neural Dialogue Systems" }
null
null
[ "Computer Science" ]
null
true
null
6753
null
Validated
null
null
null
{ "abstract": " Extreme phenotype sampling is a selective genotyping design for genetic\nassociation studies where only individuals with extreme values of a continuous\ntrait are genotyped for a set of genetic variants. Under financial or other\nlimitations, this design is assumed to improve the power to detect associations\nbetween genetic variants and the trait, compared to randomly selecting the same\nnumber of individuals for genotyping. Here we present extensions of likelihood\nmodels that can be used for inference when the data are sampled according to\nthe extreme phenotype sampling design. Computational methods for parameter\nestimation and hypothesis testing are provided. We consider methods for common\nvariant genetic effects and gene-environment interaction effects in linear\nregression models with a normally distributed trait. We use simulated and real\ndata to show that extreme phenotype sampling can be powerful compared to random\nsampling, but that this does not hold for all extreme sampling methods and\nsituations.\n", "title": "Improving power of genetic association studies by extreme phenotype sampling: a review and some new results" }
null
null
null
null
true
null
6754
null
Default
null
null
null
{ "abstract": " The fundamental theory of energy networks in different energy forms is\nestablished following an in-depth analysis of the nature of energy for\ncomprehensive energy utilization. The definition of an energy network is given.\nCombining the generalized balance equation of energy in space and the Pfaffian\nequation, the generalized transfer equations of energy in lines (pipes) are\nproposed. The energy variation laws in the transfer processes are investigated.\nTo establish the equations of energy networks, Kirchhoff's Law in electric\nnetworks is extended to energy networks, which is called the Generalized\nKirchhoff's Law. According to the linear phenomenological law, the generalized\nequivalent energy transfer equations with lumped parameters are derived in\nterms of the characteristic equations of energy transfer in lines (pipes). The\nequations are finally unified into a complete energy network equation system\nand its solvability is further discussed. Experiments are carried out on a\ncombined cooling, heating and power (CCHP) system in engineering; the energy\nnetwork theory proposed in this paper is used to model and analyze this system.\nBy comparing the theoretical results obtained by our modeling approach and the\ndata measured in experiments, the energy equations are validated.\n", "title": "Energy network: towards an interconnected energy infrastructure for the future" }
null
null
null
null
true
null
6755
null
Default
null
null
null
{ "abstract": " The idea is to demonstrate the beauty and power of Alexandrov geometry by\nreaching interesting applications with a minimum of preparation.\nThe topics include\n1. Estimates on the number of collisions in billiards.\n2. Construction of exotic aspherical manifolds.\n3. The geometry of two-convex sets in Euclidean space.\n", "title": "Invitation to Alexandrov geometry: CAT[0] spaces" }
null
null
null
null
true
null
6756
null
Default
null
null
null
{ "abstract": " We prove rigorously that the exact N-electron Hohenberg-Kohn density\nfunctional converges in the strongly interacting limit to the strictly\ncorrelated electrons (SCE) functional, and that the absolute value squared of\nthe associated constrained-search wavefunction tends weakly in the sense of\nprobability measures to a minimizer of the multi-marginal optimal transport\nproblem with Coulomb cost associated to the SCE functional. This extends our\nprevious work for N=2 [CFK11]. The correct limit problem has been derived in\nthe physics literature by Seidl [Se99] and Seidl, Gori-Giorgi and Savin\n[SGS07]; in these papers the lack of a rigorous proof was pointed out.\nWe also give a mathematical counterexample to this type of result, by\nreplacing the constraint of given one-body density -- an infinite-dimensional\nquadratic expression in the wavefunction -- by an infinite-dimensional\nquadratic expression in the wavefunction and its gradient. Connections with the\nLawrentiev phenomenon in the calculus of variations are indicated.\n", "title": "Smoothing of transport plans with fixed marginals and rigorous semiclassical limit of the Hohenberg-Kohn functional" }
null
null
[ "Mathematics" ]
null
true
null
6757
null
Validated
null
null
null
{ "abstract": " In this paper we illustrate the use of the results from [1] proving that\n$D(4)$-triple $\\{a, b, c\\}$ with $a < b < a + 57\\sqrt{a}$ has a unique\nextension to a quadruple with a larger element. This furthermore implies that\n$D(4)$-pair $\\{a, b\\}$ cannot be extended to a quintuple if $a < b < a +\n57\\sqrt{a}$.\n", "title": "The extension of some D(4)-pairs" }
null
null
null
null
true
null
6758
null
Default
null
null
null
{ "abstract": " The technology market is continuing a rapid growth phase in which different resource\nproviders and Cloud Management Frameworks position themselves to provide ad-hoc\nsolutions (in terms of management interfaces, information discovery, or billing),\ntrying to differentiate from competitors, yet as a result remaining incompatible\nwith one another when addressing more complex scenarios like federated\nclouds. Grasping the interoperability problems present in current infrastructures\nis then a must, tackled by studying how existing and emerging standards\ncould enhance user experience in the cloud ecosystem. In this paper we\nreview the current open challenges in Infrastructure as a Service cloud\ninteroperability and federation, as well as point to the potential standards\nthat should alleviate these problems.\n", "title": "Standards for enabling heterogeneous IaaS cloud federations" }
null
null
null
null
true
null
6759
null
Default
null
null
null
{ "abstract": " The data mining field is an important source of large-scale applications and\ndatasets, which are getting more and more common. In this paper, we present\ngrid-based approaches for two basic data mining applications, and a performance\nevaluation on an experimental grid environment that provides interesting\nmonitoring capabilities and configuration tools. We propose a new distributed\nclustering approach and a distributed frequent itemset generation method, both\nwell-adapted to grid environments. Performance evaluation is done using the Condor system\nand its workflow manager DAGMan. We also compare this performance analysis to a\nsimple analytical model to evaluate the overheads related to the workflow\nengine and the underlying grid system. This will specifically show that\nrealistic performance expectations are currently difficult to achieve on the\ngrid.\n", "title": "Grid-based Approaches for Distributed Data Mining Applications" }
null
null
null
null
true
null
6760
null
Default
null
null
null
{ "abstract": " Automatic speaker verification (ASV) systems use a playback detector to\nfilter out playback attacks and ensure verification reliability. Since current\nplayback detection models are almost always trained using genuine and\nplayed-back speech, it may be possible to degrade their performance by\ntransforming the acoustic characteristics of the played-back speech close to\nthat of the genuine speech. One way to do this is to enhance speech \"stolen\"\nfrom the target speaker before playback. We tested the effectiveness of a\nplayback attack using this method by using the speech enhancement generative\nadversarial network to transform acoustic characteristics. Experimental results\nshowed that use of this \"enhanced stolen speech\" method significantly increases\nthe equal error rates for the baseline used in the ASVspoof 2017 challenge and\nfor a light convolutional neural network-based method. The results also showed\nthat its use degrades the performance of a Gaussian mixture model-universal\nbackground model-based ASV system. This type of attack is thus an urgent\nproblem needing to be solved.\n", "title": "Transforming acoustic characteristics to deceive playback spoofing countermeasures of speaker verification systems" }
null
null
null
null
true
null
6761
null
Default
null
null
null
{ "abstract": " We describe the LoopInvGen tool for generating loop invariants that can\nprovably guarantee correctness of a program with respect to a given\nspecification. LoopInvGen is an efficient implementation of the inference\ntechnique originally proposed in our earlier work on PIE\n(this https URL).\nIn contrast to existing techniques, LoopInvGen is not restricted to a fixed\nset of features -- atomic predicates that are composed together to build\ncomplex loop invariants. Instead, we start with no initial features, and use\nprogram synthesis techniques to grow the set on demand. This not only enables a\nless onerous and more expressive approach, but also appears to be significantly\nfaster than the existing tools over the SyGuS-COMP 2017 benchmarks from the INV\ntrack.\n", "title": "LoopInvGen: A Loop Invariant Generator based on Precondition Inference" }
null
null
null
null
true
null
6762
null
Default
null
null
null
{ "abstract": " Topologically protected superfluid phases of $^3$He allow one to simulate\nmany important aspects of relativistic quantum field theories and quantum\ngravity in condensed matter. Here we discuss a topological Lifshitz transition\nof the effective quantum vacuum in which the determinant of the tetrad field\nchanges sign through a crossing to a vacuum state with a degenerate fermionic\nmetric. Such a transition is realized in polar distorted superfluid $^3$He-A in\nterms of the effective tetrad fields emerging in the vicinity of the superfluid\ngap nodes: the tetrads of the Weyl points in the chiral A-phase of $^3$He and\nthe degenerate tetrad in the vicinity of a Dirac nodal line in the polar phase\nof $^3$He. The continuous phase transition from the $A$-phase to the polar\nphase, i.e. in the transition from the Weyl nodes to the Dirac nodal line and\nback, allows one to follow the behavior of the fermionic and bosonic effective\nactions when the sign of the tetrad determinant changes, and the effective\nchiral space-time transforms to anti-chiral \"anti-spacetime\". This condensed\nmatter realization demonstrates that while the original fermionic action is\nanalytic across the transition, the effective action for the orbital degrees of\nfreedom (pseudo-EM) fields and gravity have non-analytic behavior. In\nparticular, the action for the pseudo-EM field in the vacuum with Weyl fermions\n(A-phase) contains the modulus of the tetrad determinant. In the vacuum with\nthe degenerate metric (polar phase) the nodal line is effectively a family of\n$2+1$d Dirac fermion patches, which leads to a non-analytic $(B^2-E^2)^{3/4}$\nQED action in the vicinity of the Dirac line.\n", "title": "Dimensional crossover of effective orbital dynamics in polar distorted 3He-A: Transitions to anti-spacetime" }
null
null
null
null
true
null
6763
null
Default
null
null
null
{ "abstract": " We present low-frequency spectral energy distributions of 60 known radio\npulsars observed with the Murchison Widefield Array (MWA) telescope. We\nsearched the GaLactic and Extragalactic All-sky MWA (GLEAM) survey images for\n200-MHz continuum radio emission at the position of all pulsars in the ATNF\npulsar catalogue. For the 60 confirmed detections we have measured flux\ndensities in 20 x 8 MHz bands between 72 and 231 MHz. We compare our results to\nexisting measurements and show that the MWA flux densities are in good\nagreement.\n", "title": "Low frequency spectral energy distributions of radio pulsars detected with the Murchison Widefield Array" }
null
null
null
null
true
null
6764
null
Default
null
null
null
{ "abstract": " A scalable framework is developed to allocate radio resources across a large\nnumber of densely deployed small cells with given traffic statistics on a slow\ntimescale. Joint user association and spectrum allocation is first formulated\nas a convex optimization problem by dividing the spectrum among all possible\ntransmission patterns of active access points (APs). To improve scalability\nwith the number of APs, the problem is reformulated using local patterns of\ninterfering APs. To maintain global consistency among local patterns,\ninter-cluster interaction is characterized as hyper-edges in a hyper-graph with\nnodes corresponding to neighborhoods of APs. A scalable solution is obtained by\niteratively solving a convex optimization problem for bandwidth allocation with\nreduced complexity and constructing a global spectrum allocation using\nhyper-graph coloring. Numerical results demonstrate the proposed solution for a\nnetwork with 100 APs and several hundred user equipments. For a given quality\nof service (QoS), the proposed scheme can increase the network capacity several\nfold compared to assigning each user to the strongest AP with full-spectrum\nreuse.\n", "title": "Scalable Spectrum Allocation and User Association in Networks with Many Small Cells" }
null
null
null
null
true
null
6765
null
Default
null
null
null
{ "abstract": " We study the Kepler metrics on Kepler manifolds from the point of view of\nSasakian geometry and Hessian geometry. This establishes a link between the\nproblem of classical gravity and the modern geometric methods in the study of\nAdS/CFT correspondence in string theory.\n", "title": "On Geometry and Symmetry of Kepler Systems. I" }
null
null
null
null
true
null
6766
null
Default
null
null
null
{ "abstract": " Collisions with background gas can perturb the transition frequency of\ntrapped ions in an optical atomic clock. We develop a non-perturbative\nframework based on a quantum channel description of the scattering process, and\nuse it to derive a master equation which leads to a simple analytic expression\nfor the collisional frequency shift. As a demonstration of our method, we\ncalculate the frequency shift of the Sr$^+$ optical atomic clock transition due\nto elastic collisions with helium.\n", "title": "The collisional frequency shift of a trapped-ion optical clock" }
null
null
null
null
true
null
6767
null
Default
null
null
null
{ "abstract": " We show that the Weyl symbol of a Born-Jordan operator is in the same class\nas the Born-Jordan symbol, when Hörmander symbols and certain types of\nmodulation spaces are used as symbol classes. We use these properties to carry\nover continuity and Schatten-von Neumann properties to the Born-Jordan\ncalculus.\n", "title": "Continuity properties for Born-Jordan operators with symbols in Hörmander classes and modulation spaces" }
null
null
null
null
true
null
6768
null
Default
null
null
null
{ "abstract": " Deep learning is a popular machine learning approach which has achieved a lot\nof progress in all traditional machine learning areas. Internet of Things (IoT)\nand Smart City deployments are generating large amounts of time-series sensor\ndata in need of analysis. Applying deep learning to these domains has been an\nimportant topic of research. The Long-Short Term Memory (LSTM) network has been\nproven to be well suited for dealing with and predicting important events with\nlong intervals and delays in the time series. LSTM networks have the ability to\nmaintain long-term memory. In an LSTM network, a stacked LSTM hidden layer also\nmakes it possible to learn a high level temporal feature without the need of\nany fine tuning and preprocessing which would be required by other techniques.\nIn this paper, we construct a long-short term memory (LSTM) recurrent neural\nnetwork structure and use the normal time series training set to build the\nprediction model. We then use the prediction error from the prediction model\nto construct a Gaussian naive Bayes model to detect whether the original sample\nis abnormal. This method is called LSTM-Gauss-NBayes for short. We use three\nreal-world data sets, each of which involves long-term time dependence or\nshort-term time dependence, or even very weak time dependence. The experimental\nresults show that LSTM-Gauss-NBayes is an effective and robust model.\n", "title": "IoT Data Analytics Using Deep Learning" }
null
null
[ "Computer Science" ]
null
true
null
6769
null
Validated
null
null
null
{ "abstract": " We develop a theory of viscous dissipation in one-dimensional\nsingle-component quantum liquids at low temperatures. Such liquids are\ncharacterized by a single viscosity coefficient, the bulk viscosity. We show\nthat for a generic interaction between the constituent particles this viscosity\ndiverges in the zero-temperature limit. In the special case of integrable\nmodels, the viscosity is infinite at any temperature, which can be interpreted\nas a breakdown of the hydrodynamic description. Our consideration is applicable\nto all single-component Galilean-invariant one-dimensional quantum liquids,\nregardless of the statistics of the constituent particles and the interaction\nstrength.\n", "title": "Viscous Dissipation in One-Dimensional Quantum Liquids" }
null
null
null
null
true
null
6770
null
Default
null
null
null
{ "abstract": " We study the existence of homoclinic type solutions for second order\nLagrangian systems of the type $\\ddot{q}(t)-q(t)+a(t)\\nabla G(q(t))=f(t)$,\nwhere $t\\in\\mathbb{R}$, $q\\in\\mathbb{R}^n$, $a\\colon\\mathbb{R}\\to\\mathbb{R}$ is\na continuous positive bounded function, $G\\colon\\mathbb{R}^n\\to\\mathbb{R}$ is a\n$C^1$-smooth potential satisfying the Ambrosetti-Rabinowitz superquadratic\ngrowth condition and $f\\colon\\mathbb{R}\\to\\mathbb{R}^n$ is a continuous bounded\nsquare integrable forcing term. A homoclinic type solution is obtained as limit\nof $2k$-periodic solutions of an approximative sequence of second order\ndifferential equations.\n", "title": "On the existence of homoclinic type solutions of inhomogenous Lagrangian systems" }
null
null
null
null
true
null
6771
null
Default
null
null
null
{ "abstract": " Racetrack memory is a non-volatile memory engineered to provide both high\ndensity and low latency, that is subject to synchronization or shift errors.\nThis paper describes a fast coding solution, in which delimiter bits assist in\nidentifying the type of shift error, and easily implementable graph-based codes\nare used to correct the error, once identified. A code that is able to detect\nand correct double shift errors is described in detail.\n", "title": "Correcting Two Deletions and Insertions in Racetrack Memory" }
null
null
null
null
true
null
6772
null
Default
null
null
null
{ "abstract": " In the spirit of recent work of Lamm, Malchiodi and Micallef in the setting\nof harmonic maps, we identify Yang-Mills connections obtained by approximations\nwith respect to the Yang-Mills {\\alpha}-energy. More specifically, we show that\nfor the SU(2) Hopf fibration over the four sphere, for sufficiently small\n{\\alpha} values the SO(4) invariant ADHM instanton is the unique\n{\\alpha}-critical point which has Yang-Mills {\\alpha}-energy lower than a\nspecific threshold.\n", "title": "Limits of Yang-Mills α-connections" }
null
null
null
null
true
null
6773
null
Default
null
null
null
{ "abstract": " Robust PCA methods are typically batch algorithms which require loading all\nobservations into memory before processing. This makes them inefficient for\nprocessing big data. In this paper, we develop an efficient online robust\nprincipal component analysis method, namely online moving window robust principal\ncomponent analysis (OMWRPCA). Unlike existing algorithms, OMWRPCA can\nsuccessfully track not only slowly changing subspaces but also abruptly changed\nsubspaces. By embedding hypothesis testing into the algorithm, OMWRPCA can\ndetect change points of the underlying subspaces. Extensive simulation studies\ndemonstrate the superior performance of OMWRPCA compared with other\nstate-of-the-art approaches. We also apply the algorithm to real-time background\nsubtraction of surveillance video.\n", "title": "Online Robust Principal Component Analysis with Change Point Detection" }
null
null
null
null
true
null
6774
null
Default
null
null
null
{ "abstract": " In this paper, we present a set of simulation models to more realistically\nmimic the behaviour of users reading messages. We propose a User Behaviour\nModel, where a simulated user reacts to a message with a flexible set of possible\nreactions (e.g. ignore, read, like, save, etc.) and a mobility-based reaction\n(visit a place, run away from danger, etc.). We describe our models and their\nimplementation in OMNeT++. We strongly believe that these models will\nsignificantly contribute to the state of the art of realistically simulating\nopportunistic networks.\n", "title": "Reactive User Behavior and Mobility Models" }
null
null
null
null
true
null
6775
null
Default
null
null
null
{ "abstract": " Technological improvement is the most important cause of long-term economic\ngrowth, but the factors that drive it are still not fully understood. In\nstandard growth models technology is treated in the aggregate, and a main goal\nhas been to understand how growth depends on factors such as knowledge\nproduction. But an economy can also be viewed as a network, in which producers\npurchase goods, convert them to new goods, and sell them to households or other\nproducers. Here we develop a simple theory that shows how the network\nproperties of an economy can amplify the effects of technological improvements\nas they propagate along chains of production. A key property of an industry is\nits output multiplier, which can be understood as the average number of\nproduction steps required to make a good. The model predicts that the output\nmultiplier of an industry predicts future changes in prices, and that the\naverage output multiplier of a country predicts future economic growth. We test\nthese predictions using data from the World Input Output Database and find\nresults in good agreement with the model. The results show how purely\nstructural properties of an economy, that have nothing to do with innovation or\nhuman creativity, can exert an important influence on long-term growth.\n", "title": "How production networks amplify economic growth" }
null
null
null
null
true
null
6776
null
Default
null
null
null
{ "abstract": " Mustaţă has given a conjecture for the graded Betti numbers in the\nminimal free resolution of the ideal of a general set of points on an\nirreducible projective algebraic variety. For surfaces in $\\mathbb P^3$ this\nconjecture has been proven for points on quadric surfaces and on general cubic\nsurfaces. In the latter case, Gorenstein liaison was the main tool. Here we\nprove the conjecture for general quartic surfaces. Gorenstein liaison continues\nto be a central tool, but to prove the existence of our links we make use of\ncertain dimension computations. We also discuss the higher degree case, but now\nthe dimension count does not force the existence of our links.\n", "title": "The Minimal Resolution Conjecture on a general quartic surface in $\\mathbb P^3$" }
null
null
null
null
true
null
6777
null
Default
null
null
null
{ "abstract": " A stochastic minimization method for a real-space wavefunction, $\\Psi({\\bf\nr}_{1},{\\bf r}_{2}\\ldots{\\bf r}_{n})$, constrained to a chosen density,\n$\\rho({\\bf r})$, is developed. It enables the explicit calculation of the Levy\nconstrained search\n$F[\\rho]=\\min_{\\Psi\\rightarrow\\rho}\\langle\\Psi|\\hat{T}+\\hat{V}_{ee}|\\Psi\\rangle$\n(Proc. Natl. Acad. Sci. 76 6062 (1979)), that gives the exact functional of\ndensity functional theory. This general method is illustrated in the evaluation\nof $F[\\rho]$ for two-electron densities in one dimension with a soft-Coulomb\ninteraction. Additionally, procedures are given to determine the first and\nsecond functional derivatives, $\\frac{\\delta F}{\\delta\\rho({\\bf r})}$ and\n$\\frac{\\delta^{2}F}{\\delta\\rho({\\bf r})\\delta\\rho({\\bf r}')}$. For a chosen\nexternal potential, $v({\\bf r})$, the functional and its derivatives are used\nin minimizations only over densities to give the exact energy, $E_{v}$ without\nneeding to solve the Schrödinger equation.\n", "title": "Exact density functional obtained via the Levy constrained search" }
null
null
[ "Physics" ]
null
true
null
6778
null
Validated
null
null
null
{ "abstract": " Template metaprogramming is a popular technique for implementing compile time\nmechanisms for numerical computing. We demonstrate how expression templates can\nbe used for compile time symbolic differentiation of algebraic expressions in\nC++ computer programs. Given a positive integer $N$ and an algebraic function\nof multiple variables, the compiler generates executable code for the $N$th\npartial derivatives of the function. Compile-time simplification of the\nderivative expressions is achieved using recursive templates. A detailed\nanalysis indicates that current C++ compiler technology is already sufficient\nfor practical use of our results, and highlights a number of issues where\nfurther improvements may be desirable.\n", "title": "Compile-Time Symbolic Differentiation Using C++ Expression Templates" }
null
null
null
null
true
null
6779
null
Default
null
null
null
{ "abstract": " We show that a recently proposed neural dependency parser can be improved by\njoint training on multiple languages from the same family. The parser is\nimplemented as a deep neural network whose only input is orthographic\nrepresentations of words. In order to successfully parse, the network has to\ndiscover how linguistically relevant concepts can be inferred from word\nspellings. We analyze the representations of characters and words that are\nlearned by the network to establish which properties of languages were\naccounted for. In particular we show that the parser has approximately learned\nto associate Latin characters with their Cyrillic counterparts and that it can\ngroup Polish and Russian words that have a similar grammatical function.\nFinally, we evaluate the parser on selected languages from the Universal\nDependencies dataset and show that it is competitive with other recently\nproposed state-of-the-art methods, while having a simple structure.\n", "title": "On Multilingual Training of Neural Dependency Parsers" }
null
null
null
null
true
null
6780
null
Default
null
null
null
{ "abstract": " Let $G$ be a finite group and $\\Aut(G)$ the automorphism group of $G$. The\nautocommuting probability of $G$, denoted by $\\Pr(G, \\Aut(G))$, is the\nprobability that a randomly chosen automorphism of $G$ fixes a randomly chosen\nelement of $G$. In this paper, we study $\\Pr(G, \\Aut(G))$ through a\ngeneralization. We obtain a computing formula, several bounds and\ncharacterizations of $G$ through $\\Pr(G, \\Aut(G))$. We conclude the paper by\nshowing that the generalized autocommuting probability of $G$ remains unchanged\nunder autoisoclinism.\n", "title": "Autocommuting probability of a finite group" }
null
null
null
null
true
null
6781
null
Default
null
null
null
{ "abstract": " Systems with tightly-packed inner planets (STIPs) are very common. Chatterjee\n& Tan proposed Inside-Out Planet Formation (IOPF), an in situ formation theory,\nto explain these planets. IOPF involves sequential planet formation from\npebble-rich rings that are fed from the outer disk and trapped at the pressure\nmaximum associated with the dead zone inner boundary (DZIB). Planet masses are\nset by their ability to open a gap and cause the DZIB to retreat outwards. We\npresent models for the disk density and temperature structures that are\nrelevant to the conditions of IOPF. For a wide range of DZIB conditions, we\nevaluate the gap opening masses of planets in these disks that are expected to\nlead to truncation of pebble accretion onto the forming planet. We then\nconsider the evolution of dust and pebbles in the disk, estimating that pebbles\ntypically grow to sizes of a few cm during their radial drift from several tens\nof AU to the inner, $\\lesssim1\\:$AU-scale disk. A large fraction of the\naccretion flux of solids is expected to be in such pebbles. This allows us to\nestimate the timescales for individual planet formation and entire planetary\nsystem formation in the IOPF scenario. We find that to produce realistic STIPs\nwithin reasonable timescales similar to disk lifetimes requires disk accretion\nrates of $\\sim10^{-9}\\:M_\\odot\\:{\\rm yr}^{-1}$ and relatively low viscosity\nconditions in the DZIB region, i.e., Shakura-Sunyaev parameter of\n$\\alpha\\sim10^{-4}$.\n", "title": "Inside-Out Planet Formation. IV. Pebble Evolution and Planet Formation Timescales" }
null
null
null
null
true
null
6782
null
Default
null
null
null
{ "abstract": " Autonomous robot manipulation often involves both estimating the pose of the\nobject to be manipulated and selecting a viable grasp point. Methods using\nRGB-D data have shown great success in solving these problems. However, there\nare situations where cost constraints or the working environment may limit the\nuse of RGB-D sensors. When limited to monocular camera data only, both the\nproblem of object pose estimation and of grasp point selection are very\nchallenging. In the past, research has focused on solving these problems\nseparately. In this work, we introduce a novel method called SilhoNet that\nbridges the gap between these two tasks. We use a Convolutional Neural Network\n(CNN) pipeline that takes in ROI proposals to simultaneously predict an\nintermediate silhouette representation for objects with an associated occlusion\nmask. The 3D pose is then regressed from the predicted silhouettes. Grasp\npoints from a precomputed database are filtered by back-projecting them onto\nthe occlusion mask to find which points are visible in the scene. We show that\nour method achieves better overall performance than the state-of-the art\nPoseCNN network for 3D pose estimation on the YCB-video dataset.\n", "title": "SilhoNet: An RGB Method for 3D Object Pose Estimation and Grasp Planning" }
null
null
null
null
true
null
6783
null
Default
null
null
null
{ "abstract": " The structural properties of LaRu$_2$P$_2$ under external pressure have been\nstudied up to 14 GPa, employing high-energy x-ray diffraction in a\ndiamond-anvil pressure cell. At ambient conditions, LaRu$_2$P$_2$ (I4/mmm) has\na tetragonal structure with a bulk modulus of $B=105(2)$ GPa and exhibits\nsuperconductivity at $T_c= 4.1$ K. With the application of pressure,\nLaRu$_2$P$_2$ undergoes a phase transition to a collapsed tetragonal (cT) state\nwith a bulk modulus of $B=175(5)$ GPa. At the transition, the c-lattice\nparameter exhibits a sharp decrease with a concurrent increase of the a-lattice\nparameter. The cT phase transition in LaRu$_2$P$_2$ is consistent with a second\norder transition, and was found to be temperature dependent, increasing from\n$P=3.9(3)$ GPa at 160 K to $P=4.6(3)$ GPa at 300 K. In total, our data are\nconsistent with the cT transition being near, but slightly above 2 GPa at 5 K.\nFinally, we compare the effect of physical and chemical pressure in the\nRRu$_2$P$_2$ (R = Y, La-Er, Yb) isostructural series of compounds and find them\nto be analogous.\n", "title": "Collapsed Tetragonal Phase Transition in LaRu$_2$P$_2$" }
null
null
null
null
true
null
6784
null
Default
null
null
null
{ "abstract": " Hyperspectral imaging is an important tool in remote sensing, allowing for\naccurate analysis of vast areas. Due to a low spatial resolution, a pixel of a\nhyperspectral image rarely represents a single material, but rather a mixture\nof different spectra. HSU aims at estimating the pure spectra present in the\nscene of interest, referred to as endmembers, and their fractions in each\npixel, referred to as abundances. Today, many HSU algorithms have been\nproposed, based either on a geometrical or statistical model. While most\nmethods assume that the number of endmembers present in the scene is known,\nthere is only little work about estimating this number from the observed data.\nIn this work, we propose a Bayesian nonparametric framework that jointly\nestimates the number of endmembers, the endmembers itself, and their\nabundances, by making use of the Indian Buffet Process as a prior for the\nendmembers. Simulation results and experiments on real data demonstrate the\neffectiveness of the proposed algorithm, yielding results comparable with\nstate-of-the-art methods while being able to reliably infer the number of\nendmembers. In scenarios with strong noise, where other algorithms provide only\npoor results, the proposed approach tends to overestimate the number of\nendmembers slightly. The additional endmembers, however, often simply represent\nnoisy replicas of present endmembers and could easily be merged in a\npost-processing step.\n", "title": "Bayesian Nonparametric Unmixing of Hyperspectral Images" }
null
null
null
null
true
null
6785
null
Default
null
null
null
{ "abstract": " This work addresses the one-dimensional problem of Bloch electrons when they\nare rapidly driven by a homogeneous time-periodic light and linearly coupled to\nvibrational modes. Starting from a generic time-periodic electron-phonon\nHamiltonian, we derive a time-independent effective Hamiltonian that describes\nthe stroboscopic dynamics up to the third order in the high-frequency limit.\nThis yields nonequilibrium corrections to the electron-phonon coupling that are\ncontrollable dynamically via the driving strength. This shows in particular\nthat local Holstein interactions in equilibrium are corrected by nonlocal\nPeierls interactions out of equilibrium, as well as by phonon-assisted hopping\nprocesses that make the dynamical Wannier-Stark localization of Bloch electrons\nimpossible. Subsequently, we revisit the Holstein polaron problem out of\nequilibrium in terms of effective Green functions, and specify explicitly how\nthe binding energy and effective mass of the polaron can be controlled\ndynamically. These tunable properties are reported within the weak- and\nstrong-coupling regimes since both can be visited within the same material when\nvarying the driving strength. This work provides some insight into controllable\nmicroscopic mechanisms that may be involved during the multicycle laser\nirradiations of organic molecular crystals in ultrafast pump-probe experiments,\nalthough it should also be suitable for realizations in shaken optical lattices\nof ultracold atoms.\n", "title": "Dynamical control of electron-phonon interactions with high-frequency light" }
null
null
null
null
true
null
6786
null
Default
null
null
null
{ "abstract": " The thermoelectric voltage developed across an atomic metal junction (i.e., a\nnanostructure in which one or a few atoms connect two metal electrodes) in\nresponse to a temperature difference between the electrodes, results from the\nquantum interference of electrons that pass through the junction multiple times\nafter being scattered by the surrounding defects. Here we report successfully\ntuning this quantum interference and thus controlling the magnitude and sign of\nthe thermoelectric voltage by applying a mechanical force that deforms the\njunction. The observed switching of the thermoelectric voltage is reversible\nand can be cycled many times. Our ab initio and semi-empirical calculations\nelucidate the detailed mechanism by which the quantum interference is tuned. We\nshow that the applied strain alters the quantum phases of electrons passing\nthrough the narrowest part of the junction and hence modifies the electronic\nquantum interference in the device. Tuning the quantum interference causes the\nenergies of electronic transport resonances to shift, which affects the\nthermoelectric voltage. These experimental and theoretical studies reveal that\nAu atomic junctions can be made to exhibit both positive and negative\nthermoelectric voltages on demand, and demonstrate the importance and\ntunability of the quantum interference effect in the atomic-scale metal\nnanostructures.\n", "title": "Controlling the thermoelectric effect by mechanical manipulation of the electron's quantum phase in atomic junctions" }
null
null
null
null
true
null
6787
null
Default
null
null
null
{ "abstract": " Understanding segregation is essential to develop planning tools for building\nmore inclusive cities. Theoretically, segregation at the work place has been\ndescribed as lower compared to residential segregation given the importance of\nskill complementarity among other productive factors shaping the economies of\ncities. This paper tackles segregation during working hours from a dynamical\nperspective by focusing on the movement of urbanites across the city. In\ncontrast to measuring residential patterns of segregation, we used mobile phone\ndata to infer home-work trajectory net- works and apply a community detection\nalgorithm to the example city of Santiago, Chile. We then describe\nqualitatively and quantitatively outlined communities, in terms of their socio\neco- nomic composition. We then evaluate segregation for each of these\ncommunities as the probability that a person from a specific community will\ninteract with a co-worker from the same commu- nity. Finally, we compare these\nresults with simulations where a new work location is set for each real user,\nfollowing the empirical probability distributions of home-work distances and\nangles of direction for each community. Methodologically, this study shows that\nsegregation during working hours for Santiago is unexpectedly high for most of\nthe city with the exception of its central and business district. In fact, the\nonly community that is not statistically segregated corresponds to the downtown\narea of Santiago, described as a zone of encounter and integration across the\ncity.\n", "title": "The time geography of segregation during working hours" }
null
null
null
null
true
null
6788
null
Default
null
null
null
{ "abstract": " Protein gamma-turn prediction is useful in protein function studies and\nexperimental design. Several methods for gamma-turn prediction have been\ndeveloped, but the results were unsatisfactory with Matthew correlation\ncoefficients (MCC) around 0.2-0.4. One reason for the low prediction accuracy\nis the limited capacity of the methods; in particular, the traditional\nmachine-learning methods like SVM may not extract high-level features well to\ndistinguish between turn or non-turn. Hence, it is worthwhile exploring new\nmachine-learning methods for the prediction. A cutting-edge deep neural\nnetwork, named Capsule Network (CapsuleNet), provides a new opportunity for\ngamma-turn prediction. Even when the number of input samples is relatively\nsmall, the capsules from CapsuleNet are very effective to extract high-level\nfeatures for classification tasks. Here, we propose a deep inception capsule\nnetwork for gamma-turn prediction. Its performance on the gamma-turn benchmark\nGT320 achieved an MCC of 0.45, which significantly outperformed the previous\nbest method with an MCC of 0.38. This is the first gamma-turn prediction method\nutilizing deep neural networks. Also, to our knowledge, it is the first\npublished bioinformatics application utilizing capsule network, which will\nprovides a useful example for the community.\n", "title": "Improving Protein Gamma-Turn Prediction Using Inception Capsule Networks" }
null
null
null
null
true
null
6789
null
Default
null
null
null
{ "abstract": " An image is here defined to be a set which is either open or closed and an\nimage transformation is structure preserving in the following sense: It\ncorresponds to an algebra homomorphism for each singly generated algebra. The\nresults extend parts of results of J.F. Aarnes on quasi-measures, -states,\n-homomorphisms, and image-transformations from the setting compact Hausdorff\nspaces to locally compact Hausdorff spaces.\n", "title": "Image transformations on locally compact spaces" }
null
null
null
null
true
null
6790
null
Default
null
null
null
{ "abstract": " We performed electronic structure calculations based on the first-principles\nmany-body theory approach in order to study quasiparticle band gaps, and\noptical absorption spectra of hydrogen-passivated zigzag SiC nanoribbons.\nSelf-energy corrections are included using the GW approximation, and excitonic\neffects are included using the Bethe-Salpeter equation. We have systematically\nstudied nanoribbons that have widths between 0.6 $\\text{nm}$ and 2.2\n$\\text{nm}$. Quasiparticle corrections widened the Kohn-Sham band gaps because\nof enhanced interaction effects, caused by reduced dimensionality. Zigzag SiC\nnanoribbons with widths larger than 1 nm, exhibit half-metallicity at the\nmean-field level. The self-energy corrections increased band gaps\nsubstantially, thereby transforming the half-metallic zigzag SiC nanoribbons,\nto narrow gap spin-polarized semiconductors. Optical absorption spectra of\nthese nanoribbons get dramatically modified upon inclusion of electron-hole\ninteractions, and the narrowest ribbon exhibits strongly bound excitons, with\nbinding energy of 2.1 eV. Thus, the narrowest zigzag SiC nanoribbon has the\npotential to be used in optoelectronic devices operating in the IR region of\nthe spectrum, while the broader ones, exhibiting spin polarization, can be\nutilized in spintronic applications.\n", "title": "From Half-metal to Semiconductor: Electron-correlation Effects in Zigzag SiC Nanoribbons From First Principles" }
null
null
null
null
true
null
6791
null
Default
null
null
null
{ "abstract": " Precision experiments, such as the search for electric dipole moments of\ncharged particles using radiofrequency spin rotators in storage rings, demand\nfor maintaining the exact spin resonance condition for several thousand\nseconds. Synchrotron oscillations in the stored beam modulate the spin tune of\noff-central particles, moving it off the perfect resonance condition set for\ncentral particles on the reference orbit. Here we report an analytic\ndescription of how synchrotron oscillations lead to non-exponential decoherence\nof the radiofrequency resonance driven up-down spin rotations. This\nnon-exponential decoherence is shown to be accompanied by a nontrivial walk of\nthe spin phase. We also comment on sensitivity of the decoherence rate to the\nharmonics of the radiofreqency spin rotator and a possibility to check\npredictions of decoherence-free magic energies.\n", "title": "Non-exponential decoherence of radio-frequency resonance rotation of spin in storage rings" }
null
null
null
null
true
null
6792
null
Default
null
null
null
{ "abstract": " This paper presents an investigation of the relation between some positivity\nof the curvature and the finiteness of fundamental groups in semi-Riemannian\ngeometry. We consider semi-Riemannian submersions $\\pi : (E, g) \\rightarrow (B,\n-g_{B}) $ under the condition with $(B, g_{B})$ Riemannian, the fiber closed\nRiemannian, and the horizontal distribution integrable. Then we prove that, if\nthe lightlike geodesically complete or timelike geodesically complete\nsemi-Riemannian manifold $E$ has some positivity of curvature, then the\nfundamental group of the fiber is finite. Moreover we construct an example of\nsemi-Riemannian submersions with some positivity of curvature, non-integrable\nhorizontal distribution, and the finiteness of the fundamental group of the\nfiber.\n", "title": "On the fundamental group of semi-Riemannian manifolds with positive curvature operator" }
null
null
null
null
true
null
6793
null
Default
null
null
null
{ "abstract": " We propose a constraint-based flow-sensitive static analysis for concurrent\nprograms by iteratively composing thread-modular abstract interpreters via the\nuse of a system of lightweight constraints. Our method is compositional in that\nit first applies sequential abstract interpreters to individual threads and\nthen composes their results. It is flow-sensitive in that the causality\nordering of interferences (flow of data from global writes to reads) is modeled\nby a system of constraints. These interference constraints are lightweight\nsince they only refer to the execution order of program statements as opposed\nto their numerical properties: they can be decided efficiently using an\noff-the-shelf Datalog engine. Our new method has the advantage of being more\naccurate than existing, flow-insensitive, static analyzers while remaining\nscalable and providing the expected soundness and termination guarantees even\nfor programs with unbounded data. We implemented our method and evaluated it on\na large number of benchmarks, demonstrating its effectiveness at increasing the\naccuracy of thread-modular abstract interpretation.\n", "title": "Flow-Sensitive Composition of Thread-Modular Abstract Interpretation" }
null
null
null
null
true
null
6794
null
Default
null
null
null
{ "abstract": " Humanoid robots may require a degree of compliance at the joint level for\nimproving efficiency, shock tolerance, and safe interaction with humans. The\npresence of joint elasticity, however, complexifies the design of balancing and\nwalking controllers. This paper proposes a control framework for extending\nmomentum based controllers developed for stiff actuators to the case of series\nelastic actuators. The key point is to consider the motor velocities as an\nintermediate control input, and then apply high-gain control to stabilise the\ndesired motor velocities achieving momentum control. Simulations carried out on\na model of the robot iCub verify the soundness of the proposed approach.\n", "title": "Momentum Control of Humanoid Robots with Series Elastic Actuators" }
null
null
null
null
true
null
6795
null
Default
null
null
null
{ "abstract": " In this paper, we establish a baseline for object symmetry detection in\ncomplex backgrounds by presenting a new benchmark and an end-to-end deep\nlearning approach, opening up a promising direction for symmetry detection in\nthe wild. The new benchmark, named Sym-PASCAL, spans challenges including\nobject diversity, multi-objects, part-invisibility, and various complex\nbackgrounds that are far beyond those in existing datasets. The proposed\nsymmetry detection approach, named Side-output Residual Network (SRN),\nleverages output Residual Units (RUs) to fit the errors between the object\nsymmetry groundtruth and the outputs of RUs. By stacking RUs in a\ndeep-to-shallow manner, SRN exploits the 'flow' of errors among multiple scales\nto ease the problems of fitting complex outputs with limited layers,\nsuppressing the complex backgrounds, and effectively matching object symmetry\nof different scales. Experimental results validate both the benchmark and its\nchallenging aspects related to realworld images, and the state-of-the-art\nperformance of our symmetry detection approach. The benchmark and the code for\nSRN are publicly available at this https URL.\n", "title": "SRN: Side-output Residual Network for Object Symmetry Detection in the Wild" }
null
null
null
null
true
null
6796
null
Default
null
null
null
{ "abstract": " We consider in this paper the regularity problem for time-optimal\ntrajectories of a single-input control-affine system on a n-dimensional\nmanifold. We prove that, under generic conditions on the drift and the\ncontrolled vector field, any control u associated with an optimal trajectory is\nsmooth out of a countable set of times. More precisely, there exists an integer\nK, only depending on the dimension n, such that the non-smoothness set of u is\nmade of isolated points, accumulations of isolated points, and so on up to K-th\norder iterated accumulations.\n", "title": "Time-Optimal Trajectories of Generic Control-Affine Systems Have at Worst Iterated Fuller Singularities" }
null
null
null
null
true
null
6797
null
Default
null
null
null
{ "abstract": " Very important breakthroughs in data centric deep learning algorithms led to\nimpressive performance in transactional point applications of Artificial\nIntelligence (AI) such as Face Recognition, or EKG classification. With all due\nappreciation, however, knowledge blind data only machine learning algorithms\nhave severe limitations for non-transactional AI applications, such as medical\ndiagnosis beyond the EKG results. Such applications require deeper and broader\nknowledge in their problem solving capabilities, e.g. integrating anatomy and\nphysiology knowledge with EKG results and other patient findings. Following a\nreview and illustrations of such limitations for several real life AI\napplications, we point at ways to overcome them. The proposed Wikipedia for\nSmart Machines initiative aims at building repositories of software structures\nthat represent humanity science & technology knowledge in various parts of\nlife; knowledge that we all learn in schools, universities and during our\nprofessional life. Target readers for these repositories are smart machines;\nnot human. AI software developers will have these Reusable Knowledge structures\nreadily available, hence, the proposed name ReKopedia. Big Data is by now a\nmature technology, it is time to focus on Big Knowledge. Some will be derived\nfrom data, some will be obtained from mankind gigantic repository of knowledge.\nWikipedia for smart machines along with the new Double Deep Learning approach\noffer a paradigm for integrating datacentric deep learning algorithms with\nalgorithms that leverage deep knowledge, e.g. evidential reasoning and\ncausality reasoning. For illustration, a project is described to produce\nReKopedia knowledge modules for medical diagnosis of about 1,000 disorders.\nData is important, but knowledge deep, basic, and commonsense is equally\nimportant.\n", "title": "Wikipedia for Smart Machines and Double Deep Machine Learning" }
null
null
[ "Computer Science" ]
null
true
null
6798
null
Validated
null
null
null
{ "abstract": " The asteroids are primitive solar system bodies which evolve both\ncollisionally and through disruptions due to rapid rotation [1]. These\nprocesses can lead to the formation of binary asteroids [2-4] and to the\nrelease of dust [5], both directly and, in some cases, through uncovering\nfrozen volatiles. In a sub-set of the asteroids called main-belt comets (MBCs),\nthe sublimation of excavated volatiles causes transient comet-like activity\n[6-8]. Torques exerted by sublimation measurably influence the spin rates of\nactive comets [9] and might lead to the splitting of bilobate comet nuclei\n[10]. The kilometer-sized main-belt asteroid 288P (300163) showed activity for\nseveral months around its perihelion 2011 [11], suspected to be sustained by\nthe sublimation of water ice [12] and supported by rapid rotation [13], while\nat least one component rotates slowly with a period of 16 hours [14]. 288P is\npart of a young family of at least 11 asteroids that formed from a ~10km\ndiameter precursor during a shattering collision 7.5 million years ago [15].\nHere we report that 288P is a binary main-belt comet. It is different from the\nknown asteroid binaries for its combination of wide separation, near-equal\ncomponent size, high eccentricity, and comet-like activity. The observations\nalso provide strong support for sublimation as the driver of activity in 288P\nand show that sublimation torques may play a significant role in binary orbit\nevolution.\n", "title": "A binary main belt comet" }
null
null
null
null
true
null
6799
null
Default
null
null
null
{ "abstract": " Bayesian nonparametrics are a class of probabilistic models in which the\nmodel size is inferred from data. A recently developed methodology in this\nfield is small-variance asymptotic analysis, a mathematical technique for\nderiving learning algorithms that capture much of the flexibility of Bayesian\nnonparametric inference algorithms, but are simpler to implement and less\ncomputationally expensive. Past work on small-variance analysis of Bayesian\nnonparametric inference algorithms has exclusively considered batch models\ntrained on a single, static dataset, which are incapable of capturing time\nevolution in the latent structure of the data. This work presents a\nsmall-variance analysis of the maximum a posteriori filtering problem for a\ntemporally varying mixture model with a Markov dependence structure, which\ncaptures temporally evolving clusters within a dataset. Two clustering\nalgorithms result from the analysis: D-Means, an iterative clustering algorithm\nfor linearly separable, spherical clusters; and SD-Means, a spectral clustering\nalgorithm derived from a kernelized, relaxed version of the clustering problem.\nEmpirical results from experiments demonstrate the advantages of using D-Means\nand SD-Means over contemporary clustering algorithms, in terms of both\ncomputational cost and clustering accuracy.\n", "title": "Dynamic Clustering Algorithms via Small-Variance Analysis of Markov Chain Mixture Models" }
null
null
null
null
true
null
6800
null
Default
null
null