Dataset schema (field / dtype):

text              null
inputs            dict ({ "abstract", "title" })
prediction        null
prediction_agent  null
annotation        list
annotation_agent  null
multi_label       bool (1 class)
explanation       null
id                string (lengths 1-5)
metadata          null
status            string (2 classes: Validated, Default)
event_timestamp   null
metrics           null
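The records below all follow the schema above. As a minimal Python sketch of that structure (field values abridged from the first record; the `is_labeled` helper is illustrative, not part of the dataset):

```python
# One record of the dataset, written as a plain dict following the schema
# listed above. The abstract/title strings are abridged for brevity.
record = {
    "text": None,
    "inputs": {
        "abstract": "We have investigated morphology of the lateral surfaces of PbTe ...",
        "title": "Morphology of PbTe crystal surface sputtered by argon plasma ...",
    },
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Physics"],
    "annotation_agent": None,
    "multi_label": True,
    "explanation": None,
    "id": "9601",
    "metadata": None,
    "status": "Validated",
    "event_timestamp": None,
    "metrics": None,
}

def is_labeled(rec):
    """Treat a record as labeled when it is validated and carries annotations."""
    return rec["status"] == "Validated" and bool(rec["annotation"])

print(is_labeled(record))  # True
```

In the dump below, only the "Validated" records carry a non-null `annotation` list, so a filter like this separates the annotated subset from the "Default" (unlabeled) one.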
text: null
{ "abstract": " We have investigated morphology of the lateral surfaces of PbTe crystal\nsamples grown from melt by the Bridgman method sputtered by Ar+ plasma with ion\nenergy of 50-550 eV for 5-50 minutes under Secondary Neutral Mass Spectrometry\n(SNMS) conditions. The sputtered PbTe crystal surface was found to be\nsimultaneously both the source of sputtered material and the efficient\nsubstrate for re-deposition of the sputtered material during the depth\nprofiling. During sputtering PbTe crystal surface is forming the dimple relief.\nTo be redeposited the sputtered Pb and Te form arrays of the microscopic\nsurface structures in the shapes of hillocks, pyramids, cones and others on the\nPbTe crystal sputtered surface. Correlation between the density of re-deposited\nmicroscopic surface structures, their shape, and average size, on the one hand,\nand the energy and duration of sputtering, on the other, is revealed.\n", "title": "Morphology of PbTe crystal surface sputtered by argon plasma under Secondary Neutral Mass Spectrometry conditions" }
prediction: null
prediction_agent: null
annotation: ["Physics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9601
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Highly oscillatory integrals, such as those involving Bessel functions, are\nbest evaluated analytically as much as possible, as numerical errors can be\ndifficult to control. We investigate indefinite integrals involving monomials\nin $x$ multiplying one or two spherical Bessel functions of the first kind\n$j_l(x)$ with integer order $l$. Closed-form solutions are presented where\npossible, and recursion relations are developed that are guaranteed to reduce\nall integrals in this class to closed-form solutions. These results allow for\ndefinite integrals over spherical Bessel functions to be computed quickly and\naccurately. For completeness, we also present our results in terms of ordinary\nBessel functions, but in general, the recursion relations do not terminate.\n", "title": "Indefinite Integrals of Spherical Bessel Functions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9602
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " This volume contains the proceedings of the Fourteenth International Workshop\non the ACL2 Theorem Prover and Its Applications, ACL2 2017, a two-day workshop\nheld in Austin, Texas, USA, on May 22-23, 2017. ACL2 workshops occur at\napproximately 18-month intervals, and they provide a technical forum for\nresearchers to present and discuss improvements and extensions to the theorem\nprover, comparisons of ACL2 with other systems, and applications of ACL2 in\nformal verification.\nACL2 is a state-of-the-art automated reasoning system that has been\nsuccessfully applied in academia, government, and industry for specification\nand verification of computing systems and in teaching computer science courses.\nBoyer, Kaufmann, and Moore were awarded the 2005 ACM Software System Award for\ntheir work on ACL2 and the other theorem provers in the Boyer-Moore\ntheorem-prover family.\nThe proceedings of ACL2 2017 include the seven technical papers and two\nextended abstracts that were presented at the workshop. Each submission\nreceived two or three reviews. The workshop also included three invited talks:\n\"Using Mechanized Mathematics in an Organization with a Simulation-Based\nMentality\", by Glenn Henry of Centaur Technology, Inc.; \"Formal Verification of\nFinancial Algorithms, Progress and Prospects\", by Grant Passmore of Aesthetic\nIntegration; and \"Verifying Oracle's SPARC Processors with ACL2\" by Greg\nGrohoski of Oracle. The workshop also included several rump sessions discussing\nongoing research and the use of ACL2 within industry.\n", "title": "Proceedings 14th International Workshop on the ACL2 Theorem Prover and its Applications" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9603
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " This paper considers mean field games in a multi-agent Markov decision\nprocess (MDP) framework. Each player has a continuum state and binary action.\nBy active control, a player can bring its state to a resetting point. All\nplayers are coupled through their cost functions. The structural property of\nthe individual strategies is characterized in terms of threshold policies when\nthe mean field game admits a solution. We further introduce a stationary\nequation system of the mean field game and analyze uniqueness of its solution\nunder positive externalities.\n", "title": "Mean Field Stochastic Games with Binary Action Spaces and Monotone Costs" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9604
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We develop a framework for downlink heterogeneous cellular networks with\nline-of-sight (LoS) and non-line-of-sight (NLoS) transmissions. Using\nstochastic geometry, we derive tight approximation of achievable downlink rate\nthat enables us to compare the performance between densifying small cells and\nexpanding BS antenna arrays. Interestingly, we find that adding small cells\ninto the network improves the achievable rate much faster than expanding\nantenna arrays at the macro BS. However, when the small cell density exceeds a\ncritical threshold, the spacial densification will lose its benefits and\nfurther impair the network capacity. To this end, we present the optimal small\ncell density that maximizes the rate as practical deployment guidance. In\ncontrast, expanding macro BS antenna array can always benefit the capacity\nuntil an upper bound caused by pilot contamination, and this bound also\nsurpasses the peak rate obtained from deployment of small cells. Furthermore,\nwe find that allocating part of antennas to distributed small cell BSs works\nbetter than centralizing all antennas at the macro BS, and the optimal\nallocation proportion is also given for practical configuration reference. In\nsummary, this work provides a further understanding on how to leverage small\ncells and massive MIMO in future heterogeneous cellular networks deployment.\n", "title": "Heterogeneous Cellular Networks with LoS and NLoS Transmissions--The Role of Massive MIMO and Small Cells" }
prediction: null
prediction_agent: null
annotation: ["Computer Science", "Mathematics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9605
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Gould's Belt is a flat local system composed of young OB stars, molecular\nclouds and neutral hydrogen within 500 pc from the Sun. It is inclined about 20\ndegrees to the galactic plane and its velocity field significantly deviates\nfrom rotation around the distant center of the Milky Way. We discuss possible\nmodels of its origin: free expansion from a point or from a ring, expansion of\na shell, or a collision of a high velocity cloud with the plane of the Milky\nWay. Currently, no convincing model exists. Similar structures are identified\nin HI and CO distribution in our and other nearby galaxies.\n", "title": "Gould's Belt: Local Large Scale Structure in the Milky Way" }
prediction: null
prediction_agent: null
annotation: ["Physics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9606
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " In recent years, correntropy and its applications in machine learning have\nbeen drawing continuous attention owing to its merits in dealing with\nnon-Gaussian noise and outliers. However, theoretical understanding of\ncorrentropy, especially in the statistical learning context, is still limited.\nIn this study, within the statistical learning framework, we investigate\ncorrentropy based regression in the presence of non-Gaussian noise or outliers.\nMotivated by the practical way of generating non-Gaussian noise or outliers, we\nintroduce mixture of symmetric stable noise, which include Gaussian noise,\nCauchy noise, and their mixture as special cases, to model non-Gaussian noise\nor outliers. We demonstrate that under the mixture of symmetric stable noise\nassumption, correntropy based regression can learn the conditional mean\nfunction or the conditional median function well without resorting to the\nfinite-variance or even the finite first-order moment condition on the noise.\nIn particular, for the above two cases, we establish asymptotic optimal\nlearning rates for correntropy based regression estimators that are\nasymptotically of type $\\mathcal{O}(n^{-1})$. These results justify the\neffectiveness of the correntropy based regression estimators in dealing with\noutliers as well as non-Gaussian noise. We believe that the present study\ncompletes our understanding towards correntropy based regression from a\nstatistical learning viewpoint, and may also shed some light on robust\nstatistical learning for regression.\n", "title": "Learning with Correntropy-induced Losses for Regression with Mixture of Symmetric Stable Noise" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9607
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Evidence of surface magnetism is now observed on an increasing number of cool\nstars. The detailed manner by which dynamo-generated magnetic fields giving\nrise to starspots traverse the convection zone still remains unclear. Some\ninsight into this flux emergence mechanism has been gained by assuming bundles\nof magnetic field can be represented by idealized thin flux tubes (TFTs). Weber\n& Browning (2016) have recently investigated how individual flux tubes might\nevolve in a 0.3 solar-mass M dwarf by effectively embedding TFTs in\ntime-dependent flows representative of a fully convective star. We expand upon\nthis work by initiating flux tubes at various depths in the upper 50-75% of the\nstar in order to sample the differing convective flow pattern and differential\nrotation across this region. Specifically, we comment on the role of\ndifferential rotation and time-varying flows in both the suppression and\npromotion of the magnetic flux emergence process.\n", "title": "The Suppression and Promotion of Magnetic Flux Emergence in Fully Convective Stars" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9608
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We consider content delivery over fading broadcast channels. A server wants\nto transmit K files to K users, each equipped with a cache of finite size.\nUsing the coded caching scheme of Maddah-Ali and Niesen, we design an\nopportunistic delivery scheme where the long-term sum content delivery rate\nscales with K the number of users in the system. The proposed delivery scheme\ncombines superposition coding together with appropriate power allocation across\nsub-files intended to different subsets of users. We analyze the long-term\naverage sum content delivery rate achieved by two special cases of our scheme:\na) a selection scheme that chooses the subset of users with the largest\nweighted rate, and b) a baseline scheme that transmits to K users using the\nscheme of Maddah-Ali and Niesen. We prove that coded caching with appropriate\nuser selection is scalable since it yields a linear increase of the average sum\ncontent delivery rate.\n", "title": "Opportunistic Content Delivery in Fading Broadcast Channels" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9609
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " The dynamics of nonlinear conservation laws have long posed fascinating\nproblems. With the introduction of some nonlinearity, e.g. Burgers' equation,\ndiscontinuous behavior in the solutions is exhibited, even for smooth initial\ndata. The introduction of randomness in any of several forms into the initial\ncondition makes the problem even more interesting. We present a broad spectrum\nof results from a number of works, both deterministic and random, to provide a\ndiverse introduction to some of the methods of analysis for conservation laws.\nSome of the deep theorems are applied to discrete examples and illuminated\nusing diagrams.\n", "title": "Conservation Laws With Random and Deterministic Data" }
prediction: null
prediction_agent: null
annotation: ["Mathematics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9610
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " The combined all-electron and two-step approach is applied to calculate the\nmolecular parameters which are required to interpret the ongoing experiment to\nsearch for the effects of manifestation of the T,P-odd fundamental interactions\nin the HfF$^+$ cation by Cornell/Ye group [Science 342, 1220 (2013); J. Mol.\nSpectrosc. 300, 12 (2014)]. The effective electric field that is required to\ninterpret the experiment in terms of the electron electric dipole moment is\nfound to be 22.5 GV/cm. In Ref. [Phys. Rev. D 89, 056006 (2014)] it was shown\nthat another source of T,P-odd interaction, the scalar-pseudoscalar\nnucleus-electron interaction with the dimensionless strength constant $k_{T,P}$\ncan dominate over the direct contribution from the electron EDM within the\nstandard model and some of its extensions. Therefore, for the comprehensive and\ncorrect interpretation of the HfF$^+$ experiment one should also know the\nmolecular parameter $W_{T,P}$ the value of which is reported here to be 20.1\nkHz.\n", "title": "Theoretical study of HfF$^+$ cation to search for the T,P-odd interactions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9611
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " In the adaptive information gathering problem, a policy is required to select\nan informative sensing location using the history of measurements acquired thus\nfar. While there is an extensive amount of prior work investigating effective\npractical approximations using variants of Shannon's entropy, the efficacy of\nsuch policies heavily depends on the geometric distribution of objects in the\nworld. On the other hand, the principled approach of employing online POMDP\nsolvers is rendered impractical by the need to explicitly sample online from a\nposterior distribution of world maps.\nWe present a novel data-driven imitation learning framework to efficiently\ntrain information gathering policies. The policy imitates a clairvoyant oracle\n- an oracle that at train time has full knowledge about the world map and can\ncompute maximally informative sensing locations. We analyze the learnt policy\nby showing that offline imitation of a clairvoyant oracle is implicitly\nequivalent to online oracle execution in conjunction with posterior sampling.\nThis observation allows us to obtain powerful near-optimality guarantees for\ninformation gathering problems possessing an adaptive sub-modularity property.\nAs demonstrated on a spectrum of 2D and 3D exploration problems, the trained\npolicies enjoy the best of both worlds - they adapt to different world map\ndistributions while being computationally inexpensive to evaluate.\n", "title": "Adaptive Information Gathering via Imitation Learning" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9612
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We report an experimental and numerical demonstration of dispersive\nrarefaction shocks (DRS) in a 3D-printed soft chain of hollow elliptical\ncylinders. We find that, in contrast to conventional nonlinear waves, these DRS\nhave their lower amplitude components travel faster, while the higher amplitude\nones propagate slower. This results in the backward-tilted shape of the front\nof the wave (the rarefaction segment) and the breakage of wave tails into a\nmodulated waveform (the dispersive shock segment). Examining the DRS under\nvarious impact conditions, we find the counter-intuitive feature that the\nhigher striker velocity causes the slower propagation of the DRS. These unique\nfeatures can be useful for mitigating impact controllably and efficiently\nwithout relying on material damping or plasticity effects.\n", "title": "Demonstration of dispersive rarefaction shocks in hollow elliptical cylinder chains" }
prediction: null
prediction_agent: null
annotation: ["Physics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9613
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Arctic coastal morphology is governed by multiple factors, many of which are\naffected by climatological changes. As the season length for shorefast ice\ndecreases and temperatures warm permafrost soils, coastlines are more\nsusceptible to erosion from storm waves. Such coastal erosion is a concern,\nsince the majority of the population centers and infrastructure in the Arctic\nare located near the coasts. Stakeholders and decision makers increasingly need\nmodels capable of scenario-based predictions to assess and mitigate the effects\nof coastal morphology on infrastructure and land use. Our research uses\nGaussian process models to forecast Arctic coastal erosion along the Beaufort\nSea near Drew Point, AK. Gaussian process regression is a data-driven modeling\nmethodology capable of extracting patterns and trends from data-sparse\nenvironments such as remote Arctic coastlines. To train our model, we use\nannual coastline positions and near-shore summer temperature averages from\nexisting datasets and extend these data by extracting additional coastlines\nfrom satellite imagery. We combine our calibrated models with future climate\nmodels to generate a range of plausible future erosion scenarios. Our results\nshow that the Gaussian process methodology substantially improves yearly\npredictions compared to linear and nonlinear least squares methods, and is\ncapable of generating detailed forecasts suitable for use by decision makers.\n", "title": "Gaussian Process Regression for Arctic Coastal Erosion Forecasting" }
prediction: null
prediction_agent: null
annotation: ["Physics", "Statistics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9614
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " In classification problems, sampling bias between training data and testing\ndata is critical to the ranking performance of classification scores. Such bias\ncan be both unintentionally introduced by data collection and intentionally\nintroduced by the algorithm, such as under-sampling or weighting techniques\napplied to imbalanced data. When such sampling bias exists, using the raw\nclassification score to rank observations in the testing data can lead to\nsuboptimal results. In this paper, I investigate the optimal calibration\nstrategy in general settings, and develop a practical solution for one specific\nsampling bias case, where the sampling bias is introduced by stratified\nsampling. The optimal solution is developed by analytically solving the problem\nof optimizing the ROC curve. For practical data, I propose a ranking algorithm\nfor general classification models with stratified data. Numerical experiments\ndemonstrate that the proposed algorithm effectively addresses the stratified\nsampling bias issue. Interestingly, the proposed method shows its potential\napplicability in two other machine learning areas: unsupervised learning and\nmodel ensembling, which can be future research topics.\n", "title": "Calibration for Stratified Classification Models" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9615
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Fast-declining Type Ia supernovae (SN Ia) separate into two categories based\non their bolometric and near-infrared (NIR) properties. The peak bolometric\nluminosity ($\\mathrm{L_{max}}$), the phase of the first maximum relative to the\noptical, the NIR peak luminosity and the occurrence of a second maximum in the\nNIR distinguish a group of very faint SN Ia. Fast-declining supernovae show a\nlarge range of peak bolometric luminosities ($\\mathrm{L_{max}}$ differing by up\nto a factor of $\\sim$ 8). All fast-declining SN Ia with $\\mathrm{L_{max}} < 0.3\n\\cdot$ 10$^{43}\\mathrm{erg s}^{-1}$ are spectroscopically classified as\n91bg-like and show only a single NIR peak. SNe with $\\mathrm{L_{max}} > 0.5\n\\cdot$ 10$^{43}\\mathrm{erg s}^{-1}$ appear to smoothly connect to normal SN Ia.\nThe total ejecta mass (M$_{ej}$) values for SNe with enough late time data are\n$\\lesssim$1 $M_{\\odot}$, indicating a sub-Chandrasekhar mass progenitor for\nthese SNe.\n", "title": "Two classes of fast-declining type Ia supernovae" }
prediction: null
prediction_agent: null
annotation: ["Physics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9616
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " The architectures of debris disks encode the history of planet formation in\nthese systems. Studies of debris disks via their spectral energy distributions\n(SEDs) have found infrared excesses arising from cold dust, warm dust, or a\ncombination of the two. The cold outer belts of many systems have been imaged,\nfacilitating their study in great detail. Far less is known about the warm\ncomponents, including the origin of the dust. The regularity of the disk\ntemperatures indicates an underlying structure that may be linked to the water\nsnow line. If the dust is generated from collisions in an exo-asteroid belt,\nthe dust will likely trace the location of the water snow line in the\nprimordial protoplanetary disk where planetesimal growth was enhanced. If\ninstead the warm dust arises from the inward transport from a reservoir of icy\nmaterial farther out in the system, the dust location is expected to be set by\nthe current snow line. We analyze the SEDs of a large sample of debris disks\nwith warm components. We find that warm components in single-component systems\n(those without detectable cold components) follow the primordial snow line\nrather than the current snow line, so they likely arise from exo-asteroid\nbelts. While the locations of many warm components in two-component systems are\nalso consistent with the primordial snow line, there is more diversity among\nthese systems, suggesting additional effects play a role.\n", "title": "What Sets the Radial Locations of Warm Debris Disks?" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9617
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We prove nonlinear modulational instability for both periodic and localized\nperturbations of periodic traveling waves for several dispersive PDEs,\nincluding the KDV type equations (e.g. the Whitham equation, the generalized\nKDV equation, the Benjamin-Ono equation), the nonlinear Schrödinger equation\nand the BBM equation. First, the semigroup estimates required for the nonlinear\nproof are obtained by using the Hamiltonian structures of the linearized PDEs;\nSecond, for KDV type equations the loss of derivative in the nonlinear term is\novercome in two complementary cases: (1) for smooth nonlinear terms and general\ndispersive operators, we construct higher order approximation solutions and\nthen use energy type estimates; (2) for nonlinear terms of low regularity, with\nsome additional assumption on the dispersive operator, we use a bootstrap\nargument to overcome the loss of derivative.\n", "title": "Nonlinear Modulational Instability of Dispersive PDE Models" }
prediction: null
prediction_agent: null
annotation: ["Mathematics"]
annotation_agent: null
multi_label: true
explanation: null
id: 9618
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " We present a new model DrNET that learns disentangled image representations\nfrom video. Our approach leverages the temporal coherence of video and a novel\nadversarial loss to learn a representation that factorizes each frame into a\nstationary part and a temporally varying component. The disentangled\nrepresentation can be used for a range of tasks. For example, applying a\nstandard LSTM to the time-vary components enables prediction of future frames.\nWe evaluate our approach on a range of synthetic and real videos, demonstrating\nthe ability to coherently generate hundreds of steps into the future.\n", "title": "Unsupervised Learning of Disentangled Representations from Video" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9619
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Photodissociation of a molecule produces a spatial distribution of\nphotofragments determined by the molecular structure and the characteristics of\nthe dissociating light. Performing this basic chemical reaction at ultracold\ntemperatures allows its quantum mechanical features to dominate. In this\nregime, weak applied fields can be used to control the reaction. Here, we\nphotodissociate ultracold diatomic strontium in magnetic fields below 10 G and\nobserve striking changes in photofragment angular distributions. The\nobservations are in excellent qualitative agreement with a multichannel quantum\nchemistry model that includes nonadiabatic effects and predicts strong mixing\nof partial waves in the photofragment energy continuum. The experiment is\nenabled by precise quantum-state control of the molecules.\n", "title": "Control of Ultracold Photodissociation with Magnetic Fields" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9620
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " A new generative adversarial network is developed for joint distribution\nmatching. Distinct from most existing approaches, that only learn conditional\ndistributions, the proposed model aims to learn a joint distribution of\nmultiple random variables (domains). This is achieved by learning to sample\nfrom conditional distributions between the domains, while simultaneously\nlearning to sample from the marginals of each individual domain. The proposed\nframework consists of multiple generators and a single softmax-based critic,\nall jointly trained via adversarial learning. From a simple noise source, the\nproposed framework allows synthesis of draws from the marginals, conditional\ndraws given observations from a subset of random variables, or complete draws\nfrom the full joint distribution. Most examples considered are for joint\nanalysis of two domains, with examples for three domains also presented.\n", "title": "JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9621
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " MapReduce is a programming model used extensively for parallel data\nprocessing in distributed environments. A wide range of algorithms were\nimplemented using MapReduce, from simple tasks like sorting and searching up to\ncomplex clustering and machine learning operations. Many of these\nimplementations are part of services externalized to cloud infrastructures.\nOver the past years, however, many concerns have been raised regarding the\nsecurity guarantees offered in such environments. Some solutions relying on\ncryptography were proposed for countering threats but these typically imply a\nhigh computational overhead. Intel, the largest manufacturer of commodity CPUs,\nrecently introduced SGX (software guard extensions), a set of hardware\ninstructions that support execution of code in an isolated secure environment.\nIn this paper, we explore the use of Intel SGX for providing privacy guarantees\nfor MapReduce operations, and based on our evaluation we conclude that it\nrepresents a viable alternative to a cryptographic mechanism. We present\nresults based on the widely used k-means clustering algorithm, but our\nimplementation can be generalized to other applications that can be expressed\nusing MapReduce model.\n", "title": "A lightweight MapReduce framework for secure processing with SGX" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9622
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We have investigated the formation of a circumstellar wide-orbit gas giant\nplanet in a multiple stellar system. We consider a model of orbital\ncircularization for the core of a giant planet after it is scattered from an\ninner disk region by a more massive planet, which was proposed by Kikuchi et al\n(2014). We extend their model for single star systems to binary (multiple) star\nsystems, by taking into account tidal truncation of the protoplanetary gas disk\nby a binary companion. As an example, we consider a wide-orbit gas giant in a\nhierarchical triple system, HD131399Ab. The best-fit orbit of the planet is\nthat with semimajor axis $\\sim 80$ au and eccentricity $\\sim 0.35$. As the\nbinary separation is $\\sim 350$ au, it is very close to the stability limit,\nwhich is puzzling. With the original core location $\\sim 20$-30 au, the core\n(planet) mass $\\sim 50 M_{\\rm E}$ and the disk truncation radius $\\sim 150$ au,\nour model reproduces the best-fit orbit of HD131399Ab. We find that the orbit\nafter the circularization is usually close to the stability limit against the\nperturbations from the binary companion, because the scattered core accretes\ngas from the truncated disk. Our conclusion can also be applied to wider or\nmore compact binary systems if the separation is not too large and another\nplanet with $> \\sim$ 20-30 Earth masses that scattered the core existed in\ninner region of the system.\n", "title": "Formation of wide-orbit gas giants near the stability limit in multi-stellar systems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9623
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Symmetric nonnegative matrix factorization has found abundant applications in\nvarious domains by providing a symmetric low-rank decomposition of nonnegative\nmatrices. In this paper we propose a Frank-Wolfe (FW) solver to optimize the\nsymmetric nonnegative matrix factorization problem under a simplicial\nconstraint, which has recently been proposed for probabilistic clustering.\nCompared with existing solutions, this algorithm is simple to implement, and\nhas no hyperparameters to be tuned. Building on the recent advances of FW\nalgorithms in nonconvex optimization, we prove an $O(1/\\varepsilon^2)$\nconvergence rate to $\\varepsilon$-approximate KKT points, via a tight bound\n$\\Theta(n^2)$ on the curvature constant, which matches the best known result in\nunconstrained nonconvex setting using gradient methods. Numerical results\ndemonstrate the effectiveness of our algorithm. As a side contribution, we\nconstruct a simple nonsmooth convex problem where the FW algorithm fails to\nconverge to the optimum. This result raises an interesting question about\nnecessary conditions of the success of the FW algorithm on convex problems.\n", "title": "Frank-Wolfe Optimization for Symmetric-NMF under Simplicial Constraint" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9624
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " In this paper, we present a new Light Field representation for efficient\nLight Field processing and rendering called Fourier Disparity Layers (FDL). The\nproposed FDL representation samples the Light Field in the depth (or\nequivalently the disparity) dimension by decomposing the scene as a discrete\nsum of layers. The layers can be constructed from various types of Light Field\ninputs including a set of sub-aperture images, a focal stack, or even a\ncombination of both. From our derivations in the Fourier domain, the layers are\nsimply obtained by a regularized least square regression performed\nindependently at each spatial frequency, which is efficiently parallelized in a\nGPU implementation. Our model is also used to derive a gradient descent based\ncalibration step that estimates the input view positions and an optimal set of\ndisparity values required for the layer construction. Once the layers are\nknown, they can be simply shifted and filtered to produce different viewpoints\nof the scene while controlling the focus and simulating a camera aperture of\narbitrary shape and size. Our implementation in the Fourier domain allows real\ntime Light Field rendering. Finally, direct applications such as view\ninterpolation or extrapolation and denoising are presented and evaluated.\n", "title": "A Fourier Disparity Layer representation for Light Fields" }
prediction: null
prediction_agent: null
annotation: ["Computer Science"]
annotation_agent: null
multi_label: true
explanation: null
id: 9625
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Networks have become the de facto diagram of the Big Data age (try searching\nGoogle Images for [big data AND visualisation] and see). The concept of\nnetworks has become central to many fields of human inquiry and is said to\nrevolutionise everything from medicine to markets to military intelligence.\nWhile the mathematical and analytical capabilities of networks have been\nextensively studied over the years, in this article we argue that the\nstorytelling affordances of networks have been comparatively neglected. In\norder to address this we use multimodal analysis to examine the stories that\nnetworks evoke in a series of journalism articles. We develop a protocol by\nmeans of which narrative meanings can be construed from network imagery and the\ncontext in which it is embedded, and discuss five different kinds of narrative\nreadings of networks, illustrated with analyses of examples from journalism.\nFinally, to support further research in this area, we discuss methodological\nissues that we encountered and suggest directions for future study to advance\nand broaden research around this defining aspect of visual culture after the\ndigital turn.\n", "title": "Narrating Networks" }
null
null
[ "Computer Science" ]
null
true
null
9626
null
Validated
null
null
null
{ "abstract": " We study the problem of policy evaluation and learning from batched\ncontextual bandit data when treatments are continuous, going beyond previous\nwork on discrete treatments. Previous work for discrete treatment/action spaces\nfocuses on inverse probability weighting (IPW) and doubly robust (DR) methods\nthat use a rejection sampling approach for evaluation and the equivalent\nweighted classification problem for learning. In the continuous setting, this\nreduction fails as we would almost surely reject all observations. To tackle\nthe case of continuous treatments, we extend the IPW and DR approaches to the\ncontinuous setting using a kernel function that leverages treatment proximity\nto attenuate discrete rejection. Our policy estimator is consistent and we\ncharacterize the optimal bandwidth. The resulting continuous policy optimizer\n(CPO) approach using our estimator achieves convergent regret and approaches\nthe best-in-class policy for learnable policy classes. We demonstrate that the\nestimator performs well and, in particular, outperforms a discretization-based\nbenchmark. We further study the performance of our policy optimizer in a case\nstudy on personalized dosing based on a dataset of Warfarin patients, their\ncovariates, and final therapeutic doses. Our learned policy outperforms\nbenchmarks and nears the oracle-best linear policy.\n", "title": "Policy Evaluation and Optimization with Continuous Treatments" }
null
null
null
null
true
null
9627
null
Default
null
null
null
{ "abstract": " This work investigates the geometry of a nonconvex reformulation of\nminimizing a general convex loss function $f(X)$ regularized by the matrix\nnuclear norm $\\|X\\|_*$. Nuclear-norm regularized matrix inverse problems are at\nthe heart of many applications in machine learning, signal processing, and\ncontrol. The statistical performance of nuclear norm regularization has been\nstudied extensively in literature using convex analysis techniques. Despite its\noptimal performance, the resulting optimization has high computational\ncomplexity when solved using standard or even tailored fast convex solvers. To\ndevelop faster and more scalable algorithms, we follow the proposal of\nBurer-Monteiro to factor the matrix variable $X$ into the product of two\nsmaller rectangular matrices $X=UV^T$ and also replace the nuclear norm\n$\\|X\\|_*$ with $(\\|U\\|_F^2+\\|V\\|_F^2)/2$. In spite of the nonconvexity of the\nfactored formulation, we prove that when the convex loss function $f(X)$ is\n$(2r,4r)$-restricted well-conditioned, each critical point of the factored\nproblem either corresponds to the optimal solution $X^\\star$ of the original\nconvex optimization or is a strict saddle point where the Hessian matrix has a\nstrictly negative eigenvalue. Such a geometric structure of the factored\nformulation allows many local search algorithms to converge to the global\noptimum with random initializations.\n", "title": "Geometry of Factored Nuclear Norm Regularization" }
null
null
null
null
true
null
9628
null
Default
null
null
null
{ "abstract": " Methods are described that extend fields from reconstructed equilibria to\ninclude scrape-off-layer current through extrapolated parametrized and\nexperimental fits. The extrapolation includes both the effects of the\ntoroidal-field and pressure gradients, which produce scrape-off-layer current\nafter recomputation of the Grad-Shafranov solution. To quantify the degree that\ninclusion of scrape-off-layer current modifies the equilibrium, the\n$\chi$-squared goodness-of-fit parameter is calculated for cases with and\nwithout scrape-off-layer current. The change in $\chi$-squared is found to be\nminor when scrape-off-layer current is included; however, flux surfaces are\nshifted by up to 3 cm. The impact on edge modes of these scrape-off-layer\nmodifications is also found to be small, and the importance of these methods to\nnonlinear computation is discussed.\n", "title": "Effect of Scrape-Off-Layer Current on Reconstructed Tokamak Equilibrium" }
null
null
null
null
true
null
9629
null
Default
null
null
null
{ "abstract": " Let $(L,\\cdot)$ be any loop and let $A(L)$ be a group of automorphisms of\n$(L,\\cdot)$ such that $\\alpha$ and $\\phi$ are elements of $A(L)$. It is shown\nthat, for all $x,y,z\\in L$, the $A(L)$-holomorph $(H,\\circ)=H(L)$ of\n$(L,\\cdot)$ is an Osborn loop if and only if $x\\alpha (yz\\cdot x\\phi^{-1})=\nx\\alpha (yx^\\lambda\\cdot x) \\cdot zx\\phi^{-1}$. Furthermore, it is shown that\nfor all $x\\in L$, $H(L)$ is an Osborn loop if and only if $(L,\\cdot)$ is an\nOsborn loop, $(x\\alpha\\cdot x^{\\rho})x=x\\alpha$, $x(x^{\\lambda}\\cdot\nx\\phi^{-1})=x\\phi^{-1}$ and every pair of automorphisms in $A(L)$ is nuclear\n(i.e. $x\\alpha\\cdot x^{\\rho},x^{\\lambda}\\cdot x\\phi\\in N(L,\\cdot )$). It is\nshown that if $H(L)$ is an Osborn loop, then $A(L,\\cdot)=\n\\mathcal{P}(L,\\cdot)\\cap\\Lambda(L,\\cdot)\\cap\\Phi(L,\\cdot)\\cap\\Psi(L,\\cdot)$ and\nfor any $\\alpha\\in A(L)$, $\\alpha= L_{e\\pi}=R^{-1}_{e\\varrho}$ for some $\\pi\\in\n\\Phi(L,\\cdot)$ and some $\\varrho\\in \\Psi(L,\\cdot)$. Some commutative diagrams\nare deduced by considering isomorphisms among the various groups of regular\nbijections (whose intersection is $A(L)$) and the nucleus of $(L,\\cdot)$.\n", "title": "Holomorphy of Osborn loops" }
null
null
[ "Mathematics" ]
null
true
null
9630
null
Validated
null
null
null
{ "abstract": " Multi-objective recommender systems address the difficult task of\nrecommending items that are relevant to multiple, possibly conflicting,\ncriteria. However, these systems are most often designed to address the\nobjective of one single stakeholder, typically, in online commerce, the\nconsumers whose input and purchasing decisions ultimately determine the success\nof the recommendation systems. In this work, we address the multi-objective,\nmulti-stakeholder recommendation problem involving one or more objective(s)\nper stakeholder. In addition to the consumer stakeholder, we also consider two\nother stakeholders: the suppliers who provide the goods and services for sale\nand the intermediary who is responsible for helping connect consumers to\nsuppliers via its recommendation algorithms. We analyze the multi-objective,\nmulti-stakeholder problem from the point of view of the online marketplace\nintermediary whose objective is to maximize its commission through its\nrecommender system. We define a multi-objective problem relating all three of\nour stakeholders which we solve with a novel learning-to-re-rank approach that\nmakes use of a novel regularization function based on the Kendall tau\ncorrelation metric and its kernel version; given an initial ranking of item\nrecommendations built for the consumer, we aim to re-rank it such that the new\nranking is also optimized for the secondary objectives while staying close to\nthe initial ranking. We evaluate our approach on a real-world dataset of hotel\nrecommendations provided by Expedia where we show the effectiveness of our\napproach against a business-rules oriented baseline model.\n", "title": "A Multi-Objective Learning to re-Rank Approach to Optimize Online Marketplaces for Multiple Stakeholders" }
null
null
null
null
true
null
9631
null
Default
null
null
null
{ "abstract": " Decide Madrid is the civic technology of Madrid City Council which allows\nusers to create and support online petitions. Despite the initial success, the\nplatform is encountering problems with the growth of petition signing because\npetitions are far from the minimum number of supporting votes they must gather.\nPrevious analyses have suggested that this problem is produced by the\ninterface: a paginated list of petitions which applies a non-optimal ranking\nalgorithm. For this reason, we present an interactive system for the discovery\nof topics and petitions. This approach leads us to reflect on the usefulness of\ndata visualization techniques to address relevant societal challenges.\n", "title": "Interactive Discovery System for Direct Democracy" }
null
null
null
null
true
null
9632
null
Default
null
null
null
{ "abstract": " Modeling physiological time-series in ICU is of high clinical importance.\nHowever, data collected within ICU are irregular in time and often contain\nmissing measurements. Since absence of a measure would signify its lack of\nimportance, the missingness is indeed informative and might reflect the\ndecision making by the clinician. Here we propose a deep learning architecture\nthat can effectively handle these challenges for predicting ICU mortality\noutcomes. The model is based on Long Short-Term Memory, and has layered\nattention mechanisms. At the sensing layer, the model decides whether to\nobserve and incorporate parts of the current measurements. At the reasoning\nlayer, evidences across time steps are weighted and combined. The model is\nevaluated on the PhysioNet 2012 dataset showing competitive and interpretable\nresults.\n", "title": "Deep Learning to Attend to Risk in ICU" }
null
null
null
null
true
null
9633
null
Default
null
null
null
{ "abstract": " In this paper we study spectral properties of the Dirichlet-to-Neumann map on\ndifferential forms obtained by a slight modification of the definition due to\nBelishev and Sharafutdinov. The resulting operator $\Lambda$ is shown to be\nself-adjoint on the subspace of coclosed forms and to have purely discrete\nspectrum there. We investigate properties of eigenvalues of $\Lambda$ and prove a\nHersch-Payne-Schiffer type inequality relating products of those eigenvalues to\neigenvalues of the Hodge Laplacian on the boundary. Moreover, non-trivial\neigenvalues of $\Lambda$ are always at least as large as eigenvalues of the\nDirichlet-to-Neumann map defined by Raulot and Savo. Finally, we remark that a\nparticular case of $p$-forms on the boundary of a $(2p+2)$-dimensional manifold\nshares a lot of important properties with the classical Steklov eigenvalue\nproblem on surfaces.\n", "title": "Steklov problem on differential forms" }
null
null
null
null
true
null
9634
null
Default
null
null
null
{ "abstract": " In this work we have used the recent cosmic chronometers data along with the\nlatest estimation of the local Hubble parameter value, $H_0$ at 2.4\\% precision\nas well as the standard dark energy probes, such as the Supernovae Type Ia,\nbaryon acoustic oscillation distance measurements, and cosmic microwave\nbackground measurements (PlanckTT $+$ lowP) to constrain a dark energy model\nwhere the dark energy is allowed to interact with the dark matter. A general\nequation of state of dark energy parametrized by a dimensionless parameter\n`$\\beta$' is utilized. From our analysis, we find that the interaction is\ncompatible with zero within the 1$\\sigma$ confidence limit. We also show that\nthe same evolution history can be reproduced by a small pressure of the dark\nmatter.\n", "title": "Constraining a dark matter and dark energy interaction scenario with a dynamical equation of state" }
null
null
null
null
true
null
9635
null
Default
null
null
null
{ "abstract": " Statistical TTS systems that directly predict the speech waveform have\nrecently reported improvements in synthesis quality. This investigation\nevaluates Amazon's statistical speech waveform synthesis (SSWS) system. An\nin-depth evaluation of SSWS is conducted across a number of domains to better\nunderstand the consistency in quality. The results of this evaluation are\nvalidated by repeating the procedure on a separate group of testers. Finally,\nan analysis of the nature of speech errors of SSWS compared to hybrid unit\nselection synthesis is conducted to identify the strengths and weaknesses of\nSSWS. Having a deeper insight into SSWS allows us to better define the focus of\nfuture work to improve this new technology.\n", "title": "Comprehensive evaluation of statistical speech waveform synthesis" }
null
null
null
null
true
null
9636
null
Default
null
null
null
{ "abstract": " The stability of a complex system generally decreases with increasing system\nsize and interconnectivity, a counterintuitive result of widespread importance\nacross the physical, life, and social sciences. Despite recent interest in the\nrelationship between system properties and stability, the effect of variation\nin the response rate of individual system components remains unconsidered. Here\nI vary the component response rates ($\\boldsymbol{\\gamma}$) of randomly\ngenerated complex systems. I show that when component response rates vary, the\npotential for system stability is markedly increased. Variation in\n$\\boldsymbol{\\gamma}$ becomes increasingly important as system size increases,\nsuch that the largest stable complex systems would be unstable if not for\n$\\boldsymbol{Var(\\gamma)}$. My results reveal a previously unconsidered driver\nof system stability that is likely to be pervasive across all complex systems.\n", "title": "Component response rate variation drives stability in large complex systems" }
null
null
null
null
true
null
9637
null
Default
null
null
null
{ "abstract": " Time-resolved ultrafast x-ray scattering from photo-excited matter is an\nemerging method to image ultrafast dynamics in matter with atomic-scale spatial\nand temporal resolutions. For a correct and rigorous understanding of current\nand upcoming imaging experiments, we present the theory of time-resolved x-ray\nscattering from an incoherent electronic mixture using quantum electrodynamical\ntheory of light-matter interaction. We show that the total scattering signal is\nan incoherent sum of the individual scattering signals arising from different\nelectronic states and therefore heterodyning of the individual signals is not\npossible for an ensemble of gas-phase photo-excited molecules. We scrutinize\nthe information encoded in the total signal for the experimentally important\nsituation when pulse duration and coherence time of the x-ray pulse are short\nin comparison to the timescale of the vibrational motion and long in comparison\nto the timescale of the electronic motion, respectively. Finally, we show that\nin the case of an electronically excited crystal the total scattering signal\nimprints the interference of the individual scattering amplitudes associated\nwith different electronic states and heterodyning is possible.\n", "title": "Time-resolved ultrafast x-ray scattering from an incoherent electronic mixture" }
null
null
null
null
true
null
9638
null
Default
null
null
null
{ "abstract": " Critical overdensity $\\delta_c$ is a key concept in estimating the number\ncount of halos for different redshift and halo-mass bins, and therefore, it is\na powerful tool to compare cosmological models to observations. There are\ncurrently two different prescriptions in the literature for its calculation,\nnamely, the differential-radius and the constant-infinity methods. In this work\nwe show that the latter yields precise results {\\it only} if we are careful in\nthe definition of the so-called numerical infinities. Although the subtleties\nwe point out are crucial ingredients for an accurate determination of\n$\\delta_c$ both in general relativity and in any other gravity theory, we focus\non $f(R)$ modified-gravity models in the metric approach; in particular, we use\nthe so-called large ($F=1/3$) and small-field ($F=0$) limits. For both of them,\nwe calculate the relative errors (between our method and the others) in the\ncritical density $\\delta_c$, in the comoving number density of halos per\nlogarithmic mass interval $n_{\\ln M}$ and in the number of clusters at a given\nredshift in a given mass bin $N_{\\rm bin}$, as functions of the redshift. We\nhave also derived an analytical expression for the density contrast in the\nlinear regime as a function of the collapse redshift $z_c$ and $\\Omega_{m0}$\nfor any $F$.\n", "title": "Calculation of the critical overdensity in the spherical-collapse approximation" }
null
null
[ "Physics" ]
null
true
null
9639
null
Validated
null
null
null
{ "abstract": " We consider finite point subsets (distributions) in compact metric spaces. In\nthe case of general rectifiable metric spaces, non-trivial bounds for sums of\ndistances between points of distributions and for discrepancies of\ndistributions in metric balls are given (Theorem 1.1). We generalize\nStolarsky's invariance principle to distance-invariant spaces (Theorem 2.1).\nFor arbitrary metric spaces, we prove a probabilistic invariance principle\n(Theorem 3.1). Furthermore, we construct equal-measure partitions of general\nrectifiable compact metric spaces into parts of small average diameter (Theorem\n4.1). This version of the paper will be published in Mathematika.\n", "title": "Point distributions in compact metric spaces, II" }
null
null
null
null
true
null
9640
null
Default
null
null
null
{ "abstract": " This paper is concerned with the partitioned iterative formulation to\nsimulate the fluid-structure interaction of a nonlinear multibody system in an\nincompressible turbulent flow. The proposed formulation relies on a\nthree-dimensional (3D) incompressible turbulent flow solver, a nonlinear\nmonolithic elastic structural solver for constrained flexible multibody system\nand the nonlinear iterative force correction scheme for coupling of the\nturbulent fluid-flexible multibody system with nonmatching interface meshes.\nWhile the fluid equations are discretized using a stabilized Petrov-Galerkin\nformulation in space and the generalized-$\alpha$ updates in time, the\nmultibody system utilizes a discontinuous space-time Galerkin finite element\nmethod. We address two key challenges in the present formulation. Firstly, the\ncoupling of the incompressible turbulent flow with a system of nonlinear\nelastic bodies described in a co-rotated frame. Secondly, the projection of the\ntractions and displacements across the nonmatching 3D fluid surface elements\nand the one-dimensional line elements for the flexible multibody system in a\nconservative manner. Through the nonlinear iterative correction and the\nconservative projection, the developed fluid-flexible multibody interaction\nsolver is stable for problems involving strong inertial effects between the\nfluid-flexible multibody system and the coupled interactions among each\nmultibody component. The accuracy of the proposed coupled finite element\nframework is validated against the available experimental data for a long\nflexible cylinder undergoing vortex-induced vibration in a uniform current flow\ncondition. Finally, a practical application of the proposed framework is\ndemonstrated by simulating the flow-induced vibration of a realistic offshore\nfloating platform connected to a long riser and an elastic mooring system.\n", "title": "A Variational Projection Scheme for Nonmatching Surface-to-Line Coupling between 3D Flexible Multibody System and Incompressible Turbulent Flow" }
null
null
null
null
true
null
9641
null
Default
null
null
null
{ "abstract": " The Hubble Catalog of Variables (HCV) is a 3 year ESA funded project that\naims to develop a set of algorithms to identify variables among the sources\nincluded in the Hubble Source Catalog (HSC) and produce the HCV. We will\nprocess all HSC sources with more than a predefined number of measurements in a\nsingle filter/instrument combination and compute a range of lightcurve features\nto determine the variability status of each source. At the end of the project,\nthe first release of the Hubble Catalog of Variables will be made available at\nthe Mikulski Archive for Space Telescopes (MAST) and the ESA Science Archives.\nThe variability detection pipeline will be implemented at the Space Telescope\nScience Institute (STScI) so that updated versions of the HCV may be created\nfollowing the future releases of the HSC.\n", "title": "The Hubble Catalog of Variables" }
null
null
null
null
true
null
9642
null
Default
null
null
null
{ "abstract": " The design of general-purpose processors relies heavily on a workload\ngathering step in which representative programs are collected from various\napplication domains. Processor performance, when running the workload set, is\nprofiled using simulators that model the targeted processor architecture.\nHowever, simulating the entire workload set is prohibitively time-consuming,\nwhich precludes considering a large number of programs. To reduce simulation\ntime, several techniques in the literature have exploited the internal program\nrepetitiveness to extract and execute only representative code segments.\nExisting solutions are based on reducing cross-program computational\nredundancy or on eliminating internal-program redundancy to decrease execution\ntime. In this work, we propose an orthogonal and complementary loop-centric\nmethodology that targets loop-dominant programs by exploiting internal-program\ncharacteristics to reduce cross-program computational redundancy. The approach\nemploys a newly developed framework that extracts and analyzes core loops\nwithin workloads. The collected characteristics model memory behavior,\ncomputational complexity, and data structures of a program, and are used to\nconstruct a signature vector for each program. From these vectors,\ncross-workload similarity metrics are extracted, which are processed by a novel\nheuristic to exclude similar programs and reduce redundancy within the set.\nFinally, a reverse engineering approach that synthesizes executable\nmicro-benchmarks having the same instruction mix as the loops in the original\nworkload is introduced. A tool that automates the flow steps of the proposed\nmethodology is developed. Simulation results demonstrate that applying the\nproposed methodology to a set of workloads reduces the set size by half, while\npreserving the main characterizations of the initial workloads.\n", "title": "A Loop-Based Methodology for Reducing Computational Redundancy in Workload Sets" }
null
null
null
null
true
null
9643
null
Default
null
null
null
{ "abstract": " Most traditional video summarization methods are designed to generate\neffective summaries for single-view videos, and thus they cannot fully exploit\nthe complicated intra- and inter-view correlations in summarizing multi-view\nvideos in a camera network. In this paper, with the aim of summarizing\nmulti-view videos, we introduce a novel unsupervised framework via joint\nembedding and sparse representative selection. The objective function is\ntwo-fold. The first is to capture the multi-view correlations via an embedding,\nwhich helps in extracting a diverse set of representatives. The second is to\nuse an $\ell_{2,1}$-norm to model the sparsity while selecting representative shots for\nthe summary. We propose to jointly optimize both of the objectives, such that\nembedding can not only characterize the correlations, but also indicate the\nrequirements of sparse representative selection. We present an efficient\nalternating algorithm based on half-quadratic minimization to solve the\nproposed non-smooth and non-convex objective with convergence analysis. A key\nadvantage of the proposed approach with respect to the state-of-the-art is that\nit can summarize multi-view videos without assuming any prior\ncorrespondences/alignment between them, e.g., uncalibrated camera networks.\nRigorous experiments on several multi-view datasets demonstrate that our\napproach clearly outperforms the state-of-the-art methods.\n", "title": "Multi-View Surveillance Video Summarization via Joint Embedding and Sparse Optimization" }
null
null
null
null
true
null
9644
null
Default
null
null
null
{ "abstract": " Federated clouds raise a variety of challenges for managing identity,\nresource access, naming, connectivity, and object access control. This paper\nshows how to address these challenges in a comprehensive and uniform way using\na data-centric approach. The foundation of our approach is a trust logic in\nwhich participants issue authenticated statements about principals, objects,\nattributes, and relationships in a logic language, with reasoning based on\ndeclarative policy rules. We show how to use the logic to implement a trust\ninfrastructure for cloud federation that extends the model of NSF GENI, a\nfederated IaaS testbed. It captures shared identity management, GENI authority\nservices, cross-site interconnection using L2 circuits, and a naming and access\ncontrol system similar to AWS Identity and Access Management (IAM), but\nextended to a federated system without central control.\n", "title": "A Logical Approach to Cloud Federation" }
null
null
null
null
true
null
9645
null
Default
null
null
null
{ "abstract": " It is well-established by cognitive neuroscience that human perception of\nobjects constitutes a complex process, where object appearance information is\ncombined with evidence about the so-called object \"affordances\", namely the\ntypes of actions that humans typically perform when interacting with them. This\nfact has recently motivated the \"sensorimotor\" approach to the challenging task\nof automatic object recognition, where both information sources are fused to\nimprove robustness. In this work, the aforementioned paradigm is adopted,\nsurpassing current limitations of sensorimotor object recognition research.\nSpecifically, the deep learning paradigm is introduced to the problem for the\nfirst time, developing a number of novel neuro-biologically and\nneuro-physiologically inspired architectures that utilize state-of-the-art\nneural networks for fusing the available information sources in multiple ways.\nThe proposed methods are evaluated using a large RGB-D corpus, which is\nspecifically collected for the task of sensorimotor object recognition and is\nmade publicly available. Experimental results demonstrate the utility of\naffordance information to object recognition, achieving an up to 29% relative\nerror reduction by its inclusion.\n", "title": "Deep Affordance-grounded Sensorimotor Object Recognition" }
null
null
null
null
true
null
9646
null
Default
null
null
null
{ "abstract": " Drying of colloidal droplets on solid, rigid substrates is associated with a\ncapillary pressure developing within the droplet. In due course of time, the\ncapillary pressure builds up due to droplet evaporation resulting in the\nformation of a colloidal thin film that is prone to crack formation. In this\nstudy, we show that introducing a minimal amount of nematic liquid crystal\n(NLC) can completely suppress the crack formation. The mechanism behind the\ncurbing of the crack formation may be attributed to the capillary\nstress-absorbing cushion provided by the elastic arrangements of the liquid\ncrystal at the substrate-droplet interface. Cracks and allied surface\ninstabilities are detrimental to the quality of the final product like surface\ncoatings, and therefore, its suppression by an external inert additive is a\npromising technique that will be of immense importance for several industrial\napplications. We believe this fundamental investigation of crack suppression\nwill open up an entire avenue of applications for the NLCs in the field of\ncoatings, broadening its already existing wide range of benefits.\n", "title": "Liquid crystal induced elasto-capillary suppression of crack formation in thin colloidal films" }
null
null
null
null
true
null
9647
null
Default
null
null
null
{ "abstract": " An Autonomous Underwater Vehicle (AUV) should carry out complex tasks in a\nlimited time interval. Since existing AUVs have limited battery capacity and\nrestricted endurance, they should autonomously manage mission time and the\nresources to perform effective persistent deployment in longer missions. Task\nassignment requires making decisions subject to resource constraints, while\ntasks are assigned with costs and/or values that are budgeted in advance. Tasks\nare distributed in a particular operation zone and mapped by a waypoint-covered\nnetwork. Thus, designing an efficient routing and task-priority assignment\nframework that considers the vehicle's availability and properties is essential\nfor increasing mission productivity and on-time mission completion. This\ndepends strongly on the order and priority of the tasks that are located\nbetween node-like waypoints in an operation network. On the other hand,\nautonomous operation of AUVs in an unfamiliar, dynamic underwater environment,\nwith quick responses to sudden environmental changes, is a complicated process.\nWater current instabilities can deflect the vehicle in an undesired direction\nand compromise the AUV's safety. The vehicle's robustness to strong\nenvironmental variations is extremely crucial for its safe and optimum\noperations in an uncertain and dynamic environment. To this end, the AUV needs\nto have a general overview of the environment at the top level to perform an\nautonomous action selection (task selection) and a lower-level local motion\nplanner to operate successfully in dealing with continuously changing\nsituations. This research deals with developing a novel reactive control\narchitecture to provide a higher level of decision autonomy for the AUV\noperation that enables a single vehicle to accomplish multiple tasks in a\nsingle mission in the face of periodic disturbances in a turbulent and highly\nuncertain environment.\n", "title": "Autonomous Reactive Mission Scheduling and Task-Path Planning Architecture for Autonomous Underwater Vehicle" }
null
null
null
null
true
null
9648
null
Default
null
null
null
{ "abstract": " Random attacks that jointly minimize the amount of information acquired by\nthe operator about the state of the grid and the probability of attack\ndetection are presented. The attacks minimize the information acquired by the\noperator by minimizing the mutual information between the observations and the\nstate variables describing the grid. Simultaneously, the attacker aims to\nminimize the probability of attack detection by minimizing the Kullback-Leibler\n(KL) divergence between the distribution when the attack is present and the\ndistribution under normal operation. The resulting cost function is the\nweighted sum of the mutual information and the KL divergence mentioned above.\nThe tradeoff between the probability of attack detection and the reduction of\nmutual information is governed by the weighting parameter on the KL divergence\nterm in the cost function. The probability of attack detection is evaluated as\na function of the weighting parameter. A sufficient condition on the weighting\nparameter is given for achieving an arbitrarily small probability of attack\ndetection. The attack performance is numerically assessed on the IEEE 30-Bus\nand 118-Bus test systems.\n", "title": "Stealth Attacks on the Smart Grid" }
null
null
null
null
true
null
9649
null
Default
null
null
null
{ "abstract": " In this paper we present an alternative strategy for fine-tuning the\nparameters of a network. We named the technique Gradual Tuning. Once trained on\na first task, the network is fine-tuned on a second task by modifying a\nprogressively larger set of the network's parameters. We test Gradual Tuning on\ndifferent transfer learning tasks, using networks of different sizes trained\nwith different regularization techniques. The results show that, compared to\nusual fine-tuning, our approach significantly reduces catastrophic forgetting\nof the initial task, while still retaining comparable if not better performance\non the new task.\n", "title": "Gradual Tuning: a better way of Fine Tuning the parameters of a Deep Neural Network" }
null
null
null
null
true
null
9650
null
Default
null
null
null
{ "abstract": " This work presents a joint and self-consistent Bayesian treatment of various\nforeground and target contaminations when inferring cosmological power-spectra\nand three dimensional density fields from galaxy redshift surveys. This is\nachieved by introducing additional block sampling procedures for unknown\ncoefficients of foreground and target contamination templates to the previously\npresented ARES framework for Bayesian large scale structure analyses. As a\nresult, the method infers jointly and fully self-consistently three dimensional\ndensity fields, cosmological power-spectra, luminosity dependent galaxy biases,\nnoise levels of respective galaxy distributions and coefficients for a set of a\npriori specified foreground templates. In addition, this fully Bayesian approach\npermits detailed quantification of correlated uncertainties amongst all\ninferred quantities and correctly marginalizes over observational systematic\neffects. We demonstrate the validity and efficiency of our approach in\nobtaining unbiased estimates of power-spectra via applications to realistic\nmock galaxy observations subject to stellar contamination and dust extinction.\nWhile simultaneously accounting for galaxy biases and unknown noise levels, our\nmethod reliably and robustly infers three dimensional density fields and\ncorresponding cosmological power-spectra from deep galaxy surveys. Further, our\napproach correctly accounts for joint and correlated uncertainties between\nunknown coefficients of foreground templates and the amplitudes of the\npower-spectrum, an effect amounting to up to $10$ percent correlations and\nanti-correlations across large ranges in Fourier space.\n", "title": "Bayesian power-spectrum inference with foreground and target contamination treatment" }
null
null
null
null
true
null
9651
null
Default
null
null
null
{ "abstract": " In this paper, we give some low-dimensional examples of local cocycle 3-Lie\nbialgebras and double construction 3-Lie bialgebras which were introduced in\nthe study of the classical Yang-Baxter equation and Manin triples for 3-Lie\nalgebras. We give an explicit and practical formula to compute the\nskew-symmetric solutions of the 3-Lie classical Yang-Baxter equation (CYBE). As\nan illustration, we obtain all skew-symmetric solutions of the 3-Lie CYBE in\ncomplex 3-Lie algebras of dimension 3 and 4 and then the induced local cocycle\n3-Lie bialgebras. On the other hand, we classify the double construction 3-Lie\nbialgebras for complex 3-Lie algebras in dimensions 3 and 4 and then give the\ncorresponding 8-dimensional pseudo-metric 3-Lie algebras.\n", "title": "3-Lie bialgebras and 3-Lie classical Yang-Baxter equations in low dimensions" }
null
null
null
null
true
null
9652
null
Default
null
null
null
{ "abstract": " High-resolution wide field-of-view (FOV) microscopic imaging plays an\nessential role in various fields of biomedicine, engineering, and physical\nsciences. As an alternative to conventional lens-based scanning techniques,\nlensfree holography provides a new way to effectively bypass the intrinsic\ntrade-off between the spatial resolution and FOV of conventional microscopes.\nUnfortunately, due to the limited sensor pixel-size, unpredictable disturbance\nduring image acquisition, and sub-optimum solution to the phase retrieval\nproblem, typical lensfree microscopes only produce compromised imaging quality\nin terms of lateral resolution and signal-to-noise ratio (SNR). Here, we\npropose an adaptive pixel-super-resolved lensfree imaging (APLI) method which\ncan solve, or at least partially alleviate, these limitations. Our approach\naddresses the pixel aliasing problem by Z-scanning only, without resorting to\nsubpixel shifting or beam-angle manipulation. An automatic positional error\ncorrection algorithm and an adaptive relaxation strategy are introduced to\nenhance the robustness and SNR of reconstruction significantly. Based on APLI,\nwe perform full-FOV reconstruction of a USAF resolution target ($\\sim$29.85\n$mm^2$) and achieve a half-pitch lateral resolution of 770 $nm$, surpassing the\ntheoretical Nyquist-Shannon sampling resolution limit imposed by the sensor\npixel-size (1.67 $\\mu m$) by a factor of 2.17. A full-FOV imaging result of a\ntypical dicot root is also provided to demonstrate its promising potential\napplications in biological imaging.\n", "title": "Adaptive pixel-super-resolved lensfree holography for wide-field on-chip microscopy" }
null
null
null
null
true
null
9653
null
Default
null
null
null
{ "abstract": " We address personalization issues of image captioning, which have not been\ndiscussed yet in previous research. For a query image, we aim to generate a\ndescriptive sentence, accounting for prior knowledge such as the user's active\nvocabularies in previous documents. As applications of personalized image\ncaptioning, we tackle two post automation tasks: hashtag prediction and post\ngeneration, on our newly collected Instagram dataset, consisting of 1.1M posts\nfrom 6.3K users. We propose a novel captioning model named Context Sequence\nMemory Network (CSMN). Its unique updates over previous memory network models\ninclude (i) exploiting memory as a repository for multiple types of context\ninformation, (ii) appending previously generated words into memory to capture\nlong-term information without suffering from the vanishing gradient problem,\nand (iii) adopting CNN memory structure to jointly represent nearby ordered\nmemory slots for better context understanding. With quantitative evaluation and\nuser studies via Amazon Mechanical Turk, we show the effectiveness of the three\nnovel features of CSMN and its performance enhancement for personalized image\ncaptioning over state-of-the-art captioning models.\n", "title": "Attend to You: Personalized Image Captioning with Context Sequence Memory Networks" }
null
null
null
null
true
null
9654
null
Default
null
null
null
{ "abstract": " A map $f\\colon K\\to \\mathbb R^d$ of a simplicial complex is an almost\nembedding if $f(\\sigma)\\cap f(\\tau)=\\emptyset$ whenever $\\sigma,\\tau$ are\ndisjoint simplices of $K$.\nTheorem. Fix integers $d,k\\ge2$ such that $d=\\frac{3k}2+1$.\n(a) Assume that $P\\ne NP$. Then there exists a finite $k$-dimensional complex\n$K$ that does not admit an almost embedding in $\\mathbb R^d$ but for which\nthere exists an equivariant map $\\tilde K\\to S^{d-1}$.\n(b) The algorithmic problem of recognizing almost embeddability of finite\n$k$-dimensional complexes in $\\mathbb R^d$ is NP-hard.\nThe proof is based on the technique from the Matoušek-Tancer-Wagner paper\n(proving an analogous result for embeddings), and on singular versions of the\nhigher-dimensional Borromean rings lemma and a generalized van Kampen--Flores\ntheorem.\n", "title": "Hardness of almost embedding simplicial complexes in $\\mathbb R^d$" }
null
null
null
null
true
null
9655
null
Default
null
null
null
{ "abstract": " Optimization of energy cost determines average values of spatio-temporal gait\nparameters such as step duration, step length or step speed. However, during\nwalking, humans need to adapt these parameters at every step to respond to\nexogenous and/or endogenous perturbations. While some neurological mechanisms\nthat trigger these responses are known, our understanding of the fundamental\nprinciples governing step-by-step adaptation remains elusive. We determined the\ngait parameters of 20 healthy subjects with right-foot preference during\ntreadmill walking at speeds of 1.1, 1.4 and 1.7 m/s. We found that when the\nvalue of the gait parameter was conspicuously greater (smaller) than the mean\nvalue, it was either followed immediately by a smaller (greater) value of the\ncontralateral leg (interleg control), or the deviation from the mean value\ndecreased during the next movement of the ipsilateral leg (intraleg control).\nThe selection of step duration and the selection of step length during such\ntransient control events were performed in unique ways. We quantified the\nsymmetry of short-term control of gait parameters and observed the significant\ndominance of the right leg in short-term control of all three parameters at\nhigher speeds (1.4 and 1.7 m/s).\n", "title": "Asymmetry of short-term control of spatio-temporal gait parameters during treadmill walking" }
null
null
[ "Physics" ]
null
true
null
9656
null
Validated
null
null
null
{ "abstract": " In the Any-Angle Pathfinding problem, the goal is to find the shortest path\nbetween a pair of vertices on a uniform square grid, that is not constrained to\nany fixed number of possible directions over the grid. Visibility Graphs are a\nknown optimal algorithm for solving the problem with the use of pre-processing.\nHowever, Visibility Graphs are known to perform poorly in terms of running\ntime, especially on large, complex maps. In this paper, we introduce two\nimprovements over the Visibility Graph Algorithm to compute optimal paths.\nSparse Visibility Graphs (SVGs) are constructed by pruning unnecessary edges\nfrom the original Visibility Graph. Edge N-Level Sparse Visibility Graphs\n(ENLSVGs) is a hierarchical SVG built by iteratively pruning non-taut paths. We\nalso introduce Line-of-Sight Scans, a faster algorithm for building Visibility\nGraphs over a grid. SVGs run much faster than Visibility Graphs by reducing the\naverage vertex degree. ENLSVGs, a hierarchical algorithm, improves this\nfurther, especially on larger maps. On large maps, with the use of\npre-processing, these algorithms are orders of magnitude faster than existing\nalgorithms like Visibility Graphs and Theta*.\n", "title": "Edge N-Level Sparse Visibility Graphs: Fast Optimal Any-Angle Pathfinding Using Hierarchical Taut Paths" }
null
null
null
null
true
null
9657
null
Default
null
null
null
{ "abstract": " Scenario generation is an important step in the operation and planning of\npower systems with high renewable penetrations. In this work, we proposed a\ndata-driven approach for scenario generation using generative adversarial\nnetworks, which is based on two interconnected deep neural networks. Compared\nwith existing methods based on probabilistic models that are often hard to\nscale or sample from, our method is data-driven, and captures renewable energy\nproduction patterns in both temporal and spatial dimensions for a large number\nof correlated resources. For validation, we use wind and solar time-series\ndata from NREL integration data sets. We demonstrate that the proposed method\nis able to generate realistic wind and photovoltaic power profiles with full\ndiversity of behaviors. We also illustrate how to generate scenarios based on\ndifferent conditions of interest by using labeled data during training. For\nexample, scenarios can be conditioned on weather events (e.g., a high wind day)\nor time of the year (e.g., solar generation for a day in July). Because of the\nfeedforward nature of the neural networks, scenarios can be generated extremely\nefficiently without sophisticated sampling techniques.\n", "title": "Model-Free Renewable Scenario Generation Using Generative Adversarial Networks" }
null
null
null
null
true
null
9658
null
Default
null
null
null
{ "abstract": " We give a new proof of Ciocan-Fontanine and Kim's wall-crossing formula\nrelating the virtual classes of the moduli spaces of $\\epsilon$-stable\nquasimaps for different $\\epsilon$ in any genus, whenever the target is a\ncomplete intersection in projective space and there is at least one marked\npoint.\nOur techniques involve a twisted graph space, which we expect to generalize\nto yield wall-crossing formulas for general gauged linear sigma models.\n", "title": "Higher-genus quasimap wall-crossing via localization" }
null
null
null
null
true
null
9659
null
Default
null
null
null
{ "abstract": " When undertaking cyber security risk assessments, we must assign numeric\nvalues to metrics to compute the final expected loss that represents the risk\nthat an organization is exposed to due to cyber threats. Even if risk\nassessment is motivated from real-world observations and data, there is always\na high chance of assigning inaccurate values due to different uncertainties\ninvolved (e.g., evolving threat landscape, human errors) and the natural\ndifficulty of quantifying risk per se. Our previous work has proposed a model\nand a software tool that empowers organizations to compute optimal cyber\nsecurity strategies given their financial constraints, i.e., available cyber\nsecurity budget. We have also introduced a general game-theoretic model with\nuncertain payoffs (probability-distribution-valued payoffs) showing that such\nuncertainty can be incorporated in the game-theoretic model by allowing payoffs\nto be random. In this paper, we combine our aforesaid works and we conclude\nthat although uncertainties in cyber security risk assessment lead, on average,\nto different cyber security strategies, they do not play a significant role in\nthe final expected loss of the organization when using our model and\nmethodology to derive these strategies. We show that our tool is capable of\nproviding effective decision support. To the best of our knowledge this is the\nfirst paper that investigates how uncertainties on various parameters affect\ncyber security investments.\n", "title": "Uncertainty in Cyber Security Investments" }
null
null
[ "Computer Science" ]
null
true
null
9660
null
Validated
null
null
null
{ "abstract": " This paper is a continuation of [arXiv:1603.02204]. Exploded layered tropical\n(ELT) algebra is an extension of tropical algebra with a structure of layers.\nThese layers allow us to use classical algebraic results in order to easily\nprove analogous tropical results. Specifically we prove and use an ELT version\nof the transfer principle presented in [2]. In this paper we use the transfer\nprinciple to prove an ELT version of the Cayley-Hamilton Theorem, and study the\nmultiplicity of the ELT determinant, ELT adjoint matrices and quasi-invertible\nmatrices. We also define a new notion of trace -- the essential trace -- and\nstudy its properties.\n", "title": "ELT Linear Algebra II" }
null
null
null
null
true
null
9661
null
Default
null
null
null
{ "abstract": " Using image context is an effective approach for improving object detection.\nPreviously proposed methods used contextual cues that rely on semantic or\nspatial information. In this work, we explore a different kind of contextual\ninformation: inner-scene similarity. We present the CISS (Context by Inner\nScene Similarity) algorithm, which is based on the observation that two\nvisually similar sub-image patches are likely to share semantic identities,\nespecially when both appear in the same image. CISS uses base-scores provided\nby a base detector and performs as a post-detection stage. For each candidate\nsub-image (denoted anchor), the CISS algorithm finds a few similar sub-images\n(denoted supporters), and, using them, calculates a new enhanced score for the\nanchor. This is done by utilizing the base-scores of the supporters and a\npre-trained dependency model. The new scores are modeled as a linear function\nof the base scores of the anchor and the supporters and are estimated using a\nminimum mean square error optimization. This approach results in: (a) improved\ndetection of partly occluded objects (when there are similar non-occluded\nobjects in the scene), and (b) fewer false alarms (when the base detector\nmistakenly classifies a background patch as an object). This work relates to\nDuncan and Humphreys' "similarity theory," a psychophysical study, which\nsuggested that the human visual system perceptually groups similar image\nregions and that the classification of one region is affected by the estimated\nidentity of the other. Experimental results demonstrate the enhancement of a\nbase detector's scores on the PASCAL VOC dataset.\n", "title": "Inner-Scene Similarities as a Contextual Cue for Object Detection" }
null
null
null
null
true
null
9662
null
Default
null
null
null
{ "abstract": " The spectral renormalization method was introduced by Ablowitz and Musslimani\nin 2005, [Opt. Lett. 30, pp. 2140-2142] as an effective way to numerically\ncompute (time-independent) bound states for certain nonlinear boundary value\nproblems of the nonlinear Schrödinger (NLS), Gross-Pitaevskii and water\nwave type equations, to mention a few. In this paper, we extend those ideas to\nthe time domain and introduce a time-dependent spectral renormalization method\nas a numerical means to simulate linear and nonlinear evolution equations. The\nessence of the method is to convert the underlying evolution equation from its\npartial or ordinary differential form (using Duhamel's principle) into an\nintegral equation. The solution sought is then viewed as a fixed point in both\nspace and time. The resulting integral equation is then numerically solved\nusing a simple renormalized fixed-point iteration method. Convergence is\nachieved by introducing a time-dependent renormalization factor which is\nnumerically computed from the physical properties of the governing evolution\nequation. The proposed method has the ability to incorporate physics into the\nsimulations in the form of conservation laws or dissipation rates. This novel\nscheme is implemented on benchmark evolution equations: the classical nonlinear\nSchrödinger (NLS), integrable $PT$ symmetric nonlocal NLS and the viscous\nBurgers' equations, each being a prototypical example of a conservative and a\ndissipative dynamical system. Numerical implementation and algorithm\nperformance are also discussed.\n", "title": "Time-dependent spectral renormalization method" }
null
null
null
null
true
null
9663
null
Default
null
null
null
{ "abstract": " We present a novel approach for robust manipulation of high-DOF deformable\nobjects such as cloth. Our approach uses a random forest-based controller that\nmaps the observed visual features of the cloth to an optimal control action of\nthe manipulator. The topological structure of this random forest-based\ncontroller is determined automatically based on the training data, consisting\nof visual features and optimal control actions. This enables us to integrate\nthe overall process of training data classification and controller\noptimization into an imitation learning (IL) approach. Our approach enables\nlearning of a robust control policy for cloth manipulation with guarantees on\nconvergence. We have evaluated our approach on different multi-task cloth\nmanipulation benchmarks such as flattening, folding and twisting. In practice,\nour approach works well with different deformable features learned based on the\nspecific task or deep learning. Moreover, our controller outperforms a simple\nor piecewise linear controller in terms of robustness to noise. In addition,\nour approach is easy to implement and does not require much parameter tuning.\n", "title": "Cloth Manipulation Using Random-Forest-Based Imitation Learning" }
null
null
null
null
true
null
9664
null
Default
null
null
null
{ "abstract": " We discuss a Bayesian formulation to coarse-graining (CG) of PDEs where the\ncoefficients (e.g. material parameters) exhibit random, fine scale variability.\nThe direct solution to such problems requires grids that are small enough to\nresolve this fine scale variability which unavoidably requires the repeated\nsolution of very large systems of algebraic equations. We establish a\nphysically inspired, data-driven coarse-grained model which learns a\nlow-dimensional set of microstructural features that are predictive of the\nfine-grained model (FG) response. Once learned, those features provide a sharp\ndistribution over the coarse scale effective coefficients of the PDE that are\nmost suitable for prediction of the fine scale model output. This ultimately\nallows one to replace the computationally expensive FG by a generative\nprobabilistic model based on evaluating the much cheaper CG several times.\nSparsity enforcing priors further increase predictive efficiency and reveal\nmicrostructural features that are important in predicting the FG response.\nMoreover, the model yields probabilistic rather than single-point predictions,\nwhich enables the quantification of the unavoidable epistemic uncertainty that\nis present due to the information loss that occurs during the coarse-graining\nprocess.\n", "title": "Probabilistic Reduced-Order Modeling for Stochastic Partial Differential Equations" }
null
null
null
null
true
null
9665
null
Default
null
null
null
{ "abstract": " Accurate protein structural ensembles can be determined with metainference, a\nBayesian inference method that integrates experimental information with prior\nknowledge of the system and deals with all sources of uncertainty and errors as\nwell as with system heterogeneity. Furthermore, metainference can be\nimplemented using the metadynamics approach, which enables the computational\nstudy of complex biological systems requiring extensive conformational\nsampling. In this chapter, we provide a step-by-step guide to perform and\nanalyse metadynamic metainference simulations using the ISDB module of the\nopen-source PLUMED library, as well as a series of practical tips to avoid\ncommon mistakes. Specifically, we will guide the reader in the process of\nlearning how to model the structural ensemble of a small disordered peptide by\ncombining state-of-the-art molecular mechanics force fields with nuclear\nmagnetic resonance data, including chemical shifts, scalar couplings and\nresidual dipolar couplings.\n", "title": "A practical guide to the simultaneous determination of protein structure and dynamics using metainference" }
null
null
null
null
true
null
9666
null
Default
null
null
null
{ "abstract": " The optimal learner for prediction modeling varies depending on the\nunderlying data-generating distribution. Super Learner (SL) is a generic\nensemble learning algorithm that uses cross-validation to select among a\n\"library\" of candidate prediction models. The SL is not restricted to a single\nprediction model, but uses the strengths of a variety of learning algorithms to\nadapt to different databases. While the SL has been shown to perform well in a\nnumber of settings, it has not been thoroughly evaluated in large electronic\nhealthcare databases that are common in pharmacoepidemiology and comparative\neffectiveness research. In this study, we applied and evaluated the performance\nof the SL in its ability to predict treatment assignment using three electronic\nhealthcare databases. We considered a library of algorithms that consisted of\nboth nonparametric and parametric models. We also considered a novel strategy\nfor prediction modeling that combines the SL with the high-dimensional\npropensity score (hdPS) variable selection algorithm. Predictive performance\nwas assessed using three metrics: the negative log-likelihood, area under the\ncurve (AUC), and time complexity. Results showed that the best individual\nalgorithm, in terms of predictive performance, varied across datasets. The SL\nwas able to adapt to the given dataset and optimize predictive performance\nrelative to any individual learner. Combining the SL with the hdPS was the most\nconsistent prediction method and may be promising for PS estimation and\nprediction modeling in electronic healthcare databases.\n", "title": "Propensity score prediction for electronic healthcare databases using Super Learner and High-dimensional Propensity Score Methods" }
null
null
null
null
true
null
9667
null
Default
null
null
null
{ "abstract": " Context. Transit events of extrasolar planets offer the opportunity to study\nthe composition of their atmospheres. Previous work on transmission\nspectroscopy of the close-in gas giant TrES-3 b revealed an increase in\nabsorption towards blue wavelengths of very large amplitude in terms of\natmospheric pressure scale heights, too large to be explained by\nRayleigh-scattering in the planetary atmosphere. Aims. We present a follow-up\nstudy of the optical transmission spectrum of the hot Jupiter TrES-3 b to\ninvestigate the strong increase in opacity towards short wavelengths found by a\nprevious study. Furthermore, we aim to estimate the effect of stellar spots on\nthe transmission spectrum. Methods. This work uses previously published long\nslit spectroscopy transit data of the Gran Telescopio Canarias (GTC) and\npublished broad band observations as well as new observations in different\nbands from the near-UV to the near-IR, for a homogeneous transit light curve\nanalysis. Additionally, a long-term photometric monitoring of the TrES-3 host\nstar was performed. Results. Our newly analysed GTC spectroscopic transit\nobservations show a slope of much lower amplitude than previous studies. We\nconclude from our results that the previously reported increasing signal\ntowards short wavelengths is not intrinsic to the TrES-3 system. Furthermore,\nthe broad band spectrum favours a flat spectrum. Long-term photometric\nmonitoring rules out a significant modification of the transmission spectrum by\nunocculted star spots.\n", "title": "Transmission spectroscopy of the hot Jupiter TrES-3 b: Disproof of an overly large Rayleigh-like feature" }
null
null
null
null
true
null
9668
null
Default
null
null
null
{ "abstract": " Markov decision processes (MDPs) are a popular model for performance analysis\nand optimization of stochastic systems. The parameters of stochastic behavior\nof MDPs are estimates from empirical observations of a system; their values are\nnot known precisely. Different types of MDPs with uncertain, imprecise or\nbounded transition rates or probabilities and rewards exist in the literature.\nCommonly, analysis of models with uncertainties amounts to searching for the\nmost robust policy which means that the goal is to generate a policy with the\ngreatest lower bound on performance (or, symmetrically, the lowest upper bound\non costs). However, hedging against an unlikely worst case may lead to losses\nin other situations. In general, one is interested in policies that behave well\nin all situations which results in a multi-objective view on decision making.\nIn this paper, we consider policies for the expected discounted reward\nmeasure of MDPs with uncertain parameters. In particular, the approach is\ndefined for bounded-parameter MDPs (BMDPs) [8]. In this setting the worst, best\nand average case performances of a policy are analyzed simultaneously, which\nyields a multi-scenario multi-objective optimization problem. The paper\npresents and evaluates approaches to compute the pure Pareto optimal policies\nin the value vector space.\n", "title": "Multi-Objective Approaches to Markov Decision Processes with Uncertain Transition Parameters" }
null
null
null
null
true
null
9669
null
Default
null
null
null
{ "abstract": " In this paper, we give novel certificates for triangular equivalence and rank\nprofiles. These certificates make it possible to verify the row or column rank\nprofiles or the whole rank profile matrix faster than recomputing them, with a\nnegligible overall overhead. We first provide quadratic time and space\nnon-interactive certificates saving the logarithmic factors of previously known\nones. Then we propose interactive certificates for the same problems whose\nMonte Carlo verification complexity requires a small constant number of\nmatrix-vector multiplications, a linear space, and a linear number of extra\nfield operations. As an application we also give an interactive protocol,\ncertifying the determinant of dense matrices, faster than the best previously\nknown one.\n", "title": "Certificates for triangular equivalence and rank profiles" }
null
null
null
null
true
null
9670
null
Default
null
null
null
{ "abstract": " TF Boosted Trees (TFBT) is a new open-source framework for the distributed\ntraining of gradient boosted trees. It is based on TensorFlow, and its\ndistinguishing features include a novel architecture, automatic loss\ndifferentiation, layer-by-layer boosting that results in smaller ensembles and\nfaster prediction, principled multi-class handling, and a number of\nregularization techniques to prevent overfitting.\n", "title": "TF Boosted Trees: A scalable TensorFlow based framework for gradient boosting" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
9671
null
Validated
null
null
null
{ "abstract": " The evaluation of a query over a probabilistic database boils down to\ncomputing the probability of a suitable Boolean function, the lineage of the\nquery over the database. The method of query compilation approaches the task in\ntwo stages: first, the query lineage is implemented (compiled) in a circuit\nform where probability computation is tractable; and second, the desired\nprobability is computed over the compiled circuit. A basic theoretical quest in\nquery compilation is that of identifying pertinent classes of queries whose\nlineages admit compact representations over increasingly succinct, tractable\ncircuit classes. Building on previous work by Jha and Suciu (2012) and Petke\nand Razgon (2013), we focus on queries whose lineages admit circuit\nimplementations with small treewidth, and investigate their compilability\nwithin tame classes of decision diagrams. In perfect analogy with the\ncharacterization of bounded circuit pathwidth by bounded OBDD width, we show\nthat a class of Boolean functions has bounded circuit treewidth if and only if\nit has bounded SDD width. Sentential decision diagrams (SDDs) are central in\nknowledge compilation, being essentially as tractable as OBDDs but\nexponentially more succinct. By incorporating constant width SDDs and\npolynomial size SDDs, we refine the panorama of query compilation for unions of\nconjunctive queries with and without inequalities.\n", "title": "Circuit Treewidth, Sentential Decision, and Query Compilation" }
null
null
null
null
true
null
9672
null
Default
null
null
null
{ "abstract": " This paper considers the problem of inliers and empty cells and the resulting\nissue of relative inefficiency in estimation under pure samples from a discrete\npopulation when the sample size is small. Many minimum divergence estimators in\nthe $S$-divergence family, although possessing very strong outlier stability\nproperties, often have very poor small sample efficiency in the presence of\ninliers and some are not even defined in the presence of a single empty cell;\nthis limits the practical applicability of these estimators, in spite of their\notherwise sound robustness properties and high asymptotic efficiency. Here, we\nwill study a penalized version of the $S$-divergences such that the resulting\nminimum divergence estimators are free from these issues without altering their\nrobustness properties and asymptotic efficiencies. We will give a general proof\nfor the asymptotic properties of these minimum penalized $S$-divergence\nestimators. This provides a significant addition to the literature as the\nasymptotics of penalized divergences which are not finitely defined are\ncurrently unavailable in the literature. The small sample advantages of the\nminimum penalized $S$-divergence estimators are examined through an extensive\nsimulation study and some empirical suggestions regarding the choice of the\nrelevant underlying tuning parameters are also provided.\n", "title": "Improvements in the Small Sample Efficiency of the Minimum $S$-Divergence Estimators under Discrete Models" }
null
null
null
null
true
null
9673
null
Default
null
null
null
{ "abstract": " We determine all connected homogeneous Kobayashi-hyperbolic manifolds of\ndimension $n\\ge 2$ whose holomorphic automorphism group has dimension $n^2-3$.\nThis result complements existing classifications for automorphism group\ndimension $n^2-2$ (which is in some sense critical) and greater.\n", "title": "Homogeneous Kobayashi-hyperbolic manifolds with automorphism group of subcritical dimension" }
null
null
null
null
true
null
9674
null
Default
null
null
null
{ "abstract": " One important problem in a network is to locate an (invisible) moving entity\nby using distance-detectors placed at strategic locations. For instance, the\nmetric dimension of a graph $G$ is the minimum number $k$ of detectors placed\nin some vertices $\\{v_1,\\cdots,v_k\\}$ such that the vector $(d_1,\\cdots,d_k)$\nof the distances $d(v_i,r)$ between the detectors and the entity's location $r$\nmakes it possible to uniquely determine $r \\in V(G)$. In a more realistic\nsetting, instead of getting the exact distance information, given devices\nplaced in $\\{v_1,\\cdots,v_k\\}$, we get only relative distances between the\nentity's location $r$ and the devices (for every $1\\leq i,j\\leq k$, it is\nprovided whether $d(v_i,r) >$, $<$, or $=$ to $d(v_j,r)$). The centroidal\ndimension of a graph $G$ is the minimum number of devices required to locate\nthe entity in this setting.\nWe consider the natural generalization of the latter problem, where vertices\nmay be probed sequentially until the moving entity is located. At every turn, a\nset $\\{v_1,\\cdots,v_k\\}$ of vertices is probed and then the relative distances\nbetween the vertices $v_i$ and the current location $r$ of the entity are\ngiven. If not located, the moving entity may move along one edge. Let $\\zeta^*\n(G)$ be the minimum $k$ such that the entity is eventually located, whatever it\ndoes, in the graph $G$.\nWe prove that $\\zeta^* (T)\\leq 2$ for every tree $T$ and give an upper bound\non $\\zeta^*(G\\square H)$ in the Cartesian product of graphs $G$ and $H$. Our\nmain result is that $\\zeta^* (G)\\leq 3$ for any outerplanar graph $G$. We then\nprove that $\\zeta^* (G)$ is bounded by the pathwidth of $G$ plus 1 and that the\noptimization problem of determining $\\zeta^* (G)$ is NP-hard in general graphs.\nFinally, we show that approximating (up to any constant distance) the entity's\nlocation in the Euclidean plane requires at most two vertices per turn.\n", "title": "Centroidal localization game" }
null
null
null
null
true
null
9675
null
Default
null
null
null
{ "abstract": " We show that even mild improvements of the Polya-Vinogradov inequality would\nimply significant improvements of Burgess' bound on character sums. Our main\ningredients are a lower bound on certain types of character sums (coming from\nworks of the second author joint with J. Bober and Y. Lamzouri) and a\nquantitative relationship between the mean and the logarithmic mean of a\ncompletely multiplicative function.\n", "title": "Improving the Burgess bound via Polya-Vinogradov" }
null
null
[ "Mathematics" ]
null
true
null
9676
null
Validated
null
null
null
{ "abstract": " The phenomenon of polarization of nuclei in the process of stimulated\nrecombination of atoms in the field of circularly polarized laser radiation is\nconsidered. This effect is considered for the case of the proton-electron beams\nused in the method of electron cooling. An estimate is obtained for the maximum\ndegree of polarization of the protons on components of the hyperfine structure\nof the 2s state of the hydrogen atom.\n", "title": "Processes accompanying stimulated recombination of atoms" }
null
null
null
null
true
null
9677
null
Default
null
null
null
{ "abstract": " The existence of string functions, which are not polynomial time computable,\nbut whose graph is checkable in polynomial time, is a basic assumption in\ncryptography. We prove that in the framework of algebraic complexity, there are\nno such families of polynomial functions of polynomially bounded degree over\nfields of characteristic zero. The proof relies on a polynomial upper bound on\nthe approximative complexity of a factor g of a polynomial f in terms of the\n(approximative) complexity of f and the degree of the factor g. This extends a\nresult by Kaltofen (STOC 1986). The concept of approximative complexity allows\nto cope with the case that a factor has an exponential multiplicity, by using a\nperturbation argument. Our result extends to randomized (two-sided error)\ndecision complexity.\n", "title": "The Complexity of Factors of Multivariate Polynomials" }
null
null
null
null
true
null
9678
null
Default
null
null
null
{ "abstract": " In this paper, we outline the vision of chatbots that facilitate the\ninteraction between citizens and policy-makers at the city scale. We report the\nresults of a co-design session attended by more than 60 participants. We give\nan outlook of how some challenges associated with such chatbot systems could be\naddressed in the future.\n", "title": "Chatbots as Conversational Recommender Systems in Urban Contexts" }
null
null
null
null
true
null
9679
null
Default
null
null
null
{ "abstract": " We define the distance between edges of graphs and study the coarse Ricci\ncurvature on edges. We consider the Laplacian on edges based on the\nJost-Horak's definition of the Laplacian on simplicial complexes. As one of our\nmain results, we obtain an estimate of the first non-zero eigenvalue of the\nLaplacian by the Ricci curvature for a regular graph.\n", "title": "An estimate of the first non-zero eigenvalue of the Laplacian by the Ricci curvature on edges of graphs" }
null
null
null
null
true
null
9680
null
Default
null
null
null
{ "abstract": " We present MILABOT: a deep reinforcement learning chatbot developed by the\nMontreal Institute for Learning Algorithms (MILA) for the Amazon Alexa Prize\ncompetition. MILABOT is capable of conversing with humans on popular small talk\ntopics through both speech and text. The system consists of an ensemble of\nnatural language generation and retrieval models, including template-based\nmodels, bag-of-words models, sequence-to-sequence neural network and latent\nvariable neural network models. By applying reinforcement learning to\ncrowdsourced data and real-world user interactions, the system has been trained\nto select an appropriate response from the models in its ensemble. The system\nhas been evaluated through A/B testing with real-world users, where it\nperformed significantly better than many competing systems. Due to its machine\nlearning architecture, the system is likely to improve with additional data.\n", "title": "A Deep Reinforcement Learning Chatbot" }
null
null
null
null
true
null
9681
null
Default
null
null
null
{ "abstract": " We propose a data-driven method to solve a stochastic optimal power flow\n(OPF) problem based on limited information about forecast error distributions.\nThe objective is to determine power schedules for controllable devices in a\npower network to balance operation cost and conditional value-at-risk (CVaR) of\ndevice and network constraint violations. These decisions include scheduled\npower output adjustments and reserve policies, which specify planned reactions\nto forecast errors in order to accommodate fluctuating renewable energy\nsources. Instead of assuming the uncertainties across the networks follow\nprescribed probability distributions, we assume the distributions are only\nobservable through a finite training dataset. By utilizing the Wasserstein\nmetric to quantify differences between the empirical data-based distribution\nand the real data-generating distribution, we formulate a distributionally\nrobust optimization OPF problem to search for power schedules and reserve\npolicies that are robust to sampling errors inherent in the dataset. A simple\nnumerical example illustrates inherent tradeoffs between operation cost and\nrisk of constraint violation, and we show how our proposed method offers a\ndata-driven framework to balance these objectives.\n", "title": "Stochastic Optimal Power Flow Based on Data-Driven Distributionally Robust Optimization" }
null
null
null
null
true
null
9682
null
Default
null
null
null
{ "abstract": " Real-time crime forecasting is important. However, accurate prediction of\nwhen and where the next crime will happen is difficult. No known physical model\nprovides a reasonable approximation to such a complex system. Historical crime\ndata are sparse in both space and time and the signal of interests is weak. In\nthis work, we first present a proper representation of crime data. We then\nadapt the spatial temporal residual network on the well represented data to\npredict the distribution of crime in Los Angeles at the scale of hours in\nneighborhood-sized parcels. These experiments as well as comparisons with\nseveral existing approaches to prediction demonstrate the superiority of the\nproposed model in terms of accuracy. Finally, we present a ternarization\ntechnique to address the resource consumption issue for its deployment in real\nworld. This work is an extension of our short conference proceeding paper [Wang\net al, Arxiv 1707.03340].\n", "title": "Deep Learning for Real-Time Crime Forecasting and its Ternarization" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
9683
null
Validated
null
null
null
{ "abstract": " We investigated the physical properties of the comet-like objects 107P/(4015)\nWilson--Harrington (4015WH) and P/2006 HR30 (Siding Spring; HR30) by applying a\nsimple thermophysical model (TPM) to the near-infrared spectroscopy and\nbroadband observation data obtained by AKARI satellite of JAXA when they showed\nno detectable comet-like activity. We selected these two targets since the\ntendency of thermal inertia to decrease with the size of an asteroid, which has\nbeen demonstrated in recent studies, has not been confirmed for comet-like\nobjects. It was found that 4015WH, which was originally discovered as a comet\nbut has not shown comet-like activity since its discovery, has effective size $\nD= $ 3.74--4.39 km and geometric albedo $ p_V \\approx $ 0.040--0.055 with\nthermal inertia $ \\Gamma = $ 100--250 J m$ ^{-2} $ K$ ^{-1} $ s$ ^{-1/2}$. The\ncorresponding grain size is estimated to 1--3 mm. We also found that HR30,\nwhich was observed as a bare cometary nucleus at the time of our observation,\nhave $ D= $ 23.9--27.1 km and $ p_V= $0.035--0.045 with $ \\Gamma= $ 250--1,000\nJ m$ ^{-2} $ K$ ^{-1} $ s$ ^{-1/2}$. We conjecture the pole latitude $ -\n20^{\\circ} \\lesssim \\beta_s \\lesssim +60^{\\circ}$. The results for both targets\nare consistent with previous studies. Based on the results, we propose that\ncomet-like objects are not clearly distinguishable from asteroidal counterpart\non the $ D $--$ \\Gamma $ plane.\n", "title": "Thermal Modeling of Comet-Like Objects from AKARI Observation" }
null
null
null
null
true
null
9684
null
Default
null
null
null
{ "abstract": " Availability of a validated, realistic fuel cost model is a prerequisite to\nthe development and validation of new optimization methods and control tools.\nThis paper uses an autoregressive integrated moving average (ARIMA) model with\nhistorical fuel cost data in development of a three-step-ahead fuel cost\ndistribution prediction. First, the data features of Form EIA-923 are explored\nand the natural gas fuel costs of Texas generating facilities are used to\ndevelop and validate the forecasting algorithm for the Texas example.\nFurthermore, the spot price associated with the natural gas hub in Texas is\nutilized to enhance the fuel cost prediction. The forecasted data is fit to a\nnormal distribution and the Kullback-Leibler divergence is employed to evaluate\nthe difference between the real fuel cost distributions and the estimated\ndistributions. The comparative evaluation suggests the proposed forecasting\nalgorithm is effective in general and is worth pursuing further.\n", "title": "Improvement to the Prediction of Fuel Cost Distributions Using ARIMA Model" }
null
null
null
null
true
null
9685
null
Default
null
null
null
{ "abstract": " In this work, we show that the model of timed discrete-event systems (TDES)\nproposed by Brandin and Wonham is essentially a synchronous product structure.\nThis resolves an open problem that has remained unaddressed for the past 25\nyears and has its application in developing a more efficient timed state-tree\nstructures (TSTS) framework. The proof is constructive in the sense that an\nexplicit synchronous production rule is provided to generate a TDES from the\nactivity automaton and the timer automata after a suitable transformation of\nthe model.\n", "title": "Timed Discrete-Event Systems are Synchronous Product Structures" }
null
null
null
null
true
null
9686
null
Default
null
null
null
{ "abstract": " The functional significance of resting state networks and their abnormal\nmanifestations in psychiatric disorders are firmly established, as is the\nimportance of the cortical rhythms in mediating these networks. Resting state\nnetworks are known to undergo substantial reorganization from childhood to\nadulthood, but whether distinct cortical rhythms, which are generated by\nseparable neural mechanisms and are often manifested abnormally in psychiatric\nconditions, mediate maturation differentially, remains unknown. Using\nmagnetoencephalography (MEG) to map frequency band specific maturation of\nresting state networks from age 7 to 29 in 162 participants (31 independent),\nwe found significant changes with age in networks mediated by the beta\n(13-30Hz) and gamma (31-80Hz) bands. More specifically, gamma band mediated\nnetworks followed an expected asymptotic trajectory, but beta band mediated\nnetworks followed a linear trajectory. Network integration increased with age\nin gamma band mediated networks, while local segregation increased with age in\nbeta band mediated networks. Spatially, the hubs that changed in importance\nwith age in the beta band mediated networks had relatively little overlap with\nthose that showed the greatest changes in the gamma band mediated networks.\nThese findings are relevant for our understanding of the neural mechanisms of\ncortical maturation, in both typical and atypical development.\n", "title": "Maturation Trajectories of Cortical Resting-State Networks Depend on the Mediating Frequency Band" }
null
null
[ "Statistics", "Quantitative Biology" ]
null
true
null
9687
null
Validated
null
null
null
{ "abstract": " We report on the development of a versatile cryogen-free laboratory cryostat\nbased upon a commercial pulse tube cryocooler. It provides enough cooling power\nfor continuous recondensation of circulating $^4$He gas at a condensation\npressure of approximately 250~mbar. Moreover, the cryostat allows for exchange\nof different cryostat-inserts as well as fast and easy \"wet\" top-loading of\nsamples directly into the 1 K pot with a turn-over time of less than 75~min.\nStarting from room temperature and using a $^4$He cryostat-insert, a base\ntemperature of 1.0~K is reached within approximately seven hours and a cooling\npower of 250~mW is established at 1.24~K.\n", "title": "High-power closed-cycle $^4$He cryostat with top-loading sample exchange" }
null
null
null
null
true
null
9688
null
Default
null
null
null
{ "abstract": " I present a new proof of Kirchberg's $\\mathcal O_2$-stable classification\ntheorem: two separable, nuclear, stable/unital, $\\mathcal O_2$-stable\n$C^\\ast$-algebras are isomorphic if and only if their ideal lattices are order\nisomorphic, or equivalently, their primitive ideal spaces are homeomorphic.\nMany intermediate results do not depend on pure infiniteness of any sort.\n", "title": "A new proof of Kirchberg's $\\mathcal O_2$-stable classification" }
null
null
null
null
true
null
9689
null
Default
null
null
null
{ "abstract": " In this paper we analyse the profile of land use and population density with\nrespect to the distance to the city centre for the European city. In addition\nto providing the radial population density and soil-sealing profiles for a\nlarge set of cities, we demonstrate a remarkable constancy of the profiles\nacross city size.\nOur analysis combines the GMES/Copernicus Urban Atlas 2006 land use database\nat 5m resolution for 300 European cities with more than 100.000 inhabitants and\nthe Geostat population grid at 1km resolution. Population is allocated\nproportionally to surface and weighted by soil sealing and density classes of\nthe Urban Atlas. We analyse the profile of each artificial land use and\npopulation with distance to the town hall.\nIn line with earlier literature, we confirm the strong monocentricity of the\nEuropean city and the negative exponential curve for population density.\nMoreover, we find that land use curves, in particular the share of housing and\nroads, scale along the two horizontal dimensions with the square root of city\npopulation, while population curves scale in three dimensions with the cubic\nroot of city population. In short, European cities of different sizes are\nhomothetic in terms of land use and population density. While earlier\nliterature documented the scaling of average densities (total surface and\npopulation) with city size, we document the scaling of the whole radial\ndistance profile with city size, thus liaising intra-urban radial analysis and\nsystems of cities. In addition to providing a new empirical view of the\nEuropean city, our scaling offers a set of practical and coherent definitions\nof a city, independent of its population, from which we can re-question urban\nscaling laws and Zipf's law for cities.\n", "title": "Scaling evidence of the homothetic nature of cities" }
null
null
[ "Physics" ]
null
true
null
9690
null
Validated
null
null
null
{ "abstract": " Let $H$ be a semisimple algebraic group, $K$ a maximal compact subgroup of\n$G:=H(\\mathbb{R})$, and $\\Gamma\\subset H(\\mathbb{Q})$ a congruence arithmetic\nsubgroup. In this paper, we generalize existing subconvex bounds for\nHecke-Maass forms on the locally symmetric space $\\Gamma \\backslash G/K$ to\ncorresponding bounds on the arithmetic quotient $\\Gamma \\backslash G$ for\ncocompact lattices using the spectral function of an elliptic operator. The\nbounds obtained extend known subconvex bounds for automorphic forms to\nnon-trivial $K$-types, yielding subconvex bounds for new classes of automorphic\nrepresentations, and constitute subconvex bounds for eigenfunctions on compact\nmanifolds with both positive and negative sectional curvature. We also obtain\nnew subconvex bounds for holomorphic modular forms in the weight aspect.\n", "title": "Subconvex bounds for Hecke-Maass forms on compact arithmetic quotients of semisimple Lie groups" }
null
null
null
null
true
null
9691
null
Default
null
null
null
{ "abstract": " We present results of empirical studies on positive speech on Twitter. By\npositive speech we understand speech that works for the betterment of a given\nsituation, in this case relations between different communities in a\nconflict-prone country. We worked with four Twitter data sets. Through\nsemi-manual opinion mining, we found that positive speech accounted for < 1% of\nthe data . In fully automated studies, we tested two approaches: unsupervised\nstatistical analysis, and supervised text classification based on distributed\nword representation. We discuss benefits and challenges of those approaches and\nreport empirical evidence obtained in the study.\n", "title": "Studying Positive Speech on Twitter" }
null
null
null
null
true
null
9692
null
Default
null
null
null
{ "abstract": " In almost any geostatistical analysis, one of the underlying, often implicit,\nmodelling assump- tions is that the spatial locations, where measurements are\ntaken, are recorded without error. In this study we develop geostatistical\ninference when this assumption is not valid. This is often the case when, for\nexample, individual address information is randomly altered to provide pri-\nvacy protection or imprecisions are induced by geocoding processes and\nmeasurement devices. Our objective is to develop a method of inference based on\nthe composite likelihood that over- comes the inherent computational limits of\nthe full likelihood method as set out in Fanshawe and Diggle (2011). Through a\nsimulation study, we then compare the performance of our proposed approach with\nan N-weighted least squares estimation procedure, based on a corrected version\nof the empirical variogram. Our results indicate that the composite-likelihood\napproach outper- forms the latter, leading to smaller root-mean-square-errors\nin the parameter estimates. Finally, we illustrate an application of our method\nto analyse data on malnutrition from a Demographic and Health Survey conducted\nin Senegal in 2011, where locations were randomly perturbed to protect the\nprivacy of respondents.\n", "title": "Geostatistical inference in the presence of geomasking: a composite-likelihood approach" }
null
null
null
null
true
null
9693
null
Default
null
null
null
{ "abstract": " In kernel methods, the median heuristic has been widely used as a way of\nsetting the bandwidth of RBF kernels. While its empirical performances make it\na safe choice under many circumstances, there is little theoretical\nunderstanding of why this is the case. Our aim in this paper is to advance our\nunderstanding of the median heuristic by focusing on the setting of kernel\ntwo-sample test. We collect new findings that may be of interest for both\ntheoreticians and practitioners. In theory, we provide a convergence analysis\nthat shows the asymptotic normality of the bandwidth chosen by the median\nheuristic in the setting of kernel two-sample test. Systematic empirical\ninvestigations are also conducted in simple settings, comparing the\nperformances based on the bandwidths chosen by the median heuristic and those\nby the maximization of test power.\n", "title": "Large sample analysis of the median heuristic" }
null
null
null
null
true
null
9694
null
Default
null
null
null
{ "abstract": " Similar to most of the real world data, the ubiquitous presence of\nnon-stationarities in the EEG signals significantly perturb the feature\ndistribution thus deteriorating the performance of Brain Computer Interface. In\nthis letter, a novel method is proposed based on Joint Approximate\nDiagonalization (JAD) to optimize stationarity for multiclass motor imagery\nBrain Computer Interface (BCI) in an information theoretic framework.\nSpecifically, in the proposed method, we estimate the subspace which optimizes\nthe discriminability between the classes and simultaneously preserve\nstationarity within the motor imagery classes. We determine the subspace for\nthe proposed approach through optimization using gradient descent on an\northogonal manifold. The performance of the proposed stationarity enforcing\nalgorithm is compared to that of baseline One-Versus-Rest (OVR)-CSP and JAD on\npublicly available BCI competition IV dataset IIa. Results show that an\nimprovement in average classification accuracies across the subjects over the\nbaseline algorithms and thus essence of alleviating within session\nnon-stationarities.\n", "title": "Divergence Framework for EEG based Multiclass Motor Imagery Brain Computer Interface" }
null
null
null
null
true
null
9695
null
Default
null
null
null
{ "abstract": " Principal component analysis (PCA) is one of the most powerful tools in\nmachine learning. The simplest method for PCA, the power iteration, requires\n$\\mathcal O(1/\\Delta)$ full-data passes to recover the principal component of a\nmatrix with eigen-gap $\\Delta$. Lanczos, a significantly more complex method,\nachieves an accelerated rate of $\\mathcal O(1/\\sqrt{\\Delta})$ passes. Modern\napplications, however, motivate methods that only ingest a subset of available\ndata, known as the stochastic setting. In the online stochastic setting, simple\nalgorithms like Oja's iteration achieve the optimal sample complexity $\\mathcal\nO(\\sigma^2/\\Delta^2)$. Unfortunately, they are fully sequential, and also\nrequire $\\mathcal O(\\sigma^2/\\Delta^2)$ iterations, far from the $\\mathcal\nO(1/\\sqrt{\\Delta})$ rate of Lanczos. We propose a simple variant of the power\niteration with an added momentum term, that achieves both the optimal sample\nand iteration complexity. In the full-pass setting, standard analysis shows\nthat momentum achieves the accelerated rate, $\\mathcal O(1/\\sqrt{\\Delta})$. We\ndemonstrate empirically that naively applying momentum to a stochastic method,\ndoes not result in acceleration. We perform a novel, tight variance analysis\nthat reveals the \"breaking-point variance\" beyond which this acceleration does\nnot occur. By combining this insight with modern variance reduction techniques,\nwe construct stochastic PCA algorithms, for the online and offline setting,\nthat achieve an accelerated iteration complexity $\\mathcal O(1/\\sqrt{\\Delta})$.\nDue to the embarassingly parallel nature of our methods, this acceleration\ntranslates directly to wall-clock time if deployed in a parallel environment.\nOur approach is very general, and applies to many non-convex optimization\nproblems that can now be accelerated using the same technique.\n", "title": "Accelerated Stochastic Power Iteration" }
null
null
null
null
true
null
9696
null
Default
null
null
null
{ "abstract": " We present the results of three-dimensional (3D) ideal magnetohydrodynamics\n(MHD) simulations on the dynamics of a perpendicularly inhomogeneous plasma\ndisturbed by propagating Alfvénic waves. Simpler versions of this scenario\nhave been extensively studied as the phenomenon of phase mixing. We show that,\nby generalizing the textbook version of phase mixing, interesting phenomena are\nobtained, such as turbulence-like behavior and complex current-sheet structure,\na novelty in longitudinally homogeneous plasma excited by unidirectionally\npropagating waves. This constitutes an important finding for turbulence-related\nphenomena in astrophysics in general, relaxing the conditions that have to be\nfulfilled in order to generate turbulent behavior.\n", "title": "Generalized phase mixing: Turbulence-like behaviour from unidirectionally propagating MHD waves" }
null
null
null
null
true
null
9697
null
Default
null
null
null
{ "abstract": " Along with the advance of opinion mining techniques, public mood has been\nfound to be a key element for stock market prediction. However, how market\nparticipants' behavior is affected by public mood has been rarely discussed.\nConsequently, there has been little progress in leveraging public mood for the\nasset allocation problem, which is preferred in a trusted and interpretable\nway. In order to address the issue of incorporating public mood analyzed from\nsocial media, we propose to formalize public mood into market views, because\nmarket views can be integrated into the modern portfolio theory. In our\nframework, the optimal market views will maximize returns in each period with a\nBayesian asset allocation model. We train two neural models to generate the\nmarket views, and benchmark the model performance on other popular asset\nallocation strategies. Our experimental results suggest that the formalization\nof market views significantly increases the profitability (5% to 10% annually)\nof the simulated portfolio at a given risk level.\n", "title": "Discovering Bayesian Market Views for Intelligent Asset Allocation" }
null
null
[ "Quantitative Finance" ]
null
true
null
9698
null
Validated
null
null
null
{ "abstract": " In this article we consider static Bayesian parameter estimation for\npartially observed diffusions that are discretely observed. We work under the\nassumption that one must resort to discretizing the underlying diffusion\nprocess, for instance using the Euler-Maruyama method. Given this assumption,\nwe show how one can use Markov chain Monte Carlo (MCMC) and particularly\nparticle MCMC [Andrieu, C., Doucet, A. and Holenstein, R. (2010). Particle\nMarkov chain Monte Carlo methods (with discussion). J. R. Statist. Soc. Ser. B,\n72, 269--342] to implement a new approximation of the multilevel (ML) Monte\nCarlo (MC) collapsing sum identity. Our approach comprises constructing an\napproximate coupling of the posterior density of the joint distribution over\nparameter and hidden variables at two different discretization levels and then\ncorrecting by an importance sampling method. The variance of the weights are\nindependent of the length of the observed data set. The utility of such a\nmethod is that, for a prescribed level of mean square error, the cost of this\nMLMC method is provably less than i.i.d. sampling from the posterior associated\nto the most precise discretization. However the method here comprises using\nonly known and efficient simulation methodologies. The theoretical results are\nillustrated by inference of the parameters of two prototypical processes given\nnoisy partial observations of the process: the first is an Ornstein Uhlenbeck\nprocess and the second is a more general Langevin equation.\n", "title": "Bayesian Static Parameter Estimation for Partially Observed Diffusions via Multilevel Monte Carlo" }
null
null
null
null
true
null
9699
null
Default
null
null
null
{ "abstract": " In many applications, the interdependencies among a set of $N$ time series\n$\\{ x_{nk}, k>0 \\}_{n=1}^{N}$ are well captured by a graph or network $G$. The\nnetwork itself may change over time as well (i.e., as $G_k$). We expect the\nnetwork changes to be at a much slower rate than that of the time series. This\npaper introduces eigennetworks, networks that are building blocks to compose\nthe actual networks $G_k$ capturing the dependencies among the time series.\nThese eigennetworks can be estimated by first learning the time series of\ngraphs $G_k$ from the data, followed by a Principal Network Analysis procedure.\nAlgorithms for learning both the original time series of graphs and the\neigennetworks are presented and discussed. Experiments on simulated and real\ntime series data demonstrate the performance of the learning and the\ninterpretation of the eigennetworks.\n", "title": "EigenNetworks" }
null
null
null
null
true
null
9700
null
Default
null
null