Dataset schema (field: type):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (length 1-5)
metadata: null
status: string (2 classes: "Default", "Validated")
event_timestamp: null
metrics: null

text: null
{ "abstract": " Private record linkage (PRL) is the problem of identifying pairs of records\nthat are similar as per an input matching rule from databases held by two\nparties that do not trust one another. We identify three key desiderata that a\nPRL solution must ensure: 1) perfect precision and high recall of matching\npairs, 2) a proof of end-to-end privacy, and 3) communication and computational\ncosts that scale subquadratically in the number of input records. We show that\nall of the existing solutions for PRL - including secure 2-party computation\n(S2PC), and their variants that use non-private or differentially private (DP)\nblocking to ensure subquadratic cost - violate at least one of the three\ndesiderata. In particular, S2PC techniques guarantee end-to-end privacy but\nhave either low recall or quadratic cost. In contrast, no end-to-end privacy\nguarantee has been formalized for solutions that achieve subquadratic cost.\nThis is true even for solutions that compose DP and S2PC: DP does not permit\nthe release of any exact information about the databases, while S2PC algorithms\nfor PRL allow the release of matching records.\nIn light of this deficiency, we propose a novel privacy model, called output\nconstrained differential privacy, that shares the strong privacy protection of\nDP, but allows for the truthful release of the output of a certain function\napplied to the data. We apply this to PRL, and show that protocols satisfying\nthis privacy model permit the disclosure of the true matching records, but\ntheir execution is insensitive to the presence or absence of a single\nnon-matching record. We find that prior work that combine DP and S2PC\ntechniques even fail to satisfy this end-to-end privacy model. 
Hence, we\ndevelop novel protocols that provably achieve this end-to-end privacy\nguarantee, together with the other two desiderata of PRL.\n", "title": "Composing Differential Privacy and Secure Computation: A case study on scaling private record linkage" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1801
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Despite the recent popularity of deep generative state space models, few\ncomparisons have been made between network architectures and the inference\nsteps of the Bayesian filtering framework -- with most models simultaneously\napproximating both state transition and update steps with a single recurrent\nneural network (RNN). In this paper, we introduce the Recurrent Neural Filter\n(RNF), a novel recurrent variational autoencoder architecture that learns\ndistinct representations for each Bayesian filtering step, captured by a series\nof encoders and decoders. Testing this on three real-world time series\ndatasets, we demonstrate that decoupling representations not only improves the\naccuracy of one-step-ahead forecasts while providing realistic uncertainty\nestimates, but also facilitates multistep prediction through the separation of\nencoder stages.\n", "title": "Recurrent Neural Filters: Learning Independent Bayesian Filtering Steps for Time Series Prediction" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1802
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We consider the optimal coverage problem where a multi-agent network is\ndeployed in an environment with obstacles to maximize a joint event detection\nprobability. The objective function of this problem is non-convex and no global\noptimum is guaranteed by gradient-based algorithms developed to date. We first\nshow that the objective function is monotone submodular, a class of functions\nfor which a simple greedy algorithm is known to be within 0.63 of the optimal\nsolution. We then derive two tighter lower bounds by exploiting the curvature\ninformation (total curvature and elemental curvature) of the objective\nfunction. We further show that the tightness of these lower bounds is\ncomplementary with respect to the sensing capabilities of the agents. The\ngreedy algorithm solution can be subsequently used as an initial point for a\ngradient-based algorithm to obtain solutions even closer to the global optimum.\nSimulation results show that this approach leads to significantly better\nperformance relative to previously used algorithms.\n", "title": "A Submodularity-Based Approach for Multi-Agent Optimal Coverage Problems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1803
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " A novel and scalable geometric multi-level algorithm is presented for the\nnumerical solution of elliptic partial differential equations, specially\ndesigned to run with high occupancy of streaming processors inside Graphics\nProcessing Units(GPUs). The algorithm consists of iterative, superposed\noperations on a single grid, and it is composed of two simple full-grid\nroutines: a restriction and a coarsened interpolation-relaxation. The\nrestriction is used to collect sources using recursive coarsened averages, and\nthe interpolation-relaxation simultaneously applies coarsened finite-difference\noperators and interpolations. The routines are scheduled in a saw-like refining\ncycle. Convergence to machine precision is achieved repeating the full cycle\nusing accumulated residuals and successively collecting the solution. Its total\nnumber of operations scale linearly with the number of nodes. It provides an\nattractive fast solver for Boundary Value Problems (BVPs), specially for\nsimulations running entirely in the GPU. Applications shown in this work\ninclude the deformation of two-dimensional grids, the computation of\nthree-dimensional streamlines for a singular trifoil-knot vortex and the\ncalculation of three-dimensional electric potentials in heterogeneous\ndielectric media.\n", "title": "A GPU-based Multi-level Algorithm for Boundary Value Problems" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1804
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Recently, the k-induction algorithm has proven to be a successful approach\nfor both finding bugs and proving correctness. However, since the algorithm is\nan incremental approach, it might waste resources trying to prove incorrect\nprograms. In this paper, we propose to extend the k-induction algorithm in\norder to shorten the number of steps required to find a property violation. We\nconvert the algorithm into a meet-in-the-middle bidirectional search algorithm,\nusing the counterexample produced from over-approximating the program. The\npreliminary results show that the number of steps required to find a property\nviolation is reduced to $\\lfloor\\frac{k}{2} + 1\\rfloor$ and the verification\ntime for programs with large state space is reduced considerably.\n", "title": "Counterexample-Guided k-Induction Verification for Fast Bug Detection" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1805
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Gaussian process (GP) regression has been widely used in supervised machine\nlearning due to its flexibility and inherent ability to describe uncertainty in\nfunction estimation. In the context of control, it is seeing increasing use for\nmodeling of nonlinear dynamical systems from data, as it allows the direct\nassessment of residual model uncertainty. We present a model predictive control\n(MPC) approach that integrates a nominal system with an additive nonlinear part\nof the dynamics modeled as a GP. Approximation techniques for propagating the\nstate distribution are reviewed and we describe a principled way of formulating\nthe chance constrained MPC problem, which takes into account residual\nuncertainties provided by the GP model to enable cautious control. Using\nadditional approximations for efficient computation, we finally demonstrate the\napproach in a simulation example, as well as in a hardware implementation for\nautonomous racing of remote controlled race cars, highlighting improvements\nwith regard to both performance and safety over a nominal controller.\n", "title": "Cautious Model Predictive Control using Gaussian Process Regression" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1806
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Using movement primitive libraries is an effective means to enable robots to\nsolve more complex tasks. In order to build these movement libraries, current\nalgorithms require a prior segmentation of the demonstration trajectories. A\npromising approach is to model the trajectory as being generated by a set of\nSwitching Linear Dynamical Systems and inferring a meaningful segmentation by\ninspecting the transition points characterized by the switching dynamics. With\nrespect to the learning, a nonparametric Bayesian approach is employed\nutilizing a Gibbs sampler.\n", "title": "Probabilistic Trajectory Segmentation by Means of Hierarchical Dirichlet Process Switching Linear Dynamical Systems" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1807
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Radio astronomy observational facilities are under constant upgradation and\ndevelopment to achieve better capabilities including increasing the time and\nfrequency resolutions of the recorded data, and increasing the receiving and\nrecording bandwidth. As only a limited spectrum resource has been allocated to\nradio astronomy by the International Telecommunication Union, this results in\nthe radio observational instrumentation being inevitably exposed to undesirable\nradio frequency interference (RFI) signals which originate mainly from\nterrestrial human activity and are becoming stronger with time. RFIs degrade\nthe quality of astronomical data and even lead to data loss. The impact of RFIs\non scientific outcome is becoming progressively difficult to manage. In this\narticle, we motivate the requirement for RFI mitigation, and review the RFI\ncharacteristics, mitigation techniques and strategies. Mitigation strategies\nadopted at some representative observatories, telescopes and arrays are also\nintroduced. We also discuss and present advantages and shortcomings of the four\nclasses of RFI mitigation strategies, applicable at the connected causal\nstages: preventive, pre-detection, pre-correlation and post-correlation. The\nproper identification and flagging of RFI is key to the reduction of data loss\nand improvement in data quality, and is also the ultimate goal of developing\nRFI mitigation techniques. This can be achieved through a strategy involving a\ncombination of the discussed techniques in stages. Recent advances in high\nspeed digital signal processing and high performance computing allow for\nperforming RFI excision of large data volumes generated from large telescopes\nor arrays in both real time and offline modes, aiding the proposed strategy.\n", "title": "Radio Frequency Interference Mitigation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1808
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Data quality of Phasor Measurement Unit (PMU) is receiving increasing\nattention as it has been identified as one of the limiting factors that affect\nmany wide-area measurement system (WAMS) based applications. In general,\nexisting PMU calibration methods include offline testing and model based\napproaches. However, in practice, the effectiveness of both is limited due to\nthe very strong assumptions employed. This paper presents a novel framework for\nonline bias error detection and calibration of PMU measurement using\ndensity-based spatial clustering of applications with noise (DBSCAN) based on\nmuch relaxed assumptions. With a new problem formulation, the proposed data\nmining based methodology is applicable across a wide spectrum of practical\nconditions and one side-product of it is more accurate transmission line\nparameters for EMS database and protective relay settings. Case studies\ndemonstrate the effectiveness of the proposed approach.\n", "title": "Online Calibration of Phasor Measurement Unit Using Density-Based Spatial Clustering" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1809
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We provide a detailed (and fully rigorous) derivation of several fundamental\nproperties of bounded weak solutions to initial-value problems for general\nconservative 2nd-order parabolic equations with p-Laplacian diffusion and\n(arbitrary) bounded and integrable initial data.\n", "title": "Some basic properties of bounded solutions of parabolic equations with p-Laplacian diffusion" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1810
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We address the controversy over the proximity effect between topological\nmaterials and high T$_{c}$ superconductors. Junctions are produced between\nBi$_{2}$Sr$_{2}$CaCu$_{2}$O$_{8+\\delta}$ and materials with different Fermi\nsurfaces (Bi$_{2}$Te$_{3}$ \\& graphite). Both cases reveal tunneling spectra\nconsistent with Andreev reflection. This is confirmed by magnetic field that\nshifts features via the Doppler effect. This is modeled with a single parameter\nthat accounts for tunneling into a screening supercurrent. Thus the tunneling\ninvolves Cooper pairs crossing the heterostructure, showing the Fermi surface\nmis-match does not hinder the ability to form transparent interfaces, which is\naccounted for by the extended Brillouin zone and different lattice symmetries.\n", "title": "Andreev Reflection without Fermi surface alignment in High T$_{c}$-Topological heterostructures" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1811
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " This paper presents a novel method for structural data recognition using a\nlarge number of graph models. In general, prevalent methods for structural data\nrecognition have two shortcomings: 1) Only a single model is used to capture\nstructural variation. 2) Naive recognition methods are used, such as the\nnearest neighbor method. In this paper, we propose strengthening the\nrecognition performance of these models as well as their ability to capture\nstructural variation. The proposed method constructs a large number of graph\nmodels and trains decision trees using the models. This paper makes two main\ncontributions. The first is a novel graph model that can quickly perform\ncalculations, which allows us to construct several models in a feasible amount\nof time. The second contribution is a novel approach to structural data\nrecognition: graph model boosting. Comprehensive structural variations can be\ncaptured with a large number of graph models constructed in a boosting\nframework, and a sophisticated classifier can be formed by aggregating the\ndecision trees. Consequently, we can carry out structural data recognition with\npowerful recognition capability in the face of comprehensive structural\nvariation. The experiments shows that the proposed method achieves impressive\nresults and outperforms existing methods on datasets of IAM graph database\nrepository.\n", "title": "Structural Data Recognition with Graph Model Boosting" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1812
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We propose to introduce the concept of exceptional points in intermediate\ncourses on mathematics and classical mechanics by means of simple textbook\nexamples. The first one is an ordinary second-order differential equation with\nconstant coefficients. The second one is the well known damped harmonic\noscillator. They enable one to connect the occurrence of linearly dependent\nexponential solutions with a defective matrix that cannot be diagonalized but\ncan be transformed into a Jordan canonical form.\n", "title": "Exceptional points in two simple textbook examples" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1813
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " In this paper we consider a location model of the form $Y = m(X) +\n\\varepsilon$, where $m(\\cdot)$ is the unknown regression function, the error\n$\\varepsilon$ is independent of the $p$-dimensional covariate $X$ and\n$E(\\varepsilon)=0$. Given i.i.d. data $(X_1,Y_1),\\ldots,(X_n,Y_n)$ and given an\nestimator $\\hat m(\\cdot)$ of the function $m(\\cdot)$ (which can be parametric\nor nonparametric of nature), we estimate the distribution of the error term\n$\\varepsilon$ by the empirical distribution of the residuals $Y_i-\\hat m(X_i)$,\n$i=1,\\ldots,n$. To approximate the distribution of this estimator, Koul and\nLahiri (1994) and Neumeyer (2008, 2009) proposed bootstrap procedures, based on\nsmoothing the residuals either before or after drawing bootstrap samples. So\nfar it has been an open question whether a classical non-smooth residual\nbootstrap is asymptotically valid in this context. In this paper we solve this\nopen problem, and show that the non-smooth residual bootstrap is consistent. We\nillustrate this theoretical result by means of simulations, that show the\naccuracy of this bootstrap procedure for various models, testing procedures and\nsample sizes.\n", "title": "Bootstrap of residual processes in regression: to smooth or not to smooth ?" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1814
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " We show that the Poisson centre of truncated maximal parabolic subalgebras of\na simple Lie algebra of type B, D and E_6 is a polynomial algebra.\nIn roughly half of the cases the polynomiality of the Poisson centre was\nalready known by a completely different method.\nFor the rest of the cases, our approach is to construct an algebraic slice in\nthe sense of Kostant given by an adapted pair and the computation of an\nimproved upper bound for the Poisson centre.\n", "title": "Polynomiality for the Poisson centre of truncated maximal parabolic subalgebras" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1815
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " Motivated by the question of whether the recently introduced Reduced Cutset\nCoding (RCC) offers rate-complexity performance benefits over conventional\ncontext-based conditional coding for sources with two-dimensional Markov\nstructure, this paper compares several row-centric coding strategies that vary\nin the amount of conditioning as well as whether a model or an empirical table\nis used in the encoding of blocks of rows. The conclusion is that, at least for\nsources exhibiting low-order correlations, 1-sided model-based conditional\ncoding is superior to the method of RCC for a given constraint on complexity,\nand conventional context-based conditional coding is nearly as good as the\n1-sided model-based coding.\n", "title": "Row-Centric Lossless Compression of Markov Images" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1816
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Recent years have seen growing interest in the streaming instability as a\ncandidate mechanism to produce planetesimals. However, these investigations\nhave been limited to small-scale simulations. We now present the results of a\nglobal protoplanetary disk evolution model that incorporates planetesimal\nformation by the streaming instability, along with viscous accretion,\nphotoevaporation by EUV, FUV, and X-ray photons, dust evolution, the water ice\nline, and stratified turbulence. Our simulations produce massive (60-130\n$M_\\oplus$) planetesimal belts beyond 100 au and up to $\\sim 20 M_\\oplus$ of\nplanetesimals in the middle regions (3-100 au). Our most comprehensive model\nforms 8 $M_\\oplus$ of planetesimals inside 3 au, where they can give rise to\nterrestrial planets. The planetesimal mass formed in the inner disk depends\ncritically on the timing of the formation of an inner cavity in the disk by\nhigh-energy photons. Our results show that the combination of photoevaporation\nand the streaming instability are efficient at converting the solid component\nof protoplanetary disks into planetesimals. Our model, however, does not form\nenough early planetesimals in the inner and middle regions of the disk to give\nrise to giant planets and super-Earths with gaseous envelopes. Additional\nprocesses such as particle pileups and mass loss driven by MHD winds may be\nneeded to drive the formation of early planetesimal generations in the planet\nforming regions of protoplanetary disks.\n", "title": "Planetesimal formation by the streaming instability in a photoevaporating disk" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1817
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " The metal-to-metal clearances of a steam turbine during full or part load\noperation are among the main drivers of efficiency. The requirement to add\nclearances is driven by a number of factors including the relative movements of\nthe steam turbine shell and rotor during transient conditions such as startup\nand shutdown. This paper includes a description of a control algorithm to\nmanage external heating blankets for the thermal control of the shell\ndeflections during turbine shutdown. The proposed method is tolerant of changes\nin the heat loss characteristics of the system as well as simultaneous\ncomponent failures.\n", "title": "Fault Tolerant Thermal Control of Steam Turbine Shell Deflections" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1818
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Summary statistics of genome-wide association studies (GWAS) teach causal\nrelationship between millions of genetic markers and tens and thousands of\nphenotypes. However, underlying biological mechanisms are yet to be elucidated.\nWe can achieve necessary interpretation of GWAS in a causal mediation\nframework, looking to establish a sparse set of mediators between genetic and\ndownstream variables, but there are several challenges. Unlike existing methods\nrely on strong and unrealistic assumptions, we tackle practical challenges\nwithin a principled summary-based causal inference framework. We analyzed the\nproposed methods in extensive simulations generated from real-world genetic\ndata. We demonstrated only our approach can accurately redeem causal genes,\neven without knowing actual individual-level data, despite the presence of\ncompeting non-causal trails.\n", "title": "Causal Mediation Analysis Leveraging Multiple Types of Summary Statistics Data" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1819
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Biological networks are a very convenient modelling and visualisation tool to\ndiscover knowledge from modern high-throughput genomics and postgenomics data\nsets. Indeed, biological entities are not isolated, but are components of\ncomplex multi-level systems. We go one step further and advocate for the\nconsideration of causal representations of the interactions in living\nsystems.We present the causal formalism and bring it out in the context of\nbiological networks, when the data is observational. We also discuss its\nability to decipher the causal information flow as observed in gene expression.\nWe also illustrate our exploration by experiments on small simulated networks\nas well as on a real biological data set.\n", "title": "Causal Queries from Observational Data in Biological Systems via Bayesian Networks: An Empirical Study in Small Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1820
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Bytewise approximate matching algorithms have in recent years shown\nsignificant promise in de- tecting files that are similar at the byte level.\nThis is very useful for digital forensic investigators, who are regularly faced\nwith the problem of searching through a seized device for pertinent data. A\ncommon scenario is where an investigator is in possession of a collection of\n\"known-illegal\" files (e.g. a collection of child abuse material) and wishes to\nfind whether copies of these are stored on the seized device. Approximate\nmatching addresses shortcomings in traditional hashing, which can only find\nidentical files, by also being able to deal with cases of merged files,\nembedded files, partial files, or if a file has been changed in any way.\nMost approximate matching algorithms work by comparing pairs of files, which\nis not a scalable approach when faced with large corpora. This paper\ndemonstrates the effectiveness of using a \"Hierarchical Bloom Filter Tree\"\n(HBFT) data structure to reduce the running time of\ncollection-against-collection matching, with a specific focus on the MRSH-v2\nalgorithm. Three experiments are discussed, which explore the effects of\ndifferent configurations of HBFTs. The proposed approach dramatically reduces\nthe number of pairwise comparisons required, and demonstrates substantial speed\ngains, while maintaining effectiveness.\n", "title": "Hierarchical Bloom Filter Trees for Approximate Matching" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1821
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " GANDALF is a new hydrodynamics and N-body dynamics code designed for\ninvestigating planet formation, star formation and star cluster problems.\nGANDALF is written in C++, parallelised with both OpenMP and MPI and contains a\npython library for analysis and visualisation. The code has been written with a\nfully object-oriented approach to easily allow user-defined implementations of\nphysics modules or other algorithms. The code currently contains\nimplementations of Smoothed Particle Hydrodynamics, Meshless Finite-Volume and\ncollisional N-body schemes, but can easily be adapted to include additional\nparticle schemes. We present in this paper the details of its implementation,\nresults from the test suite, serial and parallel performance results and\ndiscuss the planned future development. The code is freely available as an open\nsource project on the code-hosting website github at\nthis https URL and is available under the GPLv2\nlicense.\n", "title": "GANDALF - Graphical Astrophysics code for N-body Dynamics And Lagrangian Fluids" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1822
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " We consider Boltzmann-Gibbs measures associated with log-correlated Gaussian\nfields as potentials and study their multifractal properties which exhibit\nphase transitions. In particular, the pre-freezing and freezing phenomena of\nthe annealed exponent, predicted by Fyodorov using a modified\nreplica-symmetry-breaking ansatz, are generalised to arbitrary dimension and\nverified using results from Gaussian multiplicative chaos theory.\n", "title": "Pre-freezing transition in Boltzmann-Gibbs measures associated with log-correlated fields" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1823
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " The design of good heuristics or approximation algorithms for NP-hard\ncombinatorial optimization problems often requires significant specialized\nknowledge and trial-and-error. Can we automate this challenging, tedious\nprocess, and learn the algorithms instead? In many real-world applications, it\nis typically the case that the same optimization problem is solved again and\nagain on a regular basis, maintaining the same problem structure but differing\nin the data. This provides an opportunity for learning heuristic algorithms\nthat exploit the structure of such recurring problems. In this paper, we\npropose a unique combination of reinforcement learning and graph embedding to\naddress this challenge. The learned greedy policy behaves like a meta-algorithm\nthat incrementally constructs a solution, and the action is determined by the\noutput of a graph embedding network capturing the current state of the\nsolution. We show that our framework can be applied to a diverse range of\noptimization problems over graphs, and learns effective algorithms for the\nMinimum Vertex Cover, Maximum Cut and Traveling Salesman problems.\n", "title": "Learning Combinatorial Optimization Algorithms over Graphs" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1824
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " This paper studies the optimal extraction policy of an oil field as well as\nthe efficient taxation of the revenues generated. Taking into account the fact\nthat the oil price in worldwide commodity markets fluctuates randomly following\nglobal and seasonal macroeconomic parameters, we model the evolution of the oil\nprice as a mean reverting regime-switching jump diffusion process. Given that\noil producing countries rely on oil sale revenues as well as taxes levied on\noil companies for a good portion of the revenue side of their budgets, we\nformulate this problem as a differential game where the two players are the\nmining company whose aim is to maximize the revenues generated from its\nextracting activities and the government agency in charge of regulating and\ntaxing natural resources. We prove the existence of a Nash equilibrium and the\nconvergence of an approximating scheme for the value functions. Furthermore,\noptimal extraction and fiscal policies that should be applied when the\nequilibrium is reached are derived.A numerical example is presented to\nillustrate these results.\n", "title": "Optimal Oil Production and Taxation in Presence of Global Disruptions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1825
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Scattering for the mass-critical fractional Schrödinger equation with a\ncubic Hartree-type nonlinearity for initial data in a small ball in the\nscale-invariant space of three-dimensional radial and square-integrable initial\ndata is established. For this, we prove a bilinear estimate for free solutions\nand extend it to perturbations of bounded quadratic variation. This result is\nshown to be sharp by proving the unboundedness of a third order derivative of\nthe flow map in the super-critical range.\n", "title": "Critical well-posedness and scattering results for fractional Hartree-type equations" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 1826
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
{ "abstract": " Developer preferences, language capabilities and the persistence of older\nlanguages contribute to the trend that large software codebases are often\nmultilingual, that is, written in more than one computer language. While\ndevelopers can leverage monolingual software development tools to build\nsoftware components, companies are faced with the problem of managing the\nresultant large, multilingual codebases to address issues with security,\nefficiency, and quality metrics. The key challenge is to address the opaque\nnature of the language interoperability interface: one language calling\nprocedures in a second (which may call a third, or even back to the first),\nresulting in a potentially tangled, inefficient and insecure codebase. An\narchitecture is proposed for lightweight static analysis of large multilingual\ncodebases: the MLSA architecture. Its modular and table-oriented structure\naddresses the open-ended nature of multiple languages and language\ninteroperability APIs. We focus here as an application on the construction of\ncall-graphs that capture both inter-language and intra-language calls. The\nalgorithms for extracting multilingual call-graphs from codebases are\npresented, and several examples of multilingual software engineering analysis\nare discussed. The state of the implementation and testing of MLSA is\npresented, and the implications for future work are discussed.\n", "title": "Lightweight Multilingual Software Analysis" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 1827
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
{ "abstract": " The demand for single photon sources at $\\lambda~=~1.54~\\mu$m, which follows\nfrom the consistent development of quantum networks based on commercial optical\nfibers, makes Er:O$_x$ centers in Si still a viable resource thanks to the\noptical transition of $Er^{3+}~:~^4I_{13/2}~\\rightarrow~^4I_{15/2}$. Yet, to\ndate, the implementation of such system remains hindered by its extremely low\nemission rate. In this Letter, we explore the room-temperature\nphotoluminescence (PL) at the telecomm wavelength of very low implantation\ndoses of $Er:O_x$ in $Si$. The emitted photons, excited by a $\\lambda~=~792~nm$\nlaser in both large areas and confined dots of diameter down to $5~\\mu$m, are\ncollected by an inverted confocal microscope. The lower-bound number of\ndetectable emission centers within our diffraction-limited illumination spot is\nestimated to be down to about 10$^4$, corresponding to an emission rate per\nindividual ion of about $4~\\times~10^{3}$ photons/s.\n", "title": "Room-temperature 1.54 $μ$m photoluminescence of Er:O$_x$ centers at extremely low concentration in silicon" }
null
null
null
null
true
null
1828
null
Default
null
null
null
{ "abstract": " Thanks to its closed-form solution, the least squares support vector machine\n(LSSVM) has been widely used for classification and regression problems, with\nperformance comparable to other types of SVMs. However, LSSVM has two\ndrawbacks: sensitivity to outliers and lack of sparseness. Robust LSSVM (R-LSSVM)\novercomes the first partly via a nonconvex truncated loss function, but the\ncurrent algorithms for R-LSSVM with the dense solution are faced with the\nsecond drawback and are inefficient for training large-scale problems. In this\npaper, we interpret the robustness of R-LSSVM from a re-weighted viewpoint and\ngive a primal R-LSSVM by the representer theorem. The new model may have a sparse\nsolution if the corresponding kernel matrix has low rank. Then, approximating\nthe kernel matrix by a low-rank matrix and smoothing the loss function by an\nentropy penalty function, we propose a convergent sparse R-LSSVM (SR-LSSVM)\nalgorithm to achieve the sparse solution of primal R-LSSVM, which overcomes both\ndrawbacks of LSSVM simultaneously. The proposed algorithm has lower complexity\nthan the existing algorithms and is very efficient for training large-scale\nproblems. Many experimental results illustrate that SR-LSSVM can achieve better\nor comparable performance with less training time than related algorithms,\nespecially for training large-scale problems.\n", "title": "Sparse Algorithm for Robust LSSVM in Primal Space" }
null
null
null
null
true
null
1829
null
Default
null
null
null
{ "abstract": " We consider the networked multi-agent reinforcement learning (MARL) problem\nin a fully decentralized setting, where agents learn to coordinate to achieve\njoint success. This problem is widely encountered in many areas including\ntraffic control, distributed control, and smart grids. We assume that the\nreward function for each agent can be different and observed only locally by\nthe agent itself. Furthermore, each agent is located at a node of a\ncommunication network and can exchange information only with its neighbors.\nUsing softmax temporal consistency and a decentralized optimization method, we\nobtain a principled and data-efficient iterative algorithm. In the first step\nof each iteration, an agent computes its local policy and value gradients and\nthen updates only policy parameters. In the second step, the agent propagates\nto its neighbors the messages based on its value function and then updates its\nown value function. Hence we name the algorithm value propagation. We prove a\nnon-asymptotic convergence rate 1/T with the nonlinear function approximation.\nTo the best of our knowledge, it is the first MARL algorithm with a convergence\nguarantee in the control, off-policy and non-linear function approximation\nsetting. We empirically demonstrate the effectiveness of our approach in\nexperiments.\n", "title": "Value Propagation for Decentralized Networked Deep Multi-agent Reinforcement Learning" }
null
null
null
null
true
null
1830
null
Default
null
null
null
{ "abstract": " Non-interactive Local Differential Privacy (LDP) requires data analysts to\ncollect data from users through a noisy channel at once. In this paper, we extend\nthe frontiers of non-interactive LDP learning and estimation in several\nrespects. For learning with smooth generalized linear losses, we propose an\napproximate stochastic gradient oracle estimated from a non-interactive LDP\nchannel, using Chebyshev expansion. Combined with inexact gradient methods, we\nobtain an efficient algorithm with a quasi-polynomial sample complexity bound.\nFor the high-dimensional setting, we discover that under an $\\ell_2$-norm assumption\non data points, high-dimensional sparse linear regression and mean estimation\ncan be achieved with logarithmic dependence on the dimension, using random\nprojection and approximate recovery. We also extend our methods to Kernel Ridge\nRegression. Our work is the first that makes learning and estimation\npossible for a broad range of learning tasks under the non-interactive LDP model.\n", "title": "Collect at Once, Use Effectively: Making Non-interactive Locally Private Learning Possible" }
null
null
null
null
true
null
1831
null
Default
null
null
null
{ "abstract": " Statistical learning relies upon data sampled from a distribution, and we\nusually do not care what actually generated it in the first place. From the\npoint of view of causal modeling, the structure of each distribution is induced\nby physical mechanisms that give rise to dependences between observables.\nMechanisms, however, can be meaningful autonomous modules of generative models\nthat make sense beyond a particular entailed data distribution, lending\nthemselves to transfer between problems. We develop an algorithm to recover a\nset of independent (inverse) mechanisms from a set of transformed data points.\nThe approach is unsupervised and based on a set of experts that compete for\ndata generated by the mechanisms, driving specialization. We analyze the\nproposed method in a series of experiments on image data. Each expert learns to\nmap a subset of the transformed data back to a reference distribution. The\nlearned mechanisms generalize to novel domains. We discuss implications for\ntransfer learning and links to recent trends in generative modeling.\n", "title": "Learning Independent Causal Mechanisms" }
null
null
null
null
true
null
1832
null
Default
null
null
null
{ "abstract": " This work is a technical approach to modeling the nature, design, belief\nimpact and containment of false information in multi-agent networks. We present a\nBayesian mathematical model for source information and viewer belief, and how\nthe former impacts the latter in a media (network) of broadcasters and viewers.\nGiven the proposed model, we study how a particular piece of information (true or\nfalse) can be optimally designed into a report, so that on average it conveys the\nmost of the original intended information to the viewers of the network.\nConsequently, the model allows us to study the susceptibility of a particular group\nof viewers to false information, as a function of statistical metrics of\ntheir prior beliefs (e.g. bias, hesitation, open-mindedness, credibility\nassessment, etc.). In addition, based on the same model, we can study false\ninformation \"containment\" strategies imposed by network administrators.\nSpecifically, we study a credibility assessment strategy, where every\ndisseminated report must be within a certain distance of the truth. We study\nthe trade-off between false and true information-belief convergence under this\nscheme, which leads to ways of optimally deciding how truth-sensitive an\ninformation dissemination network should be.\n", "title": "A Bayesian Model for False Information Belief Impact, Optimal Design, and Fake News Containment" }
null
null
null
null
true
null
1833
null
Default
null
null
null
{ "abstract": " Despite intense interest in realizing topological phases across a variety of\nelectronic, photonic and mechanical platforms, the detailed microscopic origin\nof topological behavior often remains elusive. To bridge this conceptual gap,\nwe show how hallmarks of topological modes - boundary localization and\nchirality - emerge from Newton's laws in mechanical topological systems. We\nfirst construct a gyroscopic lattice with analytically solvable edge modes, and\nshow how the Lorentz and spring restoring forces conspire to support very\nrobust \"dangling bond\" boundary modes. The chirality and locality of these\nmodes intuitively emerges from microscopic balancing of restoring forces and\ncyclotron tendencies. Next, we introduce the highlight of this work, a very\nexperimentally realistic mechanical non-equilibrium (Floquet) Chern lattice\ndriven by AC electromagnets. Through appropriate synchronization of the AC\ndriving protocol, the Floquet lattice is \"pushed around\" by a rotating\npotential analogous to an object washed ashore by water waves. Besides hosting\n\"dangling bond\" chiral modes analogous to the gyroscopic boundary modes, our\nFloquet Chern lattice also supports peculiar half-period chiral modes with no\nstatic analog. With key parameters controlled electronically, our setup has the\nadvantage of being dynamically tunable for applications involving arbitrary\nFloquet modulations. The physical intuition gleaned from our two prototypical\ntopological systems are applicable not just to arbitrarily complicated\nmechanical systems, but also photonic and electrical topological setups.\n", "title": "Topological dynamics of gyroscopic and Floquet lattices from Newton's laws" }
null
null
null
null
true
null
1834
null
Default
null
null
null
{ "abstract": " We examine topological solitons in a minimal variational model for a chiral\nmagnet, so-called chiral skyrmions. In the regime of large background fields,\nwe prove linear stability of axisymmetric chiral skyrmions under arbitrary\nperturbations in the energy space, a long-standing open question in the physics\nliterature. Moreover, we show strict local minimality of axisymmetric chiral\nskyrmions and the nearby existence of a moving soliton solution for the\nLandau-Lifshitz-Gilbert equation driven by a small spin transfer torque.\n", "title": "Stability of axisymmetric chiral skyrmions" }
null
null
null
null
true
null
1835
null
Default
null
null
null
{ "abstract": " Plasma wake-field acceleration is one of the main technologies being\ndeveloped for future high-energy colliders. Potentially, it can create a\ncost-effective path to the highest possible energies for e+e- or\n{\\gamma}-{\\gamma} colliders and produce a profound effect on the developments\nfor high-energy physics. Acceleration in a blowout regime, where all plasma\nelectrons are swept away from the axis, is presently considered to be the\nprimary choice for beam acceleration. In this paper, we derive a universal\nefficiency-instability relation, between the power efficiency and the key\ninstability parameter of the trailing bunch for beam acceleration in the\nblowout regime. We also show that the suppression of instability in the\ntrailing bunch can be achieved through BNS damping by the introduction of a\nbeam energy variation along the bunch. Unfortunately, in the high efficiency\nregime, the required energy variation is quite high, and is not presently\ncompatible with collider-quality beams. We would like to stress that the\ndevelopment of the instability imposes a fundamental limitation on the\nacceleration efficiency, and it is unclear how it could be overcome for\nhigh-luminosity linear colliders. With minor modifications, the considered\nlimitation on the power efficiency is applicable to other types of\nacceleration.\n", "title": "Efficiency versus instability in plasma accelerators" }
null
null
null
null
true
null
1836
null
Default
null
null
null
{ "abstract": " We obtain a rigorous upper bound on the resistivity $\\rho$ of an electron\nfluid whose electronic mean free path is short compared to the scale of spatial\ninhomogeneities. When such a hydrodynamic electron fluid supports a non-thermal\ndiffusion process -- such as an imbalance mode between different bands -- we\nshow that the resistivity bound becomes $\\rho \\lesssim A \\, \\Gamma$. The\ncoefficient $A$ is independent of temperature and inhomogeneity lengthscale,\nand $\\Gamma$ is a microscopic momentum-preserving scattering rate. In this way\nwe obtain a unified and novel mechanism -- without umklapp -- for $\\rho \\sim\nT^2$ in a Fermi liquid and the crossover to $\\rho \\sim T$ in quantum critical\nregimes. This behavior is widely observed in transition metal oxides, organic\nmetals, pnictides and heavy fermion compounds and has presented a longstanding\nchallenge to transport theory. Our hydrodynamic bound allows phonon\ncontributions to diffusion constants, including thermal diffusion, to directly\naffect the electrical resistivity.\n", "title": "Resistivity bound for hydrodynamic bad metals" }
null
null
[ "Physics" ]
null
true
null
1837
null
Validated
null
null
null
{ "abstract": " This paper introduces and addresses a wide class of stochastic bandit\nproblems where the function mapping the arm to the corresponding reward\nexhibits some known structural properties. Most existing structures (e.g.\nlinear, Lipschitz, unimodal, combinatorial, dueling, ...) are covered by our\nframework. We derive an asymptotic instance-specific regret lower bound for\nthese problems, and develop OSSB, an algorithm whose regret matches this\nfundamental limit. OSSB is not based on the classical principle of \"optimism in\nthe face of uncertainty\" or on Thompson sampling, and rather aims at matching\nthe minimal exploration rates of sub-optimal arms as characterized in the\nderivation of the regret lower bound. We illustrate the efficiency of OSSB\nusing numerical experiments in the case of the linear bandit problem and show\nthat OSSB outperforms existing algorithms, including Thompson sampling.\n", "title": "Minimal Exploration in Structured Stochastic Bandits" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
1838
null
Validated
null
null
null
{ "abstract": " While modern day web applications aim to create impact at the civilization\nlevel, they have become vulnerable to adversarial activity, where the next\ncyber-attack can take any shape and can originate from anywhere. The increasing\nscale and sophistication of attacks has prompted the need for a data-driven\nsolution, with machine learning forming the core of many cybersecurity systems.\nMachine learning was not designed with security in mind, and the essential\nassumption of stationarity, requiring that the training and testing data follow\nsimilar distributions, is violated in an adversarial domain. In this paper, an\nadversary's viewpoint of a classification-based system is presented. Based on\na formal adversarial model, the Seed-Explore-Exploit framework is presented\nfor simulating the generation of data-driven and reverse engineering attacks on\nclassifiers. Experimental evaluation, on 10 real-world datasets and using the\nGoogle Cloud Prediction Platform, demonstrates the innate vulnerability of\nclassifiers and the ease with which evasion can be carried out, without any\nexplicit information about the classifier type, the training data or the\napplication domain. The proposed framework, algorithms and empirical\nevaluation serve as a white hat analysis of the vulnerabilities, and aim to\nfoster the development of secure machine learning frameworks.\n", "title": "Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains" }
null
null
null
null
true
null
1839
null
Default
null
null
null
{ "abstract": " We discuss the relative merits of optimistic and randomized approaches to\nexploration in reinforcement learning. Optimistic approaches presented in the\nliterature apply an optimistic boost to the value estimate at each state-action\npair and select actions that are greedy with respect to the resulting\noptimistic value function. Randomized approaches sample from among\nstatistically plausible value functions and select actions that are greedy with\nrespect to the random sample. Prior computational experience suggests that\nrandomized approaches can lead to far more statistically efficient learning. We\npresent two simple analytic examples that elucidate why this is the case. In\nprinciple, there should be optimistic approaches that fare well relative to\nrandomized approaches, but that would require intractable computation.\nOptimistic approaches that have been proposed in the literature sacrifice\nstatistical efficiency for the sake of computational efficiency. Randomized\napproaches, on the other hand, may enable simultaneous statistical and\ncomputational efficiency.\n", "title": "On Optimistic versus Randomized Exploration in Reinforcement Learning" }
null
null
null
null
true
null
1840
null
Default
null
null
null
{ "abstract": " Size, weight, and power constrained platforms impose constraints on\ncomputational resources that introduce unique challenges in implementing\nlocalization algorithms. We present a framework to perform fast localization on\nsuch platforms enabled by the compressive capabilities of Gaussian Mixture\nModel representations of point cloud data. Given raw structural data from a\ndepth sensor and pitch and roll estimates from an on-board attitude reference\nsystem, a multi-hypothesis particle filter localizes the vehicle by exploiting\nthe likelihood of the data originating from the mixture model. We demonstrate\nanalysis of this likelihood in the vicinity of the ground truth pose and detail\nits utilization in a particle filter-based vehicle localization strategy, and\nlater present results of real-time implementations on a desktop system and an\noff-the-shelf embedded platform that outperform localization results from\nrunning a state-of-the-art algorithm on the same environment.\n", "title": "Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations" }
null
null
null
null
true
null
1841
null
Default
null
null
null
{ "abstract": " We consider four-dimensional gravity coupled to a non-linear sigma model\nwhose scalar manifold is a non-compact geometrically finite surface $\\Sigma$\nendowed with a Riemannian metric of constant negative curvature. When the\nspace-time is an FLRW universe, such theories produce a very wide\ngeneralization of two-field $\\alpha$-attractor models, being parameterized by a\npositive constant $\\alpha$, by the choice of a finitely-generated surface group\n$\\Gamma\\subset \\mathrm{PSL}(2,\\mathbb{R})$ (which is isomorphic with the\nfundamental group of $\\Sigma$) and by the choice of a scalar potential defined\non $\\Sigma$. The traditional two-field $\\alpha$-attractor models arise when\n$\\Gamma$ is the trivial group, in which case $\\Sigma$ is the Poincaré disk.\nWe give a general prescription for the study of such models through\nuniformization in the so-called \"non-elementary\" case and discuss some of their\nqualitative features in the gradient flow approximation, which we relate to\nMorse theory. We also discuss some aspects of the SRST approximation in these\nmodels, showing that it is generally not well-suited for studying dynamics near\ncusp ends. When $\\Sigma$ is non-compact and the scalar potential is\n\"well-behaved\" at the ends, we show that, in the {\\em naive} local one-field\ntruncation, our generalized models have the same universal behavior as ordinary\none-field $\\alpha$-attractors if inflation happens near any of the ends of\n$\\Sigma$ where the extended potential has a local maximum, for trajectories\nwhich are well approximated by non-canonically parameterized geodesics near the\nends; we also discuss spiral trajectories near the ends.\n", "title": "Generalized two-field $α$-attractor models from geometrically finite hyperbolic surfaces" }
null
null
null
null
true
null
1842
null
Default
null
null
null
{ "abstract": " We show the hardness of the geodetic hull number for chordal graphs.\n", "title": "The Geodetic Hull Number is Hard for Chordal Graphs" }
null
null
null
null
true
null
1843
null
Default
null
null
null
{ "abstract": " Using etale cohomology, we define a birational invariant for varieties in\ncharacteristic $p$ that serves as an obstruction to uniruledness - a variant on\nan obstruction to unirationality due to Ekedahl. We apply this to\n$\\overline{M}_{1,n}$ and show that $\\overline{M}_{1,n}$ is not uniruled in\ncharacteristic $p$ as long as $n \\geq p \\geq 11$. To do this, we use Deligne's\ndescription of the etale cohomology of $\\overline{M}_{1,n}$ and apply the\ntheory of congruences between modular forms.\n", "title": "$\\overline{M}_{1,n}$ is usually not uniruled in characteristic $p$" }
null
null
null
null
true
null
1844
null
Default
null
null
null
{ "abstract": " We propose novel semi-supervised and active learning algorithms for the\nproblem of community detection on networks. The algorithms are based on\noptimizing the likelihood function of the community assignments given a graph\nand an estimate of the statistical model that generated it. The optimization\nframework is inspired by prior work on the unsupervised community detection\nproblem in Stochastic Block Models (SBM) using Semi-Definite Programming (SDP).\nIn this paper we provide the next steps in the evolution of learning\ncommunities in this context, which involve a constrained semi-definite\nprogramming algorithm and a newly presented active learning algorithm. The\nactive learner intelligently queries nodes that are expected to maximize the\nchange in the model likelihood. Experimental results show that this active\nlearning algorithm outperforms the random-selection semi-supervised version of\nthe same algorithm as well as other state-of-the-art active learning\nalgorithms. Our algorithms' significantly improved performance is demonstrated\non both real-world and SBM-generated networks, even when the SBM has a\nsignal-to-noise ratio (SNR) below the known unsupervised detectability threshold.\n", "title": "Active Community Detection: A Maximum Likelihood Approach" }
null
null
null
null
true
null
1845
null
Default
null
null
null
{ "abstract": " We consider the problem of recovering a function input of a differential\nequation formulated on an unknown domain $M$. We assume to have access to a\ndiscrete domain $M_n=\\{x_1, \\dots, x_n\\} \\subset M$, and to noisy measurements\nof the output solution at $p\\le n$ of those points. We introduce a graph-based\nBayesian inverse problem, and show that the graph-posterior measures over\nfunctions in $M_n$ converge, in the large $n$ limit, to a posterior over\nfunctions in $M$ that solves a Bayesian inverse problem with known domain.\nThe proofs rely on the variational formulation of the Bayesian update, and on\na new topology for the study of convergence of measures over functions on point\nclouds to a measure over functions on the continuum. Our framework, techniques,\nand results may serve to lay the foundations of robust uncertainty\nquantification of graph-based tasks in machine learning. The ideas are\npresented in the concrete setting of recovering the initial condition of the\nheat equation on an unknown manifold.\n", "title": "Continuum Limit of Posteriors in Graph Bayesian Inverse Problems" }
null
null
null
null
true
null
1846
null
Default
null
null
null
{ "abstract": " Automatic conflict detection has grown in relevance with the advent of\nbody-worn technology, but existing metrics such as turn-taking and overlap are\npoor indicators of conflict in police-public interactions. Moreover, standard\ntechniques to compute them fall short when applied to such diversified and\nnoisy contexts. We develop a pipeline catered to this task combining adaptive\nnoise removal, non-speech filtering and new measures of conflict based on the\nrepetition and intensity of phrases in speech. We demonstrate the effectiveness\nof our approach on body-worn audio data collected by the Los Angeles Police\nDepartment.\n", "title": "Automatic Conflict Detection in Police Body-Worn Audio" }
null
null
null
null
true
null
1847
null
Default
null
null
null
{ "abstract": " Assuming a conjecture about factorization homology with adjoints, we prove\nthe cobordism hypothesis, after Baez-Dolan, Costello, Hopkins-Lurie, and Lurie.\n", "title": "The cobordism hypothesis" }
null
null
null
null
true
null
1848
null
Default
null
null
null
{ "abstract": " We discover a population of short-period, Neptune-size planets sharing key\nsimilarities with hot Jupiters: both populations are preferentially hosted by\nmetal-rich stars, and both are preferentially found in Kepler systems with\nsingle transiting planets. We use accurate LAMOST DR4 stellar parameters for\nmain-sequence stars to study the distributions of short-period 1d < P < 10d\nKepler planets as a function of host star metallicity. The radius distribution\nof planets around metal-rich stars is more \"puffed up\" as compared to that\naround metal-poor hosts. In two period-radius regimes, planets preferentially\nreside around metal-rich stars, while there are hardly any planets around\nmetal-poor stars. One is the well-known hot Jupiters, and the other is a\npopulation of Neptune-size planets (2 R_Earth <~ R_p <~ 6 R_Earth), dubbed as\n\"Hoptunes\". Also like hot Jupiters, Hoptunes occur more frequently in systems\nwith single transiting planets though the fraction of Hoptunes occurring in\nmultiples is larger than that of hot Jupiters. About 1% of solar-type stars\nhost \"Hoptunes\", and the frequencies of Hoptunes and hot Jupiters increase with\nconsistent trends as a function of [Fe/H]. In the planet radius distribution,\nhot Jupiters and Hoptunes are separated by a \"valley\" at approximately Saturn\nsize (in the range of 6 R_Earth <~ R_p <~ 10 R_Earth), and this \"hot-Saturn\nvalley\" represents approximately an order-of-magnitude decrease in planet\nfrequency compared to hot Jupiters and Hoptunes. The empirical \"kinship\"\nbetween Hoptunes and hot Jupiters suggests likely common processes (migration\nand/or formation) responsible for their existence.\n", "title": "LAMOST telescope reveals that Neptunian cousins of hot Jupiters are mostly single offspring of stars that are rich in heavy elements" }
null
null
null
null
true
null
1849
null
Default
null
null
null
{ "abstract": " Describing dimension reduction (DR) techniques by means of probabilistic\nmodels has recently been given special attention. Probabilistic models, in\naddition to offering a better interpretability of DR methods, provide a framework\nfor further extensions of such algorithms. One of the new approaches to\nprobabilistic DR methods is preserving the internal structure of data, meaning\nthat the data need not first be converted from matrix or tensor format to\nvector format in the process of dimensionality reduction. In this paper, a\nlatent variable model for matrix-variate data for canonical correlation\nanalysis (CCA) is proposed. Since in general there is no analytical maximum\nlikelihood solution for this model, we present two approaches for learning the\nparameters. The proposed methods are evaluated using synthetic data in terms of\nconvergence and quality of mappings. Also, a real data set is employed to assess\nthe proposed methods against several probabilistic and non-probabilistic CCA-based\napproaches. The results confirm the superiority of the proposed methods with\nrespect to the competing algorithms. Moreover, this model can be considered as\na framework for further extensions.\n", "title": "A Latent Variable Model for Two-Dimensional Canonical Correlation Analysis and its Variational Inference" }
null
null
null
null
true
null
1850
null
Default
null
null
null
{ "abstract": " Many practical problems are characterized by a preference relation over\nadmissible solutions, where preferred solutions are minimal in some sense. For\nexample, a preferred diagnosis usually comprises a minimal set of reasons that\nis sufficient to cause the observed anomaly. Alternatively, a minimal\ncorrection subset comprises a minimal set of reasons whose deletion is\nsufficient to eliminate the observed anomaly. Circumscription formalizes such\npreference relations by associating propositional theories with minimal models.\nThe resulting enumeration problem is addressed here by means of a new algorithm\ntaking advantage of unsatisfiable core analysis. Empirical evidence of the\nefficiency of the algorithm is given by comparing the performance of the\nresulting solver, CIRCUMSCRIPTINO, with HCLASP, CAMUS MCS, LBX and MCSLS on the\nenumeration of minimal models for problems originating from practical\napplications.\nThis paper is under consideration for acceptance in TPLP.\n", "title": "Model enumeration in propositional circumscription via unsatisfiable core analysis" }
null
null
null
null
true
null
1851
null
Default
null
null
null
{ "abstract": " Summarization of long sequences into a concise statement is a core problem in\nnatural language processing, requiring non-trivial understanding of the input.\nBased on the promising results of graph neural networks on highly structured\ndata, we develop a framework to extend existing sequence encoders with a graph\ncomponent that can reason about long-distance relationships in weakly\nstructured data such as text. In an extensive evaluation, we show that the\nresulting hybrid sequence-graph models outperform both pure sequence models as\nwell as pure graph models on a range of summarization tasks.\n", "title": "Structured Neural Summarization" }
null
null
null
null
true
null
1852
null
Default
null
null
null
{ "abstract": " A first order theory T is said to be \"tight\" if for any two deductively\nclosed extensions U and V of T (both of which are formulated in the language of\nT), U and V are bi-interpretable iff U = V. By a theorem of Visser, PA (Peano\nArithmetic) is tight. Here we show that Z_2 (second order arithmetic), ZF\n(Zermelo-Fraenkel set theory), and KM (Kelley-Morse theory of classes) are also\ntight theories.\n", "title": "Variations on a Visserian Theme" }
null
null
null
null
true
null
1853
null
Default
null
null
null
{ "abstract": " We investigate the accuracy and robustness of one of the most common methods\nused in glaciology for the discretization of the $\\mathfrak{p}$-Stokes\nequations: equal order finite elements with Galerkin Least-Squares (GLS)\nstabilization. Furthermore we compare the results to other stabilized methods.\nWe find that the vertical velocity component is more sensitive to the choice of\nGLS stabilization parameter than horizontal velocity. Additionally, the\naccuracy of the vertical velocity component is especially important since\nerrors in this component can cause ice surface instabilities and propagate into\nfuture ice volume predictions. If the element cell size is set to the minimum\nedge length and the stabilization parameter is allowed to vary non-linearly\nwith viscosity, the GLS stabilization parameter found in literature is a good\nchoice on simple domains. However, near ice margins the standard parameter\nchoice may result in significant oscillations in the vertical component of the\nsurface velocity. For these cases, other stabilization techniques, such as the\ninterior penalty method, result in better accuracy and are less sensitive to\nthe choice of the stabilization parameter. During this work we also discovered\nthat the manufactured solutions often used to evaluate errors in glaciology are\nnot reliable due to high artificial surface forces at singularities. We perform\nour numerical experiments in both FEniCS and Elmer/Ice.\n", "title": "Galerkin Least-Squares Stabilization in Ice Sheet Modeling - Accuracy, Robustness, and Comparison to other Techniques" }
null
null
null
null
true
null
1854
null
Default
null
null
null
{ "abstract": " During software maintenance, developers usually deal with a significant\nnumber of software change requests. As a part of this, they often formulate an\ninitial query from the request texts, and then attempt to map the concepts\ndiscussed in the request to relevant source code locations in the software\nsystem (a.k.a., concept location). Unfortunately, studies suggest that they\noften perform poorly in choosing the right search terms for a change task. In\nthis paper, we propose a novel technique --ACER-- that takes an initial query,\nidentifies appropriate search terms from the source code using a novel term\nweight --CodeRank, and then suggests effective reformulation to the initial\nquery by exploiting the source document structures, query quality analysis and\nmachine learning. Experiments with 1,675 baseline queries from eight subject\nsystems report that our technique can improve 71% of the baseline queries which\nis highly promising. Comparison with five closely related existing techniques\nin query reformulation not only validates our empirical findings but also\ndemonstrates the superiority of our technique.\n", "title": "Improved Query Reformulation for Concept Location using CodeRank and Document Structures" }
null
null
null
null
true
null
1855
null
Default
null
null
null
{ "abstract": " The use of computers in statistical physics is common because the sheer\nnumber of equations that describe the behavior of an entire system particle by\nparticle often makes it impossible to solve them exactly. Monte Carlo methods\nform a particularly important class of numerical methods for solving problems\nin statistical physics. Although these methods are simple in principle, their\nproper use requires a good command of statistical mechanics, as well as\nconsiderable computational resources. The aim of this paper is to demonstrate\nhow the usage of widely accessible graphics cards on personal computers can\nelevate the computing power in Monte Carlo simulations by orders of magnitude,\nthus allowing live classroom demonstration of phenomena that would otherwise be\nout of reach. As an example, we use the public goods game on a square lattice\nwhere two strategies compete for common resources in a social dilemma\nsituation. We show that the second-order phase transition to an absorbing phase\nin the system belongs to the directed percolation universality class, and we\ncompare the time needed to arrive at this result by means of the main processor\nand by means of a suitable graphics card. Parallel computing on graphics\nprocessing units has been developed actively during the last decade, to the\npoint where today the learning curve for entry is anything but steep for those\nfamiliar with programming. The subject is thus ripe for inclusion in graduate\nand advanced undergraduate curricula, and we hope that this paper will\nfacilitate this process in the realm of physics education. To that end, we\nprovide a documented source code for an easy reproduction of presented results\nand for further development of Monte Carlo simulations of similar systems.\n", "title": "High-performance parallel computing in the classroom using the public goods game as an example" }
null
null
null
null
true
null
1856
null
Default
null
null
null
{ "abstract": " We consider a helical system of fermions with a generic spin (or pseudospin)\norbit coupling. Using the equation of motion approach for the single-particle\ndistribution functions, and a mean-field decoupling of the higher order\ndistribution functions, we find a closed form for the charge and spin density\nfluctuations in terms of the charge and spin density linear response functions.\nApproximating the nonlocal exchange term with a Hubbard-like local-field\nfactor, we obtain coupled spin and charge density response matrix beyond the\nrandom phase approximation, whose poles give the dispersion of four collective\nspin-charge modes. We apply our generic technique to the well-explored\ntwo-dimensional system with Rashba spin-orbit coupling and illustrate how it\ngives results for the collective modes, Drude weight, and spin-Hall\nconductivity which are in very good agreement with the results obtained from\nother more sophisticated approaches.\n", "title": "Coupled spin-charge dynamics in helical Fermi liquids beyond the random phase approximation" }
null
null
null
null
true
null
1857
null
Default
null
null
null
{ "abstract": " We study correlations in fermionic lattice systems with long-range\ninteractions in thermal equilibrium. We prove a bound on the correlation decay\nbetween anti-commuting operators and generalize a long-range Lieb-Robinson type\nbound. Our results show that in these systems of spatial dimension $D$ with,\nnot necessarily translation invariant, two-site interactions decaying\nalgebraically with the distance with an exponent $\\alpha \\geq 2\\,D$,\ncorrelations between such operators decay at least algebraically with an\nexponent arbitrarily close to $\\alpha$ at any non-zero temperature. Our bound\nis asymptotically tight, which we demonstrate by a high temperature expansion\nand by numerically analyzing density-density correlations in the 1D quadratic\n(free, exactly solvable) Kitaev chain with long-range pairing.\n", "title": "Correlation decay in fermionic lattice systems with power-law interactions at non-zero temperature" }
null
null
null
null
true
null
1858
null
Default
null
null
null
{ "abstract": " We present an integrated microsimulation framework to estimate the pedestrian\nmovement over time and space with limited data on directional counts. Using the\nactivity-based approach, simulation can compute the overall demand and\ntrajectory of each agent, which are in accordance with the available partial\nobservations and are in response to the initial and evolving supply conditions\nand schedules. This simulation contains a chain of processes including:\nactivities generation, decision point choices, and assignment. They are\nconsidered in an iteratively updating loop so that the simulation can\ndynamically correct its estimates of demand. A Markov chain is constructed for\nthis loop. These considerations transform the problem into a convergence\nproblem. A Metropolis-Hastings algorithm is then adapted to identify the\noptimal solution. This framework can be used to fill gaps in the data or to\nmodel the reactions of demand to exogenous changes in the scenario. Finally, we\npresent a case study on Montreal Central Station, on which we tested the\ndeveloped framework and calibrated the models. We then applied it to a possible\nfuture scenario for the same station.\n", "title": "Integrated Microsimulation Framework for Dynamic Pedestrian Movement Estimation in Mobility Hub" }
null
null
null
null
true
null
1859
null
Default
null
null
null
{ "abstract": " Stochastic optimization naturally arises in machine learning. Efficient\nalgorithms with provable guarantees, however, are still largely missing, when\nthe objective function is nonconvex and the data points are dependent. This\npaper studies this fundamental challenge through a streaming PCA problem for\nstationary time series data. Specifically, our goal is to estimate the\nprincipal component of time series data with respect to the covariance matrix\nof the stationary distribution. Computationally, we propose a variant of Oja's\nalgorithm combined with downsampling to control the bias of the stochastic\ngradient caused by the data dependency. Theoretically, we quantify the\nuncertainty of our proposed stochastic algorithm based on diffusion\napproximations. This allows us to prove the asymptotic rate of convergence and\nfurther implies near optimal asymptotic sample complexity. Numerical\nexperiments are provided to support our analysis.\n", "title": "Dimensionality Reduction for Stationary Time Series via Stochastic Nonconvex Optimization" }
null
null
null
null
true
null
1860
null
Default
null
null
null
{ "abstract": " We consider a variation on the problem of prediction with expert advice,\nwhere new forecasters that were unknown until then may appear at each round. As\noften in prediction with expert advice, designing an algorithm that achieves\nnear-optimal regret guarantees is straightforward, using aggregation of\nexperts. However, when the comparison class is sufficiently rich, for instance\nwhen the best expert and the set of experts itself changes over time, such\nstrategies naively require maintaining a prohibitive number of weights\n(typically exponential with the time horizon). By contrast, designing\nstrategies that both achieve a near-optimal regret and maintain a reasonable\nnumber of weights is highly non-trivial. We consider three increasingly\nchallenging objectives (simple regret, shifting regret and sparse shifting\nregret) that extend existing notions defined for a fixed expert ensemble; in\neach case, we design strategies that achieve tight regret bounds, adaptive to\nthe parameters of the comparison class, while being computationally\ninexpensive. Moreover, our algorithms are anytime, agnostic to the number of\nincoming experts and completely parameter-free. Such remarkable results are\nmade possible thanks to two simple but highly effective recipes: first the\n\"abstention trick\" that comes from the specialist framework and makes it possible to\nhandle the least challenging notions of regret, but is limited when addressing\nmore sophisticated objectives. Second, the \"muting trick\" that we introduce to\ngive more flexibility. We show how to combine these two tricks in order to\nhandle the most challenging class of comparison strategies.\n", "title": "Efficient tracking of a growing number of experts" }
null
null
null
null
true
null
1861
null
Default
null
null
null
{ "abstract": " Subsequence clustering of multivariate time series is a useful tool for\ndiscovering repeated patterns in temporal data. Once these patterns have been\ndiscovered, seemingly complicated datasets can be interpreted as a temporal\nsequence of only a small number of states, or clusters. For example, raw sensor\ndata from a fitness-tracking application can be expressed as a timeline of a\nselect few actions (i.e., walking, sitting, running). However, discovering\nthese patterns is challenging because it requires simultaneous segmentation and\nclustering of the time series. Furthermore, interpreting the resulting clusters\nis difficult, especially when the data is high-dimensional. Here we propose a\nnew method of model-based clustering, which we call Toeplitz Inverse\nCovariance-based Clustering (TICC). Each cluster in the TICC method is defined\nby a correlation network, or Markov random field (MRF), characterizing the\ninterdependencies between different observations in a typical subsequence of\nthat cluster. Based on this graphical representation, TICC simultaneously\nsegments and clusters the time series data. We solve the TICC problem through\nalternating minimization, using a variation of the expectation maximization\n(EM) algorithm. We derive closed-form solutions to efficiently solve the two\nresulting subproblems in a scalable way, through dynamic programming and the\nalternating direction method of multipliers (ADMM), respectively. We validate\nour approach by comparing TICC to several state-of-the-art baselines in a\nseries of synthetic experiments, and we then demonstrate on an automobile\nsensor dataset how TICC can be used to learn interpretable clusters in\nreal-world scenarios.\n", "title": "Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
1862
null
Validated
null
null
null
{ "abstract": " In this paper we consider a nonlocal energy $I_\\alpha$ whose kernel is\nobtained by adding to the Coulomb potential an anisotropic term weighted by a\nparameter $\\alpha\\in \\mathbb{R}$. The case $\\alpha=0$ corresponds to purely logarithmic\ninteractions, minimised by the celebrated circle law for a quadratic\nconfinement; $\\alpha=1$ corresponds to the energy of interacting dislocations,\nminimised by the semi-circle law. We show that for $\\alpha\\in (0,1)$ the\nminimiser can be computed explicitly and is the normalised characteristic\nfunction of the domain enclosed by an \\emph{ellipse}. To prove our result we\nborrow techniques from fluid dynamics, in particular those related to\nKirchhoff's celebrated result that domains enclosed by ellipses are rotating\nvortex patches, called \\emph{Kirchhoff ellipses}. Therefore we show a\nsurprising connection between vortices and dislocations.\n", "title": "The ellipse law: Kirchhoff meets dislocations" }
null
null
null
null
true
null
1863
null
Default
null
null
null
{ "abstract": " Recently, the advancement in industrial automation and high-speed printing\nhas raised numerous challenges related to the printing quality inspection of\nfinal products. This paper proposes a machine vision based technique to assess\nthe printing quality of text on industrial objects. The assessment is based on\nthree quality defects: text misalignment, varying printing shades, and\nmisprinted text. The proposed scheme performs the quality inspection through a\nstochastic assessment technique based on the second-order statistics of\nprinting. First: the text-containing area on the printed product is identified\nthrough image processing techniques. Second: the alignment testing of the\nidentified text-containing area is performed. Third: optical character\nrecognition is performed to divide the text into different small boxes and only\nthe intensity value of each text-containing box is taken as a random variable\nand second-order statistics are estimated to determine the varying printing\ndefects in the text under one, two and three sigma thresholds. Fourth:\nK-Nearest Neighbors based supervised machine learning is performed to provide\nthe stochastic process for misprinted text detection. Finally, the technique is\ndeployed on an industrial image for the printing quality assessment with\nvarying values of n and m. The results have shown that the proposed SAML-QC\ntechnique can perform real-time automated inspection for industrial printing.\n", "title": "SAML-QC: a Stochastic Assessment and Machine Learning based QC technique for Industrial Printing" }
null
null
null
null
true
null
1864
null
Default
null
null
null
{ "abstract": " We present an approach to testing the gravitational redshift effect using the\nRadioAstron satellite. The experiment is based on a modification of the Gravity\nProbe A scheme of nonrelativistic Doppler compensation and benefits from the\nhighly eccentric orbit and ultra-stable atomic hydrogen maser frequency\nstandard of the RadioAstron satellite. Using the presented techniques we expect\nto reach an accuracy of the gravitational redshift test of order $10^{-5}$, a\nmagnitude better than that of Gravity Probe A. Data processing is ongoing, our\npreliminary results agree with the validity of the Einstein Equivalence\nPrinciple.\n", "title": "Probing the gravitational redshift with an Earth-orbiting satellite" }
null
null
null
null
true
null
1865
null
Default
null
null
null
{ "abstract": " We present a novel approach to fast on-the-fly low order finite element\nassembly for scalar elliptic partial differential equations of Darcy type with\nvariable coefficients optimized for matrix-free implementations. Our approach\nintroduces a new operator that is obtained by appropriately scaling the\nreference stiffness matrix from the constant coefficient case. Assuming\nsufficient regularity, an a priori analysis shows that solutions obtained by\nthis approach are unique and have asymptotically optimal order convergence in\nthe $H^1$- and the $L^2$-norm on hierarchical hybrid grids. For the\npre-asymptotic regime, we present a local modification that guarantees uniform\nellipticity of the operator. Cost considerations show that our novel approach\nrequires roughly one third of the floating-point operations compared to a\nclassical finite element assembly scheme employing nodal integration. Our\ntheoretical considerations are illustrated by numerical tests that confirm the\nexpectations with respect to accuracy and run-time. A large scale application\nwith more than a hundred billion ($1.6\\cdot10^{11}$) degrees of freedom\nexecuted on 14,310 compute cores demonstrates the efficiency of the new scaling\napproach.\n", "title": "A stencil scaling approach for accelerating matrix-free finite element implementations" }
null
null
[ "Computer Science" ]
null
true
null
1866
null
Validated
null
null
null
{ "abstract": " We consider the potential for positioning with a system where antenna arrays\nare deployed as a large intelligent surface (LIS). We derive\nFisher-informations and Cramér-Rao lower bounds (CRLB) in closed-form for\nterminals along the central perpendicular line (CPL) of the LIS for all three\nCartesian dimensions. For terminals at positions other than the CPL,\nclosed-form expressions for the Fisher-informations and CRLBs seem out of\nreach, and we alternatively provide approximations (in closed-form) which are\nshown to be very accurate. We also show that under mild conditions, the CRLBs\nin general decrease quadratically in the surface-area for both the $x$ and $y$\ndimensions. For the $z$-dimension (distance from the LIS), the CRLB decreases\nlinearly in the surface-area when terminals are along the CPL. However, when\nterminals move away from the CPL, the CRLB is dramatically increased and then\nalso decreases quadratically in the surface-area. We also extensively discuss\nthe impact of different deployments (centralized and distributed) of the LIS.\n", "title": "Cramér-Rao Lower Bounds for Positioning with Large Intelligent Surfaces" }
null
null
null
null
true
null
1867
null
Default
null
null
null
{ "abstract": " In this expository work we discuss the asymptotic behaviour of the solutions\nof the classical heat equation posed in the whole Euclidean space.\nAfter an introductory review of the main facts on the existence and\nproperties of solutions, we proceed with the proofs of convergence to the\nGaussian fundamental solution, a result that holds for all integrable\nsolutions, and represents in the PDE setting the Central Limit Theorem of\nprobability. We present several methods of proof: first, the scaling method.\nThen several versions of the representation method. This is followed by the\nfunctional analysis approach that leads to the famous related equations,\nFokker-Planck and Ornstein-Uhlenbeck. The analysis of this connection is also\ngiven in rather complete form here. Finally, we present the Boltzmann entropy\nmethod, coming from kinetic equations.\nThe different methods are interesting because of the possible extension to\nprove the asymptotic behaviour or stabilization analysis for more general\nequations, linear or nonlinear. It all depends a lot on the particular\nfeatures, and only one or some of the methods work in each case. Other settings\nof the Heat Equation are briefly discussed in Section 9 and a longer mention of\nresults for different equations is done in Section 10.\n", "title": "Asymptotic behaviour methods for the Heat Equation. Convergence to the Gaussian" }
null
null
null
null
true
null
1868
null
Default
null
null
null
{ "abstract": " Artificial Spin Ice (ASI), consisting of a two dimensional array of nanoscale\nmagnetic elements, provides a fascinating opportunity to observe the physics of\nout of equilibrium systems. Initial studies concentrated on the static, frozen\nstate, whilst more recent studies have accessed the out-of-equilibrium dynamic,\nfluctuating state. This opens up exciting possibilities such as the observation\nof systems exploring their energy landscape through monopole quasiparticle\ncreation, potentially leading to ASI magnetricity, and to directly observe\nunconventional phase transitions. In this work we have measured and analysed\nthe magnetic relaxation of thermally active ASI systems by means of SQUID\nmagnetometry. We have investigated the effect of the interaction strength on\nthe magnetization dynamics at different temperatures in the range where the\nnanomagnets are thermally active and have observed that they follow an\nArrhenius-type Néel-Brown behaviour. An unexpected negative correlation of\nthe average blocking temperature with the interaction strength is also\nobserved, which is supported by Monte Carlo simulations. The magnetization\nrelaxation measurements show faster relaxation for more strongly coupled\nnanoelements with similar dimensions. The analysis of the stretching exponents\nobtained from the measurements suggest 1-D chain-like magnetization dynamics.\nThis indicates that the nature of the interactions between nanoelements lowers\nthe dimensionality of the ASI from 2-D to 1-D. Finally, we present a way to\nquantify the effective interaction energy of a square ASI system, and compare\nit to the interaction energy calculated from a simple dipole model and also to\nthe magnetostatic energy computed with micromagnetic simulations.\n", "title": "Magnetization dynamics of weakly interacting sub-100 nm square artificial spin ices" }
null
null
[ "Physics" ]
null
true
null
1869
null
Validated
null
null
null
{ "abstract": " Since the events of the Arab Spring, there has been increased interest in\nusing social media to anticipate social unrest. While efforts have been made\ntoward automated unrest prediction, we focus on filtering the vast volume of\ntweets to identify tweets relevant to unrest, which can be provided to\ndownstream users for further analysis. We train a supervised classifier that is\nable to label Arabic language tweets as relevant to unrest with high\nreliability. We examine the relationship between training data size and\nperformance and investigate ways to optimize the model building process while\nminimizing cost. We also explore how confidence thresholds can be set to\nachieve desired levels of performance.\n", "title": "Filtering Tweets for Social Unrest" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
1870
null
Validated
null
null
null
{ "abstract": " We initiate the algorithmic study of the following \"structured augmentation\"\nquestion: is it possible to increase the connectivity of a given graph G by\nsuperposing it with another given graph H? More precisely, graph F is the\nsuperposition of G and H with respect to injective mapping \\phi: V(H)->V(G) if\nevery edge uv of F is either an edge of G, or \\phi^{-1}(u)\\phi^{-1}(v) is an\nedge of H. We consider the following optimization problem. Given graphs G,H,\nand a weight function \\omega assigning non-negative weights to pairs of\nvertices of V(G), the task is to find \\phi of minimum weight\n\\omega(\\phi)=\\sum_{xy\\in E(H)}\\omega(\\phi(x)\\phi(y)) such that the edge\nconnectivity of the superposition F of G and H with respect to \\phi is higher\nthan the edge connectivity of G. Our main result is the following \"dichotomy\"\ncomplexity classification. We say that a class of graphs C has bounded\nvertex-cover number, if there is a constant t depending on C only such that the\nvertex-cover number of every graph from C does not exceed t. We show that for\nevery class of graphs C with bounded vertex-cover number, the problems of\nsuperposing into a connected graph F and to 2-edge connected graph F, are\nsolvable in polynomial time when H\\in C. On the other hand, for any hereditary\nclass C with unbounded vertex-cover number, both problems are NP-hard when H\\in\nC. For the unweighted variants of structured augmentation problems, i.e. the\nproblems where the task is to identify whether there is a superposition of\ngraphs of required connectivity, we provide necessary and sufficient\ncombinatorial conditions on the existence of such superpositions. These\nconditions imply polynomial time algorithms solving the unweighted variants of\nthe problems.\n", "title": "Structured Connectivity Augmentation" }
null
null
null
null
true
null
1871
null
Default
null
null
null
{ "abstract": " We derive a semi-analytic formula for the transition probability of\nthree-dimensional Brownian motion in the positive octant with absorption at the\nboundaries. Separation of variables in spherical coordinates leads to an\neigenvalue problem for the resulting boundary value problem in the two angular\ncomponents. The main theoretical result is a solution to the original problem\nexpressed as an expansion into special functions and an eigenvalue which has to\nbe chosen to allow a matching of the boundary condition. We discuss and test\nseveral computational methods to solve a finite-dimensional approximation to\nthis nonlinear eigenvalue problem. Finally, we apply our results to the\ncomputation of default probabilities and credit valuation adjustments in a\nstructural credit model with mutual liabilities.\n", "title": "Transition probability of Brownian motion in the octant and its application to default modeling" }
null
null
null
null
true
null
1872
null
Default
null
null
null
{ "abstract": " Recurrent Neural Networks (RNNs) are used in state-of-the-art models in\ndomains such as speech recognition, machine translation, and language\nmodelling. Sparsity is a technique to reduce compute and memory requirements of\ndeep learning models. Sparse RNNs are easier to deploy on devices and high-end\nserver processors. Even though sparse operations need less compute and memory\nrelative to their dense counterparts, the speed-up observed by using sparse\noperations is less than expected on different hardware platforms. In order to\naddress this issue, we investigate two different approaches to induce block\nsparsity in RNNs: pruning blocks of weights in a layer and using group lasso\nregularization to create blocks of weights with zeros. Using these techniques,\nwe demonstrate that we can create block-sparse RNNs with sparsity ranging from\n80% to 90% with small loss in accuracy. This allows us to reduce the model size\nby roughly 10x. Additionally, we can prune a larger dense network to recover\nthis loss in accuracy while maintaining high block sparsity and reducing the\noverall parameter count. Our technique works with a variety of block sizes up\nto 32x32. Block-sparse RNNs eliminate overheads related to data storage and\nirregular memory accesses while increasing hardware efficiency compared to\nunstructured sparsity.\n", "title": "Block-Sparse Recurrent Neural Networks" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
1873
null
Validated
null
null
null
{ "abstract": " With any (not necessarily proper) edge $k$-colouring\n$\\gamma:E(G)\\longrightarrow\\{1,\\dots,k\\}$ of a graph $G$, one can associate a\nvertex colouring $\\sigma_{\\gamma}$ given by $\\sigma_{\\gamma}(v)=\\sum_{e\\ni v}\\gamma(e)$.\nA neighbour-sum-distinguishing edge $k$-colouring is an edge\ncolouring whose associated vertex colouring is proper. The\nneighbour-sum-distinguishing index of a graph $G$ is then the smallest $k$ for\nwhich $G$ admits a neighbour-sum-distinguishing edge $k$-colouring. These notions\nnaturally extend to total colourings of graphs that assign colours to both\nvertices and edges. We study in this paper equitable\nneighbour-sum-distinguishing edge colourings and total colourings, that is\ncolourings $\\gamma$ for which the number of elements in any two colour classes\nof $\\gamma$ differ by at most one. We determine the equitable\nneighbour-sum-distinguishing index of complete graphs, complete bipartite graphs\nand forests, and the equitable neighbour-sum-distinguishing total chromatic\nnumber of complete graphs and bipartite graphs.\n", "title": "Equitable neighbour-sum-distinguishing edge and total colourings" }
null
null
null
null
true
null
1874
null
Default
null
null
null
{ "abstract": " The celebrated Nadaraya-Watson kernel estimator is among the most studied\nmethod for nonparametric regression. A classical result is that its rate of\nconvergence depends on the number of covariates and deteriorates quickly as the\ndimension grows, which underscores the \"curse of dimensionality\" and has\nlimited its use in high dimensional settings. In this article, we show that\nwhen the true regression function is single or multi-index, the effects of the\ncurse of dimensionality may be mitigated for the Nadaraya-Watson kernel\nestimator. Specifically, we prove that with $K$-fold cross-validation, the\nNadaraya-Watson kernel estimator indexed by a positive semidefinite bandwidth\nmatrix has an oracle property that its rate of convergence depends on the\nnumber of indices of the regression function rather than the number of\ncovariates. Intuitively, this oracle property is a consequence of allowing the\nbandwidths to diverge to infinity as opposed to restricting them all to\nconverge to zero at certain rates as done in previous theoretical studies. Our\nresult provides a theoretical perspective for the use of kernel estimation in\nhigh dimensional nonparametric regression and other applications such as metric\nlearning when a low rank structure is anticipated. Numerical illustrations are\ngiven through simulations and real data examples.\n", "title": "An Oracle Property of The Nadaraya-Watson Kernel Estimator for High Dimensional Nonparametric Regression" }
null
null
null
null
true
null
1875
null
Default
null
null
null
{ "abstract": " We consider the problem of bandit optimization, inspired by stochastic\noptimization and online learning problems with bandit feedback. In this\nproblem, the objective is to minimize a global loss function of all the\nactions, not necessarily a cumulative loss. This framework allows us to study a\nvery general class of problems, with applications in statistics, machine\nlearning, and other fields. To solve this problem, we analyze the\nUpper-Confidence Frank-Wolfe algorithm, inspired by techniques for bandits and\nconvex optimization. We give theoretical guarantees for the performance of this\nalgorithm over various classes of functions, and discuss the optimality of\nthese results.\n", "title": "Fast Rates for Bandit Optimization with Upper-Confidence Frank-Wolfe" }
null
null
null
null
true
null
1876
null
Default
null
null
null
{ "abstract": " We theoretically study a scheme to develop an atomic based MW interferometry\nusing the Rydberg states in Rb. Unlike the traditional MW interferometry, this\nscheme is not based upon the electrical circuits, hence the sensitivity of the\nphase and the amplitude/strength of the MW field is not limited by the Nyquist\nthermal noise. Further this system has great advantage due to its very high\nbandwidth, ranging from radio frequency (RF) and microwave (MW) to the terahertz\nregime. In addition, this is \\textbf{orders of magnitude} more sensitive to\nfield strength as compared to the prior demonstrations on the MW electrometry\nusing the Rydberg atomic states. However previously studied atomic systems are\nonly sensitive to the field strength but not to the phase and hence this scheme\nprovides a great opportunity to characterize the MW completely including the\npropagation direction and the wavefront. This study opens up a new dimension in\nradar technology such as in synthetic aperture radar interferometry. The MW\ninterferometry is based upon a six-level loopy ladder system involving the\nRydberg states in which two sub-systems interfere constructively or\ndestructively depending upon the phase between the MW electric fields closing\nthe loop.\n", "title": "Highly sensitive atomic based MW interferometry" }
null
null
null
null
true
null
1877
null
Default
null
null
null
{ "abstract": " The Kalman Filter has been called one of the greatest inventions in\nstatistics during the 20th century. Its purpose is to measure the state of a\nsystem by processing the noisy data received from different electronic sensors.\nIn comparison, a useful resource for managers in their effort to make the right\ndecisions is the wisdom of crowds. This phenomenon allows managers to combine\njudgments by different employees to get estimates that are often more accurate\nand reliable than estimates, which managers produce alone. Since harnessing the\ncollective intelligence of employees, and filtering signals from multiple noisy\nsensors appear related, we looked at the possibility of using the Kalman Filter\non estimates by people. Our predictions suggest, and our findings based on the\nSurvey of Professional Forecasters reveal, that the Kalman Filter can help\nmanagers solve their decision-making problems by giving them stronger signals\nbefore they choose. Indeed, when used on a subset of forecasters identified by\nthe Contribution Weighted Model, the Kalman Filter beat that rule clearly,\nacross all the forecasting horizons in the survey.\n", "title": "The Wisdom of a Kalman Crowd" }
null
null
null
null
true
null
1878
null
Default
null
null
null
{ "abstract": " We present a new method for the separation of superimposed, independent,\nauto-correlated components from noisy multi-channel measurement. The presented\nmethod simultaneously reconstructs and separates the components, taking all\nchannels into account and thereby increases the effective signal-to-noise ratio\nconsiderably, allowing separations even in the high noise regime.\nCharacteristics of the measurement instruments can be included, allowing for\napplication in complex measurement situations. Independent posterior samples\ncan be provided, permitting error estimates on all desired quantities. Using\nthe concept of information field theory, the algorithm is not restricted to any\ndimensionality of the underlying space or discretization scheme thereof.\n", "title": "Noisy independent component analysis of auto-correlated components" }
null
null
null
null
true
null
1879
null
Default
null
null
null
{ "abstract": " GC-1 and GC-2 are two globular clusters (GCs) in the remote halo of M81 and\nM82 in the M81 group discovered by Jang et al. using the {\\it Hubble Space\nTelescope} ({\\it HST}) images. These two GCs were observed as part of the\nBeijing--Arizona--Taiwan--Connecticut (BATC) Multicolor Sky Survey, using 14\nintermediate-band filters covering a wavelength range of 4000--10000 \\AA. We\naccurately determine these two clusters' ages and masses by comparing their\nspectral energy distributions (from 2267 to 20000~{\\AA}, comprising photometric\ndata in the near-ultraviolet of the {\\it Galaxy Evolution Explorer}, 14 BATC\nintermediate-band, and Two Micron All Sky Survey near-infrared $JHK_{\\rm s}$\nfilters) with theoretical stellar population-synthesis models, resulting in\nages of $15.50\\pm3.20$ for GC-1 and $15.10\\pm2.70$ Gyr for GC-2. The masses of\nGC-1 and GC-2 obtained here are $1.77-2.04\\times 10^6$ and $5.20-7.11\\times\n10^6 \\rm~M_\\odot$, respectively. In addition, the deep observations with the\nAdvanced Camera for Surveys and Wide Field Camera 3 on the {\\it HST} are used\nto provide the surface brightness profiles of GC-1 and GC-2. The structural and\ndynamical parameters are derived from fitting the profiles to three different\nmodels; in particular, the internal velocity dispersions of GC-1 and GC-2 are\nderived, which can be compared with ones obtained based on spectral\nobservations in the future. For the first time, in this paper, the $r_h$ versus\n$M_V$ diagram shows that GC-2 is an ultra-compact dwarf in the M81 group.\n", "title": "Ages and structural and dynamical parameters of two globular clusters in the M81 group" }
null
null
null
null
true
null
1880
null
Default
null
null
null
{ "abstract": " We present a method to generate renewable scenarios using Bayesian\nprobabilities by implementing the Bayesian generative adversarial\nnetwork~(Bayesian GAN), which is a variant of generative adversarial networks\nbased on two interconnected deep neural networks. By using a Bayesian\nformulation, generators can be constructed and trained to produce scenarios\nthat capture different salient modes in the data, allowing for better diversity\nand more accurate representation of the underlying physical process. Compared\nto conventional statistical models that are often hard to scale or sample from,\nthis method is model-free and can generate samples extremely efficiently. For\nvalidation, we use wind and solar times-series data from NREL integration data\nsets to train the Bayesian GAN. We demonstrate that proposed method is able to\ngenerate clusters of wind scenarios with different variance and mean value, and\nis able to distinguish and generate wind and solar scenarios simultaneously\neven if the historical data are intentionally mixed.\n", "title": "Bayesian Renewables Scenario Generation via Deep Generative Networks" }
null
null
null
null
true
null
1881
null
Default
null
null
null
{ "abstract": " Many social and economic systems are naturally represented as networks, from\noff-line and on-line social networks, to bipartite networks, like Netflix and\nAmazon, between consumers and products. Graphons, developed as limits of\ngraphs, form a natural, nonparametric method to describe and estimate large\nnetworks like Facebook and LinkedIn. Here we describe the development of the\ntheory of graphons, for both dense and sparse networks, over the last decade.\nWe also review theorems showing that we can consistently estimate graphons from\nmassive networks in a wide variety of models. Finally, we show how to use\ngraphons to estimate missing links in a sparse network, which has applications\nfrom estimating social and information networks in development economics, to\nrigorously and efficiently doing collaborative filtering with applications to\nmovie recommendations in Netflix and product suggestions in Amazon.\n", "title": "Graphons: A Nonparametric Method to Model, Estimate, and Design Algorithms for Massive Networks" }
null
null
null
null
true
null
1882
null
Default
null
null
null
{ "abstract": " In this article Hopf parametric adjunctions are defined and analysed within\nthe context of the 2-adjunction of the type $\\mathbf{Adj}$-$\\mathbf{Mnd}$. In\norder to do so, the definition of adjoint objects in the 2-category of\nadjunctions and in the 2-category of monads for $Cat$ are revised and\ncharacterized. This article finalises with the application of the obtained\nresults on current categorical characterization of Hopf Monads.\n", "title": "Hopf Parametric Adjoint Objects through a 2-adjunction of the type Adj-Mnd" }
null
null
null
null
true
null
1883
null
Default
null
null
null
{ "abstract": " Solving symmetric positive definite linear problems is a fundamental\ncomputational task in machine learning. The exact solution, famously, is\ncubicly expensive in the size of the matrix. To alleviate this problem, several\nlinear-time approximations, such as spectral and inducing-point methods, have\nbeen suggested and are now in wide use. These are low-rank approximations that\nchoose the low-rank space a priori and do not refine it over time. While this\nallows linear cost in the data-set size, it also causes a finite, uncorrected\napproximation error. Authors from numerical linear algebra have explored ways\nto iteratively refine such low-rank approximations, at a cost of a small number\nof matrix-vector multiplications. This idea is particularly interesting in the\nmany situations in machine learning where one has to solve a sequence of\nrelated symmetric positive definite linear problems. From the machine learning\nperspective, such deflation methods can be interpreted as transfer learning of\na low-rank approximation across a time-series of numerical tasks. We study the\nuse of such methods for our field. Our empirical results show that, on\nregression and classification problems of intermediate size, this approach can\ninterpolate between low computational cost and numerical precision.\n", "title": "Krylov Subspace Recycling for Fast Iterative Least-Squares in Machine Learning" }
null
null
null
null
true
null
1884
null
Default
null
null
null
{ "abstract": " Despite remarkable achievements in its practical tractability, the notorious\nclass of NP-complete problems has been escaping all attempts to find a\nworst-case polynomial time-bound solution algorithms for any of them. The vast\nmajority of work relies on Turing machines or equivalent models, all of which\nrelate to digital computing. This raises the question of whether a computer\nthat is (partly) non-digital could offer a new door towards an efficient\nsolution. And indeed, the partition problem, which is another NP-complete\nsibling of the famous Boolean satisfiability problem SAT, might be open to\nefficient solutions using analogue computing. We investigate this hypothesis\nhere, providing experimental evidence that Partition, and in turn also SAT, may\nbecome tractable on a combined digital and analogue computing machine. This\nwork provides mostly theoretical and based on simulations, and as such does not\nexhibit a polynomial time algorithm to solve NP-complete problems. Instead, it\nis intended as a pointer to new directions of research on special-purpose\ncomputing architectures that may help handling the class NP efficiently.\n", "title": "Towards a Physical Oracle for the Partition Problem using Analogue Computing" }
null
null
null
null
true
null
1885
null
Default
null
null
null
{ "abstract": " These notes aim at presenting an overview of Bayesian statistics, the\nunderlying concepts and application methodology that will be useful to\nastronomers seeking to analyse and interpret a wide variety of data about the\nUniverse. The level starts from elementary notions, without assuming any\nprevious knowledge of statistical methods, and then progresses to more\nadvanced, research-level topics. After an introduction to the importance of\nstatistical inference for the physical sciences, elementary notions of\nprobability theory and inference are introduced and explained. Bayesian methods\nare then presented, starting from the meaning of Bayes Theorem and its use as\ninferential engine, including a discussion on priors and posterior\ndistributions. Numerical methods for generating samples from arbitrary\nposteriors (including Markov Chain Monte Carlo and Nested Sampling) are then\ncovered. The last section deals with the topic of Bayesian model selection and\nhow it is used to assess the performance of models, and contrasts it with the\nclassical p-value approach. A series of exercises of various levels of\ndifficulty are designed to further the understanding of the theoretical\nmaterial, including fully worked out solutions for most of them.\n", "title": "Bayesian Methods in Cosmology" }
null
null
null
null
true
null
1886
null
Default
null
null
null
{ "abstract": " Extracting useful entities and attribute values from illicit domains such as\nhuman trafficking is a challenging problem with the potential for widespread\nsocial impact. Such domains employ atypical language models, have `long tails'\nand suffer from the problem of concept drift. In this paper, we propose a\nlightweight, feature-agnostic Information Extraction (IE) paradigm specifically\ndesigned for such domains. Our approach uses raw, unlabeled text from an\ninitial corpus, and a few (12-120) seed annotations per domain-specific\nattribute, to learn robust IE models for unobserved pages and websites.\nEmpirically, we demonstrate that our approach can outperform feature-centric\nConditional Random Field baselines by over 18\\% F-Measure on five annotated\nsets of real-world human trafficking datasets in both low-supervision and\nhigh-supervision settings. We also show that our approach is demonstrably\nrobust to concept drift, and can be efficiently bootstrapped even in a serial\ncomputing environment.\n", "title": "Information Extraction in Illicit Domains" }
null
null
null
null
true
null
1887
null
Default
null
null
null
{ "abstract": " This tutorial provides a gentle introduction to kernel density estimation\n(KDE) and recent advances regarding confidence bands and geometric/topological\nfeatures. We begin with a discussion of basic properties of KDE: the\nconvergence rate under various metrics, density derivative estimation, and\nbandwidth selection. Then, we introduce common approaches to the construction\nof confidence intervals/bands, and we discuss how to handle bias. Next, we talk\nabout recent advances in the inference of geometric and topological features of\na density function using KDE. Finally, we illustrate how one can use KDE to\nestimate a cumulative distribution function and a receiver operating\ncharacteristic curve. We provide R implementations related to this tutorial at\nthe end.\n", "title": "A Tutorial on Kernel Density Estimation and Recent Advances" }
null
null
null
null
true
null
1888
null
Default
null
null
null
{ "abstract": " State-level minimum Bayes risk (sMBR) training has become the de facto\nstandard for sequence-level training of speech recognition acoustic models. It\nhas an elegant formulation using the expectation semiring, and gives large\nimprovements in word error rate (WER) over models trained solely using\ncross-entropy (CE) or connectionist temporal classification (CTC). sMBR\ntraining optimizes the expected number of frames at which the reference and\nhypothesized acoustic states differ. It may be preferable to optimize the\nexpected WER, but WER does not interact well with the expectation semiring, and\nprevious approaches based on computing expected WER exactly involve expanding\nthe lattices used during training. In this paper we show how to perform\noptimization of the expected WER by sampling paths from the lattices used\nduring conventional sMBR training. The gradient of the expected WER is itself\nan expectation, and so may be approximated using Monte Carlo sampling. We show\nexperimentally that optimizing WER during acoustic model training gives 5%\nrelative improvement in WER over a well-tuned sMBR baseline on a 2-channel\nquery recognition task (Google Home).\n", "title": "Optimizing expected word error rate via sampling for speech recognition" }
null
null
null
null
true
null
1889
null
Default
null
null
null
{ "abstract": " The increasing illegal parking has become more and more serious. Nowadays the\nmethods of detecting illegally parked vehicles are based on background\nsegmentation. However, this method is weakly robust and sensitive to\nenvironment. Benefitting from deep learning, this paper proposes a novel\nillegal vehicle parking detection system. Illegal vehicles captured by camera\nare firstly located and classified by the famous Single Shot MultiBox Detector\n(SSD) algorithm. To improve the performance, we propose to optimize SSD by\nadjusting the aspect ratio of default box to accommodate with our dataset\nbetter. After that, a tracking and analysis of movement is adopted to judge the\nillegal vehicles in the region of interest (ROI). Experiments show that the\nsystem can achieve a 99% accuracy and real-time (25FPS) detection with strong\nrobustness in complex environments.\n", "title": "Real-Time Illegal Parking Detection System Based on Deep Learning" }
null
null
null
null
true
null
1890
null
Default
null
null
null
{ "abstract": " We discuss some extensions of results from the recent paper by Chernoyarov et\nal. (Ann. Inst. Stat. Math., October 2016) concerning limit distributions of\nBayesian and maximum likelihood estimators in the model \"signal plus white\nnoise\" with irregular cusp-type signals. Using a new representation of\nfractional Brownian motion (fBm) in terms of cusp functions we show that as the\nnoise intensity tends to zero, the limit distributions are expressed in terms\nof fBm for the full range of asymmetric cusp-type signals correspondingly with\nthe Hurst parameter H, 0<H<1. Simulation results for the densities and\nvariances of the limit distributions of Bayesian and maximum likelihood\nestimators are also provided.\n", "title": "On a representation of fractional Brownian motion and the limit distributions of statistics arising in cusp statistical models" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
1891
null
Validated
null
null
null
{ "abstract": " We tightly analyze the sample complexity of CCA, provide a learning algorithm\nthat achieves optimal statistical performance in time linear in the required\nnumber of samples (up to log factors), as well as a streaming algorithm with\nsimilar guarantees.\n", "title": "Stochastic Canonical Correlation Analysis" }
null
null
null
null
true
null
1892
null
Default
null
null
null
{ "abstract": " We propose a novel approach to address the Simultaneous Detection and\nSegmentation problem. Using hierarchical structures we use an efficient and\naccurate procedure that exploits the hierarchy feature information using\nLocality Sensitive Hashing. We build on recent work that utilizes convolutional\nneural networks to detect bounding boxes in an image and then use the top\nsimilar hierarchical region that best fits each bounding box after hashing, we\ncall this approach CZ Segmentation. We then refine our final segmentation\nresults by automatic hierarchy pruning. CZ Segmentation introduces a train-free\nalternative to Hypercolumns. We conduct extensive experiments on PASCAL VOC\n2012 segmentation dataset, showing that CZ gives competitive state-of-the-art\nobject segmentations.\n", "title": "Segmentation of Instances by Hashing" }
null
null
null
null
true
null
1893
null
Default
null
null
null
{ "abstract": " This paper introduces the combinatorial Boolean model (CBM), which is defined\nas the class of linear combinations of conjunctions of Boolean attributes. This\npaper addresses the issue of learning CBM from labeled data. CBM is of high\nknowledge interpretability but naïve learning of it requires exponentially\nlarge computation time with respect to data dimension and sample size. To\novercome this computational difficulty, we propose an algorithm GRAB (GRAfting\nfor Boolean datasets), which efficiently learns CBM within the\n$L_1$-regularized loss minimization framework. The key idea of GRAB is to\nreduce the loss minimization problem to the weighted frequent itemset mining,\nin which frequent patterns are efficiently computable. We employ benchmark\ndatasets to empirically demonstrate that GRAB is effective in terms of\ncomputational efficiency, prediction accuracy and knowledge discovery.\n", "title": "Grafting for Combinatorial Boolean Model using Frequent Itemset Mining" }
null
null
null
null
true
null
1894
null
Default
null
null
null
{ "abstract": " On September 10, 2017, Hurricane Irma made landfall in the Florida Keys and\ncaused significant damage. Informed by hydrodynamic storm surge and wave\nmodeling and post-storm satellite imagery, a rapid damage survey was soon\nconducted for 1600+ residential buildings in Big Pine Key and Marathon. Damage\ncategorizations and statistical analysis reveal distinct factors governing\ndamage at these two locations. The distance from the coast is significant for\nthe damage in Big Pine Key, as severely damaged buildings were located near\nnarrow waterways connected to the ocean. Building type and size are critical in\nMarathon, highlighted by the near-complete destruction of trailer communities\nthere. These observations raise issues of affordability and equity that need\nconsideration in damage recovery and rebuilding for resilience.\n", "title": "Rapid Assessment of Damaged Homes in the Florida Keys after Hurricane Irma" }
null
null
null
null
true
null
1895
null
Default
null
null
null
{ "abstract": " Human behavioural patterns exhibit selfish or competitive, as well as\nselfless or altruistic tendencies, both of which have demonstrable effects on\nhuman social and economic activity. In behavioural economics, such effects have\ntraditionally been illustrated experimentally via simple games like the\ndictator and ultimatum games. Experiments with these games suggest that, beyond\nrational economic thinking, human decision-making processes are influenced by\nsocial preferences, such as an inclination to fairness. In this study we\nsuggest that the apparent gap between competitive and altruistic human\ntendencies can be bridged by assuming that people are primarily maximising\ntheir status, i.e., a utility function different from simple profit\nmaximisation. To this end we analyse a simple agent-based model, where\nindividuals play the repeated dictator game in a social network they can\nmodify. As model parameters we consider the living costs and the rate at which\nagents forget infractions by others. We find that individual strategies used in\nthe game vary greatly, from selfish to selfless, and that both of the above\nparameters determine when individuals form complex and cohesive social\nnetworks.\n", "title": "Status maximization as a source of fairness in a networked dictator game" }
null
null
null
null
true
null
1896
null
Default
null
null
null
{ "abstract": " We study the special central configurations of the curved N-body problem in\nS^3. We show that there are special central configurations formed by N masses\nfor any N >2. We then extend the concept of special central configurations to\nS^n, n>0, and study one interesting class of special central configurations in\nS^n, the Dziobek special central configurations. We obtain a criterion for them\nand reduce it to two sets of equations. Then we apply these equations to\nspecial central configurations of 3 bodies on S^1, 4 bodies on S^2, and 5\nbodies in S^3.\n", "title": "On Dziobek Special Central Configurations" }
null
null
null
null
true
null
1897
null
Default
null
null
null
{ "abstract": " Following the selection of The Gravitational Universe by ESA, and the\nsuccessful flight of LISA Pathfinder, the LISA Consortium now proposes a 4 year\nmission in response to ESA's call for missions for L3. The observatory will be\nbased on three arms with six active laser links, between three identical\nspacecraft in a triangular formation separated by 2.5 million km.\nLISA is an all-sky monitor and will offer a wide view of a dynamic cosmos\nusing Gravitational Waves as new and unique messengers to unveil The\nGravitational Universe. It provides the closest ever view of the infant\nUniverse at TeV energy scales, has known sources in the form of verification\nbinaries in the Milky Way, and can probe the entire Universe, from its smallest\nscales near the horizons of black holes, all the way to cosmological scales.\nThe LISA mission will scan the entire sky as it follows behind the Earth in its\norbit, obtaining both polarisations of the Gravitational Waves simultaneously,\nand will measure source parameters with astrophysically relevant sensitivity in\na band from below $10^{-4}\\,$Hz to above $10^{-1}\\,$Hz.\n", "title": "Laser Interferometer Space Antenna" }
null
null
null
null
true
null
1898
null
Default
null
null
null
{ "abstract": " Empirical Bayes is a versatile approach to `learn from a lot' in two ways:\nfirst, from a large number of variables and second, from a potentially large\namount of prior information, e.g. stored in public repositories. We review\napplications of a variety of empirical Bayes methods to several well-known\nmodel-based prediction methods including penalized regression, linear\ndiscriminant analysis, and Bayesian models with sparse or dense priors. We\ndiscuss `formal' empirical Bayes methods which maximize the marginal\nlikelihood, but also more informal approaches based on other data summaries. We\ncontrast empirical Bayes to cross-validation and full Bayes, and discuss hybrid\napproaches. To study the relation between the quality of an empirical Bayes\nestimator and $p$, the number of variables, we consider a simple empirical\nBayes estimator in a linear model setting.\nWe argue that empirical Bayes is particularly useful when the prior contains\nmultiple parameters which model a priori information on variables, termed\n`co-data'. In particular, we present two novel examples that allow for co-data.\nFirst, a Bayesian spike-and-slab setting that facilitates inclusion of multiple\nco-data sources and types; second, a hybrid empirical Bayes-full Bayes ridge\nregression approach for estimation of the posterior predictive interval.\n", "title": "Learning from a lot: Empirical Bayes in high-dimensional prediction settings" }
null
null
null
null
true
null
1899
null
Default
null
null
null
{ "abstract": " Techniques for reducing the variance of gradient estimates used in stochastic\nprogramming algorithms for convex finite-sum problems have received a great\ndeal of attention in recent years. By leveraging dissipativity theory from\ncontrol, we provide a new perspective on two important variance-reduction\nalgorithms: SVRG and its direct accelerated variant Katyusha. Our perspective\nprovides a physically intuitive understanding of the behavior of SVRG-like\nmethods via a principle of energy conservation. The tools discussed here allow\nus to automate the convergence analysis of SVRG-like methods by capturing their\nessential properties in small semidefinite programs amenable to standard\nanalysis and computational techniques. Our approach recovers existing\nconvergence results for SVRG and Katyusha and generalizes the theory to\nalternative parameter choices. We also discuss how our approach complements the\nlinear coupling technique. Our combination of perspectives leads to a better\nunderstanding of accelerated variance-reduced stochastic methods for finite-sum\nproblems.\n", "title": "Dissipativity Theory for Accelerating Stochastic Variance Reduction: A Unified Analysis of SVRG and Katyusha Using Semidefinite Programs" }
null
null
null
null
true
null
1900
null
Default
null
null