Schema of the dataset preview below (field name, value type):

field              type
-----------------  --------------------------------------
text               null
inputs             dict
prediction         null
prediction_agent   null
annotation         list
annotation_agent   null
multi_label        bool (1 class)
explanation        null
id                 string (lengths 1 to 5)
metadata           null
status             string (2 values: "Default", "Validated")
event_timestamp    null
metrics            null
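The records below follow this schema, flattened one field per line. As a rough
illustration of how such records could be consumed programmatically, here is a
minimal Python sketch; it assumes the records have been exported as JSON Lines
using the field names above, and the file name "records.jsonl" is hypothetical:

    import json
    from collections import Counter

    # Tally validation status and annotation labels across all records,
    # assuming one JSON object per line with the schema's field names.
    status_counts = Counter()
    label_counts = Counter()

    with open("records.jsonl", encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            status_counts[record["status"]] += 1   # "Validated" or "Default"
            # annotation is null on non-validated records, so guard against None.
            for label in record["annotation"] or []:
                label_counts[label] += 1

    print(status_counts)
    print(label_counts)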
----------------------------------------
text: null
inputs:
{ "abstract": " Previous secondary eclipse observations of the hot Jupiter Qatar-1b in the Ks\nband suggest that it may have an unusually high day side temperature,\nindicative of minimal heat redistribution. There have also been indications\nthat the orbit may be slightly eccentric, possibly forced by another planet in\nthe system. We investigate the day side temperature and orbital eccentricity\nusing secondary eclipse observations with Spitzer. We observed the secondary\neclipse with Spitzer/IRAC in subarray mode, in both 3.6 and 4.5 micron\nwavelengths. We used pixel-level decorrelation to correct for Spitzer's\nintra-pixel sensitivity variations and thereby obtain accurate eclipse depths\nand central phases. Our 3.6 micron eclipse depth is 0.149 +/- 0.051% and the\n4.5 micron depth is 0.273 +/- 0.049%. Fitting a blackbody planet to our data\nand two recent Ks band eclipse depths indicates a brightness temperature of\n1506 +/- 71K. Comparison to model atmospheres for the planet indicates that its\ndegree of longitudinal heat redistribution is intermediate between fully\nuniform and day side only. The day side temperature of the planet is unlikely\nto be as high (1885K) as indicated by the ground-based eclipses in the Ks band,\nunless the planet's emergent spectrum deviates strongly from model atmosphere\npredictions. The average central phase for our Spitzer eclipses is 0.4984 +/-\n0.0017, yielding e cos(omega) = -0.0028 +/- 0.0027. Our results are consistent\nwith a circular orbit, and we constrain e cos(omega) much more strongly than\nhas been possible with previous observations.\n", "title": "Spitzer Secondary Eclipses of Qatar-1b" }
annotation: [ "Physics" ]
multi_label: true
id: 16401
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Using a combination of analytic and numerical methods, we study the\npolarizability of a (non-interacting) Anderson insulator in one, two, and three\ndimensions and demonstrate that, in a wide range of parameters, it scales\nproportionally to the square of the localization length, contrary to earlier\nclaims based on the effective-medium approximation. We further analyze the\neffect of electron-electron interactions on the dielectric constant in\nquasi-1D, quasi-2D and 3D materials with large localization length, including\nboth Coulomb repulsion and phonon-mediated attraction. The phonon-mediated\nattraction (in the pseudogapped state on the insulating side of the\nSuperconductor-Insulator Transition) produces a correction to the dielectric\nconstant, which may be detected from a linear response of a dielectric constant\nto an external magnetic field.\n", "title": "Dielectric response of Anderson and pseudogapped insulators" }
annotation: null
multi_label: true
id: 16402
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " As a large-scale instance of dramatic collective behaviour, the 2005 French\nriots started in a poor suburb of Paris, then spread in all of France, lasting\nabout three weeks. Remarkably, although there were no displacements of rioters,\nthe riot activity did travel. Access to daily national police data has allowed\nus to explore the dynamics of riot propagation. Here we show that an\nepidemic-like model, with just a few parameters and a single sociological\nvariable characterizing neighbourhood deprivation, accounts quantitatively for\nthe full spatio-temporal dynamics of the riots. This is the first time that\nsuch data-driven modelling involving contagion both within and between cities\n(through geographic proximity or media) at the scale of a country, and on a\ndaily basis, is performed. Moreover, we give a precise mathematical\ncharacterization to the expression \"wave of riots\", and provide a visualization\nof the propagation around Paris, exhibiting the wave in a way not described\nbefore. The remarkable agreement between model and data demonstrates that\ngeographic proximity played a major role in the propagation, even though\ninformation was readily available everywhere through media. Finally, we argue\nthat our approach gives a general framework for the modelling of the dynamics\nof spontaneous collective uprisings.\n", "title": "Epidemiological modeling of the 2005 French riots: a spreading wave and the role of contagion" }
annotation: [ "Computer Science", "Physics" ]
multi_label: true
id: 16403
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The classical linear Black--Scholes model for pricing derivative securities\nis a popular model in financial industry. It relies on several restrictive\nassumptions such as completeness, and frictionless of the market as well as the\nassumption on the underlying asset price dynamics following a geometric\nBrownian motion. The main purpose of this paper is to generalize the classical\nBlack--Scholes model for pricing derivative securities by taking into account\nfeedback effects due to an influence of a large trader on the underlying asset\nprice dynamics exhibiting random jumps. The assumption that an investor can\ntrade large amounts of assets without affecting the underlying asset price\nitself is usually not satisfied, especially in illiquid markets. We generalize\nthe Frey--Stremme nonlinear option pricing model for the case the underlying\nasset follows a Levy stochastic process with jumps. We derive and analyze a\nfully nonlinear parabolic partial-integro differential equation for the price\nof the option contract. We propose a semi-implicit numerical discretization\nscheme and perform various numerical experiments showing influence of a large\ntrader and intensity of jumps on the option price.\n", "title": "Option Pricing in Illiquid Markets with Jumps" }
annotation: [ "Quantitative Finance" ]
multi_label: true
id: 16404
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " At the heart of the Bitcoin is a blockchain protocol, a protocol for\nachieving consensus on a public ledger that records bitcoin transactions. To\nthe extent that a blockchain protocol is used for applications such as contract\nsigning and making certain transactions (such as house sales) public, we need\nto understand what guarantees the protocol gives us in terms of agents'\nknowledge. Here, we provide a complete characterization of agent's knowledge\nwhen running a blockchain protocol using a variant of common knowledge that\ntakes into account the fact that agents can enter and leave the system, it is\nnot known which agents are in fact following the protocol (some agents may want\nto deviate if they can gain by doing so), and the fact that the guarantees\nprovided by blockchain protocols are probabilistic. We then consider some\nscenarios involving contracts and show that this level of knowledge suffices\nfor some scenarios, but not others.\n", "title": "A Knowledge-Based Analysis of the Blockchain Protocol" }
annotation: null
multi_label: true
id: 16405
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The regret bound of an optimization algorithms is one of the basic criteria\nfor evaluating the performance of the given algorithm. By inspecting the\ndifferences between the regret bounds of traditional algorithms and adaptive\none, we provide a guide for choosing an optimizer with respect to the given\ndata set and the loss function. For analysis, we assume that the loss function\nis convex and its gradient is Lipschitz continuous.\n", "title": "Convergence Analysis of Optimization Algorithms" }
annotation: null
multi_label: true
id: 16406
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " A set of points $X = X_B \\cup X_R \\subseteq \\mathbb{R}^d$ is linearly\nseparable if the convex hulls of $X_B$ and $X_R$ are disjoint, hence there\nexists a hyperplane separating $X_B$ from $X_R$. Such a hyperplane provides a\nmethod for classifying new points, according to which side of the hyperplane\nthe new points lie. When such a linear separation is not possible, it may still\nbe possible to partition $X_B$ and $X_R$ into prespecified numbers of groups,\nin such a way that every group from $X_B$ is linearly separable from every\ngroup from $X_R$. We may also discard some points as outliers, and seek to\nminimize the number of outliers necessary to find such a partition. Based on\nthese ideas, Bertsimas and Shioda proposed the classification and regression by\ninteger optimization (CRIO) method in 2007. In this work we explore the integer\nprogramming aspects of the classification part of CRIO, in particular\ntheoretical properties of the associated formulation. We are able to find\nfacet-inducing inequalities coming from the stable set polytope, hence showing\nthat this classification problem has exploitable combinatorial properties.\n", "title": "On the combinatorics of the 2-class classification problem" }
annotation: null
multi_label: true
id: 16407
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Magnetic fields are ubiquitous in the Universe. Extragalactic disks, halos\nand clusters have consistently been shown, via diffuse radio-synchrotron\nemission and Faraday rotation measurements, to exhibit magnetic field strengths\nranging from a few nG to tens of $\\mu$G. The energy density of these fields is\ntypically comparable to the energy density of the fluid motions of the plasma\nin which they are embedded, making magnetic fields essential players in the\ndynamics of the luminous matter. The standard theoretical model for the origin\nof these strong magnetic fields is through the amplification of tiny seed\nfields via turbulent dynamo to the level consistent with current observations.\nHere we demonstrate, using laser-produced colliding plasma flows, that\nturbulence is indeed capable of rapidly amplifying seed fields to near\nequipartition with the turbulent fluid motions. These results support the\nnotion that turbulent dynamo is a viable mechanism responsible for the observed\npresent-day magnetization of the Universe.\n", "title": "Laboratory evidence of dynamo amplification of magnetic fields in a turbulent plasma" }
annotation: null
multi_label: true
id: 16408
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The multiple colliding laser pulse concept formulated in Ref. [1] is\nbeneficial for achieving an extremely high amplitude of coherent\nelectromagnetic field. Since the topology of electric and magnetic fields\noscillating in time of multiple colliding laser pulses is far from trivial and\nthe radiation friction effects are significant in the high field limit, the\ndynamics of charged particles interacting with the multiple colliding laser\npulses demonstrates remarkable features corresponding to random walk\ntrajectories, limit circles, attractors, regular patterns and Levy flights.\nUnder extremely high intensity conditions the nonlinear dissipation mechanism\nstabilizes the particle motion resulting in the charged particle trajectory\nbeing located within narrow regions and in the occurrence of a new class of\nregular patterns made by the particle ensembles.\n", "title": "Radiating Electron Interaction with Multiple Colliding Electromagnetic Waves: Random Walk Trajectories, Levy Flights, Limit Circles, and Attractors (Survey of the Structurally Determinate Patterns)" }
annotation: null
multi_label: true
id: 16409
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In this work, we jointly address the problem of text detection and\nrecognition in natural scene images based on convolutional recurrent neural\nnetworks. We propose a unified network that simultaneously localizes and\nrecognizes text with a single forward pass, avoiding intermediate processes\nlike image cropping and feature re-calculation, word separation, or character\ngrouping. In contrast to existing approaches that consider text detection and\nrecognition as two distinct tasks and tackle them one by one, the proposed\nframework settles these two tasks concurrently. The whole framework can be\ntrained end-to-end, requiring only images, the ground-truth bounding boxes and\ntext labels. Through end-to-end training, the learned features can be more\ninformative, which improves the overall performance. The convolutional features\nare calculated only once and shared by both detection and recognition, which\nsaves processing time. Our proposed method has achieved competitive performance\non several benchmark datasets.\n", "title": "Towards End-to-end Text Spotting with Convolutional Recurrent Neural Networks" }
annotation: null
multi_label: true
id: 16410
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Water and hydroxyl, once thought to be found only in the primitive airless\nbodies that formed beyond roughly 2.5-3 AU, have recently been detected on the\nMoon and Vesta, which both have surfaces dominated by evolved, non-primitive\ncompositions. In both these cases, the water/OH is thought to be exogenic,\neither brought in via impacts with comets or hydrated asteroids or created via\nsolar wind interactions with silicates in the regolith or both. Such exogenic\nprocesses should also be occurring on other airless body surfaces. To test this\nhypothesis, we used the NASA Infrared Telescope Facility (IRTF) to measure\nreflectance spectra (2.0 to 4.1 {\\mu}m) of two large near-Earth asteroids\n(NEAs) with compositions generally interpreted as anhydrous: 433 Eros and 1036\nGanymed. OH is detected on both of these bodies in the form of absorption\nfeatures near 3 {\\mu}m. The spectra contain a component of thermal emission at\nlonger wavelengths, from which we estimate thermal of 167+/- 98 J m-2s-1/2K-1\nfor Eros (consistent with previous estimates) and 214+/- 80 J m-2s-1/2K-1 for\nGanymed, the first reported measurement of thermal inertia for this object.\nThese observations demonstrate that processes responsible for water/OH creation\non large airless bodies also act on much smaller bodies.\n", "title": "Evidence for OH or H2O on the surface of 433 Eros and 1036 Ganymed" }
annotation: null
multi_label: true
id: 16411
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We propose a general framework for studying jump-diffusion systems driven by\nboth Gaussian noise and a jump process with state-dependent intensity. Of\nparticular natural interest are the jump locations: the system evaluated at the\njump times. However, the state-dependence of the jump rate provides direct\ncoupling between the diffusion and jump components, making disentangling the\ntwo to study individually difficult. We provide an iterative map formulation of\nthe sequence of distributions of jump locations. Computation of these\ndistributions allows for the extraction of the interjump time statistics. These\nquantities reveal a relationship between the long-time distribution of jump\nlocation and the stationary density of the full process. We provide a few\nexamples to demonstrate the analytical and numerical tools stemming from the\nresults proposed in the paper, including an application that shows a\nnon-monotonic dependence on the strength of diffusion.\n", "title": "Jump Locations of Jump-Diffusion Processes with State-Dependent Rates" }
annotation: null
multi_label: true
id: 16412
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We have recently suggested that dust growth in the cold gas phase dominates\nthe dust abundance in elliptical galaxies while dust is efficiently destroyed\nin the hot X-ray emitting plasma (hot gas). In order to understand the dust\nevolution in elliptical galaxies, we construct a simple model that includes\ndust growth in the cold gas and dust destruction in the hot gas. We also take\ninto account the effect of mass exchange between these two gas components\ninduced by active galactic nucleus (AGN) feedback. We survey reasonable ranges\nof the relevant parameters in the model and find that AGN feedback cycles\nactually produce a variety in cold gas mass and dust-to-gas ratio. By comparing\nwith an observational sample of nearby elliptical galaxies, we find that,\nalthough the dust-to-gas ratio varies by an order of magnitude in our model,\nthe entire range of the observed dust-to-gas ratios is difficult to be\nreproduced under a single parameter set. Variation of the dust growth\nefficiency is the most probable solution to explain the large variety in\ndust-to-gas ratio of the observational sample. Therefore, dust growth can play\na central role in creating the variation in dust-to-gas ratio through the AGN\nfeedback cycle and through the variation in dust growth efficiency.\n", "title": "Dust evolution with active galactic nucleus feedback in elliptical galaxies" }
annotation: null
multi_label: true
id: 16413
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The paradigm shift from shallow classifiers with hand-crafted features to\nend-to-end trainable deep learning models has shown significant improvements on\nsupervised learning tasks. Despite the promising power of deep neural networks\n(DNN), how to alleviate overfitting during training has been a research topic\nof interest. In this paper, we present a Generative-Discriminative Variational\nModel (GDVM) for visual classification, in which we introduce a latent variable\ninferred from inputs for exhibiting generative abilities towards prediction. In\nother words, our GDVM casts the supervised learning task as a generative\nlearning process, with data discrimination to be jointly exploited for improved\nclassification. In our experiments, we consider the tasks of multi-class\nclassification, multi-label classification, and zero-shot learning. We show\nthat our GDVM performs favorably against the baselines or recent generative DNN\nmodels.\n", "title": "Generative-Discriminative Variational Model for Visual Recognition" }
annotation: null
multi_label: true
id: 16414
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In this paper, we present a novel approach for initializing deep neural\nnetworks, i.e., by turning PCA into neural layers. Usually, the initialization\nof the weights of a deep neural network is done in one of the three following\nways: 1) with random values, 2) layer-wise, usually as Deep Belief Network or\nas auto-encoder, and 3) re-use of layers from another network (transfer\nlearning). Therefore, typically, many training epochs are needed before\nmeaningful weights are learned, or a rather similar dataset is required for\nseeding a fine-tuning of transfer learning. In this paper, we describe how to\nturn a PCA into an auto-encoder, by generating an encoder layer of the PCA\nparameters and furthermore adding a decoding layer. We analyze the\ninitialization technique on real documents. First, we show that a PCA-based\ninitialization is quick and leads to a very stable initialization. Furthermore,\nfor the task of layout analysis we investigate the effectiveness of PCA-based\ninitialization and show that it outperforms state-of-the-art random weight\ninitialization methods.\n", "title": "PCA-Initialized Deep Neural Networks Applied To Document Image Analysis" }
annotation: null
multi_label: true
id: 16415
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " It is of practical significance to define the notion of a measure of quality\nof a control system, i.e., a quantitative extension of the classical notion of\ncontrollability. In this article we demonstrate that the three standard\nmeasures of quality involving the trace, minimum eigenvalue, and the\ndeterminant of the controllability grammian achieve their optimum values when\nthe columns of the controllability matrix from a tight frame. Motivated by\nthis, and in view of some recent developments in frame theoretic signal\nprocessing, we provide a measure of quality for LTI systems based on a measure\nof tightness of the columns of the reachability matrix .\n", "title": "On a frame theoretic measure of quality of LTI systems" }
annotation: null
multi_label: true
id: 16416
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " A task of clustering data given in the ordinal scale under conditions of\noverlapping clusters has been considered. It's proposed to use an approach\nbased on memberhsip and likelihood functions sharing. A number of performed\nexperiments proved effectiveness of the proposed method. The proposed method is\ncharacterized by robustness to outliers due to a way of ordering values while\nconstructing membership functions.\n", "title": "Fuzzy Clustering Data Given on the Ordinal Scale Based on Membership and Likelihood Functions Sharing" }
annotation: null
multi_label: true
id: 16417
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " ALICE (A Large Ion Collider Experiment) is the heavy-ion detector designed to\nstudy the strongly interacting state of matter realized in relativistic\nheavy-ion collisions at the CERN Large Hadron Collider (LHC). A major upgrade\nof the experiment is planned during the 2019-2020 long shutdown. In order to\ncope with a data rate 100 times higher than during LHC Run 1 and with the\ncontinuous read-out of the Time Projection Chamber (TPC), it is necessary to\nupgrade the Online and Offline Computing to a new common system called O2 . The\nO2 read- out chain will use commodity x86 Linux servers equipped with custom\nPCIe FPGA-based read- out cards. This paper discusses the driver architecture\nfor the cards that will be used in O2 : the PCIe v2 x8, Xilinx Virtex 6 based\nC-RORC (Common Readout Receiver Card) and the PCIe v3 x16, Intel Arria 10 based\nCRU (Common Readout Unit). Access to the PCIe cards is provided via three\nlayers of software. Firstly, the low-level PCIe (PCI Express) layer responsible\nfor the userspace interface for low-level operations such as memory mapping the\nPCIe BAR (Base Address Registers) and creating scatter-gather lists, which is\nprovided by the PDA (Portable Driver Architecture) library developed by the\nFrankfurt Institute for Advanced Studies (FIAS). Above that sits our userspace\ndriver which implements synchronization, controls the read-out card -- e.g.\nresetting and configuring the card, providing it with bus addresses to transfer\ndata to and checking for data arrival -- and presents a uniform, high-level C++\ninterface that abstracts over the differences between the C-RORC and CRU. This\ninterface -- of which direct usage is principally intended for high-performance\nread-out processes -- allows users to configure and use the various aspects of\nthe read-out cards, such as configuration, DMA transfers and commands to the\nfront-end. [...]\n", "title": "The ALICE O2 common driver for the C-RORC and CRU read-out cards" }
annotation: [ "Computer Science", "Physics" ]
multi_label: true
id: 16418
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " This paper proposes an approach to domain transfer based on a pairwise loss\nfunction that helps transfer control policies learned in simulation onto a real\nrobot. We explore the idea in the context of a 'category level' manipulation\ntask where a control policy is learned that enables a robot to perform a mating\ntask involving novel objects. We explore the case where depth images are used\nas the main form of sensor input. Our experimental results demonstrate that\nproposed method consistently outperforms baseline methods that train only in\nsimulation or that combine real and simulated data in a naive way.\n", "title": "Adapting control policies from simulation to reality using a pairwise loss" }
annotation: null
multi_label: true
id: 16419
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We report the results of the 2dF-VST ATLAS Cold Spot galaxy redshift survey\n(2CSz) based on imaging from VST ATLAS and spectroscopy from 2dF AAOmega over\nthe core of the CMB Cold Spot. We sparsely surveyed the inner 5$^{\\circ}$\nradius of the Cold Spot to a limit of $i_{AB} \\le 19.2$, sampling $\\sim7000$\ngalaxies at $z<0.4$. We have found voids at $z=$ 0.14, 0.26 and 0.30 but they\nare interspersed with small over-densities and the scale of these voids is\ninsufficient to explain the Cold Spot through the $\\Lambda$CDM ISW effect.\nCombining with previous data out to $z\\sim1$, we conclude that the CMB Cold\nSpot could not have been imprinted by a void confined to the inner core of the\nCold Spot. Additionally we find that our 'control' field GAMA G23 shows a\nsimilarity in its galaxy redshift distribution to the Cold Spot. Since the GAMA\nG23 line-of-sight shows no evidence of a CMB temperature decrement we conclude\nthat the Cold Spot may have a primordial origin rather than being due to\nline-of-sight effects.\n", "title": "Evidence against a supervoid causing the CMB Cold Spot" }
annotation: null
multi_label: true
id: 16420
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We construct a toy a model which demonstrates that large field single scalar\ninflation can produce an arbitrarily small scalar to tensor ratio in the window\nof e-foldings recoverable from CMB experiments. This is done by generalizing\nthe $\\alpha$-attractor models to allow the potential to approach a constant as\nrapidly as we desire for super-planckian field values. This implies that a\nnon-detection of r alone can never rule out entirely the theory of large field\ninflation.\n", "title": "How to Produce an Arbitrarily Small Tensor to Scalar Ratio" }
annotation: null
multi_label: true
id: 16421
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We develop a metalearning approach for learning hierarchically structured\npolicies, improving sample efficiency on unseen tasks through the use of shared\nprimitives---policies that are executed for large numbers of timesteps.\nSpecifically, a set of primitives are shared within a distribution of tasks,\nand are switched between by task-specific policies. We provide a concrete\nmetric for measuring the strength of such hierarchies, leading to an\noptimization problem for quickly reaching high reward on unseen tasks. We then\npresent an algorithm to solve this problem end-to-end through the use of any\noff-the-shelf reinforcement learning method, by repeatedly sampling new tasks\nand resetting task-specific policies. We successfully discover meaningful motor\nprimitives for the directional movement of four-legged robots, solely by\ninteracting with distributions of mazes. We also demonstrate the\ntransferability of primitives to solve long-timescale sparse-reward obstacle\ncourses, and we enable 3D humanoid robots to robustly walk and crawl with the\nsame policy.\n", "title": "Meta Learning Shared Hierarchies" }
annotation: null
multi_label: true
id: 16422
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We present the procedure to build and validate the bright-star masks for the\nHyper-Suprime-Cam Strategic Subaru Proposal (HSC-SSP) survey. To identify and\nmask the saturated stars in the full HSC-SSP footprint, we rely on the Gaia and\nTycho-2 star catalogues. We first assemble a pure star catalogue down to\n$G_{\\rm Gaia} < 18$ after removing $\\sim1.5\\%$ of sources that appear extended\nin the Sloan Digital Sky Survey (SDSS). We perform visual inspection on the\nearly data from the S16A internal release of HSC-SSP, finding that our star\ncatalogue is $99.2\\%$ pure down to $G_{\\rm Gaia} < 18$. Second, we build the\nmask regions in an automated way using stacked detected source measurements\naround bright stars binned per $G_{\\rm Gaia}$ magnitude. Finally, we validate\nthose masks from visual inspection and comparison with the literature of galaxy\nnumber counts and angular two-point correlation functions. This version\n(Arcturus) supersedes the previous version (Sirius) used in the S16A internal\nand DR1 public releases. We publicly release the full masks and tools to flag\nobjects in the entire footprint of the planned HSC-SSP observations at this\naddress: this ftp URL.\n", "title": "The bright-star masks for the HSC-SSP survey" }
annotation: null
multi_label: true
id: 16423
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Generative adversarial networks (GANs) are a family of generative models that\ndo not minimize a single training criterion. Unlike other generative models,\nthe data distribution is learned via a game between a generator (the generative\nmodel) and a discriminator (a teacher providing training signal) that each\nminimize their own cost. GANs are designed to reach a Nash equilibrium at which\neach player cannot reduce their cost without changing the other players'\nparameters. One useful approach for the theory of GANs is to show that a\ndivergence between the training distribution and the model distribution obtains\nits minimum value at equilibrium. Several recent research directions have been\nmotivated by the idea that this divergence is the primary guide for the\nlearning process and that every step of learning should decrease the\ndivergence. We show that this view is overly restrictive. During GAN training,\nthe discriminator provides learning signal in situations where the gradients of\nthe divergences between distributions would not be useful. We provide empirical\ncounterexamples to the view of GAN training as divergence minimization.\nSpecifically, we demonstrate that GANs are able to learn distributions in\nsituations where the divergence minimization point of view predicts they would\nfail. We also show that gradient penalties motivated from the divergence\nminimization perspective are equally helpful when applied in other contexts in\nwhich the divergence minimization perspective does not predict they would be\nhelpful. This contributes to a growing body of evidence that GAN training may\nbe more usefully viewed as approaching Nash equilibria via trajectories that do\nnot necessarily minimize a specific divergence at each step.\n", "title": "Many Paths to Equilibrium: GANs Do Not Need to Decrease a Divergence At Every Step" }
annotation: null
multi_label: true
id: 16424
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We discuss memory models which are based on tensor decompositions using\nlatent representations of entities and events. We show how episodic memory and\nsemantic memory can be realized and discuss how new memory traces can be\ngenerated from sensory input: Existing memories are the basis for perception\nand new memories are generated via perception. We relate our mathematical\napproach to the hippocampal memory indexing theory. We describe the first\ndetailed mathematical models for the complete processing pipeline from sensory\ninput and its semantic decoding, i.e., perception, to the formation of episodic\nand semantic memories and their declarative semantic decodings. Our main\nhypothesis is that perception includes an active semantic decoding process,\nwhich relies on latent representations of entities and predicates, and that\nepisodic and semantic memories depend on the same decoding process. We\ncontribute to the debate between the leading memory consolidation theories,\ni.e., the standard consolidation theory (SCT) and the multiple trace theory\n(MTT). The latter is closely related to the complementary learning systems\n(CLS) framework. In particular, we show explicitly how episodic memory can\nteach the neocortex to form a semantic memory, which is a core issue in MTT and\nCLS.\n", "title": "The Tensor Memory Hypothesis" }
annotation: null
multi_label: true
id: 16425
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Published by Reporters Without Borders every year, the Press Freedom Index\n(PFI) reflects the fear and tension in the newsroom pushed by the government\nand private sectors. While the PFI is invaluable in monitoring media\nenvironments worldwide, the current survey-based method has inherent\nlimitations to updates in terms of cost and time. In this work, we introduce an\nalternative way to measure the level of press freedom using media attention\ndiversity compiled from Unfiltered News.\n", "title": "Data-driven Approach to Measuring the Level of Press Freedom Using Media Attention Diversity from Unfiltered News" }
annotation: [ "Computer Science" ]
multi_label: true
id: 16426
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Advanced satellite-based frequency transfers by TWCP and IPPP have been\nperformed between NICT and KRISS. We confirm that the disagreement between them\nis less than 1x10^{-16} at an averaging time of several days. Additionally, an\nintercontinental frequency ratio measurement of Sr and Yb optical lattice\nclocks was directly performed by TWCP. We achieved an uncertainty at the\nmid-10^{-16} level after a total measurement time of 12 hours. The frequency\nratio was consistent with the recently reported values within the uncertainty.\n", "title": "Advanced Satellite-based Frequency Transfer at the 10^{-16} Level" }
annotation: [ "Physics" ]
multi_label: true
id: 16427
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In this paper we investigate the problem of detecting dynamically evolving\nsignals. We model the signal as an $n$ dimensional vector that is either zero\nor has $s$ non-zero components. At each time step $t\\in \\mathbb{N}$ the\nnon-zero components change their location independently with probability $p$.\nThe statistical problem is to decide whether the signal is a zero vector or in\nfact it has non-zero components. This decision is based on $m$ noisy\nobservations of individual signal components collected at times $t=1,\\ldots,m$.\nWe consider two different sensing paradigms, namely adaptive and non-adaptive\nsensing. For non-adaptive sensing the choice of components to measure has to be\ndecided before the data collection process started, while for adaptive sensing\none can adjust the sensing process based on observations collected earlier. We\ncharacterize the difficulty of this detection problem in both sensing paradigms\nin terms of the aforementioned parameters, with special interest to the speed\nof change of the active components. In addition we provide an adaptive sensing\nalgorithm for this problem and contrast its performance to that of non-adaptive\ndetection algorithms.\n", "title": "Are there needles in a moving haystack? Adaptive sensing for detection of dynamically evolving signals" }
annotation: null
multi_label: true
id: 16428
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In this paper, we propose an encoder-decoder convolutional neural network\n(CNN) architecture for estimating camera pose (orientation and location) from a\nsingle RGB-image. The architecture has a hourglass shape consisting of a chain\nof convolution and up-convolution layers followed by a regression part. The\nup-convolution layers are introduced to preserve the fine-grained information\nof the input image. Following the common practice, we train our model in\nend-to-end manner utilizing transfer learning from large scale classification\ndata. The experiments demonstrate the performance of the approach on data\nexhibiting different lighting conditions, reflections, and motion blur. The\nresults indicate a clear improvement over the previous state-of-the-art even\nwhen compared to methods that utilize sequence of test frames instead of a\nsingle frame.\n", "title": "Image-based Localization using Hourglass Networks" }
annotation: null
multi_label: true
id: 16429
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We consider a finite-dimensional quantum system coupled to the bosonic\nradiation field and subject to a time-periodic control operator. Assuming the\nvalidity of a certain dynamic decoupling condition we approximate the system's\ntime evolution with respect to the non-interacting dynamics. For sufficiently\nsmall coupling constants $g$ and control periods $T$ we show that a certain\ndeviation of coupled and uncoupled propagator may be estimated by\n$\\mathcal{O}(gt \\, T)$. Our approach relies on the concept of Kato stability\nand general theory on non-autonomous linear evolution equations.\n", "title": "Suppression of Decoherence of a Spin-Boson System by Time-Periodic Control" }
annotation: null
multi_label: true
id: 16430
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The model-based control of building heating systems for energy saving\nencounters severe physical, mathematical and calibration difficulties in the\nnumerous attempts that has been published until now. This topic is addressed\nhere via a new model-free control setting, where the need of any mathematical\ndescription disappears. Several convincing computer simulations are presented.\nComparisons with classic PI controllers and flatness-based predictive control\nare provided.\n", "title": "Energy saving for building heating via a simple and efficient model-free control design: First steps with computer simulations" }
annotation: null
multi_label: true
id: 16431
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We propose a formal approach for relating abstract separation logic library\nspecifications with the trace properties they enforce on interactions between a\nclient and a library. Separation logic with abstract predicates enforces a\nresource discipline that constrains when and how calls may be made between a\nclient and a library. Intuitively, this can enforce a protocol on the\ninteraction trace. This intuition is broadly used in the separation logic\ncommunity but has not previously been formalised. We provide just such a\nformalisation. Our approach is based on using wrappers which instrument library\ncode to induce execution traces for the properties under examination. By\nconsidering a separation logic extended with trace resources, we prove that\nwhen a library satisfies its separation logic specification then its wrapped\nversion satisfies the same specification and, moreover, maintains the trace\nproperties as an invariant. Consequently, any client and library implementation\nthat are correct with respect to the separation logic specification will\nsatisfy the trace properties.\n", "title": "Trace Properties from Separation Logic Specifications" }
annotation: [ "Computer Science" ]
multi_label: true
id: 16432
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Owing to their capability of summarising interactions between elements of a\nsystem, networks have become a common type of data in many fields. As networks\ncan be inhomogeneous, in that different regions of the network may exhibit\ndifferent topologies, an important topic concerns their local properties. This\npaper focuses on the estimation of the local degree distribution of a vertex in\nan inhomogeneous network. The contributions are twofold: we propose an\nestimator based on local weighted averaging, and we set up a Monte Carlo\ncross-validation procedure to pick the parameters of this estimator. Under a\nspecific modelling assumption we derive an oracle inequality that shows how the\nmodel parameters affect the precision of the estimator. We illustrate our\nmethod by several numerical experiments, on both real and synthetic data,\nshowing in particular that the approach considerably improves upon the natural,\nempirical estimator.\n", "title": "Estimation of Local Degree Distributions via Local Weighted Averaging and Monte Carlo Cross-Validation" }
annotation: null
multi_label: true
id: 16433
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The Atacama Millimeter/submillimeter Array (ALMA) Phasing Project (APP) has\ndeveloped and deployed the hardware and software necessary to coherently sum\nthe signals of individual ALMA antennas and record the aggregate sum in Very\nLong Baseline Interferometry (VLBI) Data Exchange Format. These beamforming\ncapabilities allow the ALMA array to collectively function as the equivalent of\na single large aperture and participate in global VLBI arrays. The inclusion of\nphased ALMA in current VLBI networks operating at (sub)millimeter wavelengths\nprovides an order of magnitude improvement in sensitivity, as well as\nenhancements in u-v coverage and north-south angular resolution. The\navailability of a phased ALMA enables a wide range of new ultra-high angular\nresolution science applications, including the resolution of supermassive black\nholes on event horizon scales and studies of the launch and collimation of\nastrophysical jets. It also provides a high-sensitivity aperture that may be\nused for investigations such as pulsar searches at high frequencies. This paper\nprovides an overview of the ALMA Phasing System design, implementation, and\nperformance characteristics.\n", "title": "The ALMA Phasing System: A Beamforming Capability for Ultra-High-Resolution Science at (Sub)Millimeter Wavelengths" }
annotation: null
multi_label: true
id: 16434
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We simulate a rotating 2D BEC to study the melting of a vortex lattice in\npresence of random impurities. Impurities are introduced either through a\nprotocol in which vortex lattice is produced in an impurity potential or first\ncreating the vortex lattice in the absence of random pinning and then cranking\nup the (co-rotating) impurity potential. We find that for a fixed strength,\npinning of vortices at randomly distributed impurities leads to the new states\nof vortex lattice. It is unearthed that the vortex lattice follow a two-step\nmelting via loss of positional and orientational order. Also, the comparisons\nbetween the states obtained in two protocols show that the vortex lattice\nstates are metastable states when impurities are introduced after the formation\nof an ordered vortex lattice. We also show the existence of metastable states\nwhich depend on the history of how the vortex lattice is created.\n", "title": "Signatures of two-step impurity mediated vortex lattice melting in Bose-Einstein Condensates" }
annotation: null
multi_label: true
id: 16435
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In coronary CT angiography, a series of CT images are taken at different\nlevels of radiation dose during the examination. Although this reduces the\ntotal radiation dose, the image quality during the low-dose phases is\nsignificantly degraded. To address this problem, here we propose a novel\nsemi-supervised learning technique that can remove the noises of the CT images\nobtained in the low-dose phases by learning from the CT images in the routine\ndose phases. Although a supervised learning approach is not possible due to the\ndifferences in the underlying heart structure in two phases, the images in the\ntwo phases are closely related so that we propose a cycle-consistent\nadversarial denoising network to learn the non-degenerate mapping between the\nlow and high dose cardiac phases. Experimental results showed that the proposed\nmethod effectively reduces the noise in the low-dose CT image while the\npreserving detailed texture and edge information. Moreover, thanks to the\ncyclic consistency and identity loss, the proposed network does not create any\nartificial features that are not present in the input images. Visual grading\nand quality evaluation also confirm that the proposed method provides\nsignificant improvement in diagnostic quality.\n", "title": "Cycle Consistent Adversarial Denoising Network for Multiphase Coronary CT Angiography" }
annotation: null
multi_label: true
id: 16436
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Let $\\mathcal{A}$ be a finite-dimensional subspace of\n$C(\\mathcal{X};\\mathbb{R})$, where $\\mathcal{X}$ is a locally compact Hausdorff\nspace, and $\\mathsf{A}=\\{f_1,\\dots,f_m\\}$ a basis of $\\mathcal{A}$. A sequence\n$s=(s_j)_{j=1}^m$ is called a moment sequence if $s_j=\\int f_j(x) \\, d\\mu(x)$,\n$j=1,\\dots,m$, for some positive Radon measure $\\mu$ on $\\mathcal{X}$. Each\nmoment sequence $s$ has a finitely atomic representing measure $\\mu$. The\nsmallest possible number of atoms is called the Carathéodory number\n$\\mathcal{C}_{\\mathsf{A}}(s)$. The largest number $\\mathcal{C}_{\\mathsf{A}}(s)$\namong all moment sequences $s$ is the Carathéodory number\n$\\mathcal{C}_{\\mathsf{A}}$. In this paper the Carathéodory numbers\n$\\mathcal{C}_{\\mathsf{A}}(s)$ and $\\mathcal{C}_{\\mathsf{A}}$ are studied. In\nthe case of differentiable functions methods from differential geometry are\nused. The main emphasis is on real polynomials. For a large class of spaces of\npolynomials in one variable the number $\\mathcal{C}_{\\mathsf{A}}$ is\ndetermined. In the multivariate case we obtain some lower bounds and we use\nresults on zeros of positive polynomials to derive upper bounds for the\nCarathéodory numbers.\n", "title": "The multidimensional truncated Moment Problem: Carathéodory Numbers" }
annotation: [ "Mathematics" ]
multi_label: true
id: 16437
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The World Wide Web (WWW) has fundamentally changed the ways billions of\npeople are able to access information. Thus, understanding how people seek\ninformation online is an important issue of study. Wikipedia is a hugely\nimportant part of information provision on the web, with hundreds of millions\nof users browsing and contributing to its network of knowledge. The study of\nnavigational behaviour on Wikipedia, due to the site's popularity and breadth\nof content, can reveal more general information seeking patterns that may be\napplied beyond Wikipedia and the Web. Our work addresses the relative\nshortcomings of existing literature in relating how information structure\ninfluences patterns of navigation online. We study aggregated clickstream data\nfor articles on the English Wikipedia in the form of a weighted, directed\nnavigational network. We introduce two parameters that describe how articles\nact to source and spread traffic through the network, based on their in/out\nstrength and entropy. From these, we construct a navigational phase space where\ndifferent article types occupy different, distinct regions, indicating how the\nstructure of information online has differential effects on patterns of\nnavigation. Finally, we go on to suggest applications for this analysis in\nidentifying and correcting deficiencies in the Wikipedia page network that may\nalso be adapted to more general information networks.\n", "title": "Inspiration, Captivation, and Misdirection: Emergent Properties in Networks of Online Navigation" }
annotation: null
multi_label: true
id: 16438
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Shan-Chen model is a numerical scheme to simulate multiphase fluid flows\nusing Lattice Boltzmann approach. The original Shan-Chen model suffers from\ninability to accurately predict behavior of air bubbles interacting in a\nnon-aqueous fluid. In the present study, we extended the Shan-Chen model to\ntake the effect of the attraction-repulsion barriers among bubbles in to\naccount. The proposed model corrects the interaction and coalescence criterion\nof the original Shan-Chen scheme in order to have a more accurate simulation of\nbubbles morphology in a metal foam. The model is based on forming a thin film\n(narrow channel) between merging bubbles during growth. Rupturing of the film\noccurs when an oscillation in velocity and pressure arises inside the channel\nfollowed by merging of the bubbles. Comparing numerical results obtained from\nproposed model with mettallorgraphy images for aluminum A356 demonstrated a\ngood consistency in mean bubble size and bubbles distribution\n", "title": "Multiphase Aluminum A356 Foam Formation Process Simulation Using Lattice Boltzmann Method" }
annotation: null
multi_label: true
id: 16439
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " In this thesis, we study the deformation problem of coisotropic submanifolds\nin Jacobi manifolds. In particular we attach two algebraic invariants to any\ncoisotropic submanifold $S$ in a Jacobi manifold, namely the\n$L_\\infty[1]$-algebra and the BFV-complex of $S$. Our construction generalizes\nand unifies analogous constructions in symplectic, Poisson, and locally\nconformal symplectic geometry. As a new special case we also attach an\n$L_\\infty[1]$-algebra and a BFV-complex to any coisotropic submanifold in a\ncontact manifold. The $L_\\infty[1]$-algebra of $S$ controls the formal\ncoisotropic deformation problem of $S$, even under Hamiltonian equivalence. The\nBFV-complex of $S$ controls the non-formal coisotropic deformation problem of\n$S$, even under both Hamiltonian and Jacobi equivalence. In view of these\nresults, we exhibit, in the contact setting, two examples of coisotropic\nsubmanifolds whose coisotropic deformation problem is obstructed.\n", "title": "Deformations of coisotropic submanifolds in Jacobi manifolds" }
annotation: null
multi_label: true
id: 16440
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The complexity of a learning task is increased by transformations in the\ninput space that preserve class identity. Visual object recognition for example\nis affected by changes in viewpoint, scale, illumination or planar\ntransformations. While drastically altering the visual appearance, these\nchanges are orthogonal to recognition and should not be reflected in the\nrepresentation or feature encoding used for learning. We introduce a framework\nfor weakly supervised learning of image embeddings that are robust to\ntransformations and selective to the class distribution, using sets of\ntransforming examples (orbit sets), deep parametrizations and a novel\norbit-based loss. The proposed loss combines a discriminative, contrastive part\nfor orbits with a reconstruction error that learns to rectify orbit\ntransformations. The learned embeddings are evaluated in distance metric-based\ntasks, such as one-shot classification under geometric transformations, as well\nas face verification and retrieval under more realistic visual variability. Our\nresults suggest that orbit sets, suitably computed or observed, can be used for\nefficient, weakly-supervised learning of semantically relevant image\nembeddings.\n", "title": "Discriminate-and-Rectify Encoders: Learning from Image Transformation Sets" }
annotation: null
multi_label: true
id: 16441
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " A general greedy approach to construct coverings of compact metric spaces by\nmetric balls is given and analyzed. The analysis is a continuous version of\nChvatal's analysis of the greedy algorithm for the weighted set cover problem.\nThe approach is demonstrated in an exemplary manner to construct efficient\ncoverings of the n-dimensional sphere and n-dimensional Euclidean space to give\nshort and transparent proofs of several best known bounds obtained from\ndeterministic constructions in the literature on sphere coverings.\n", "title": "Covering compact metric spaces greedily" }
annotation: null
multi_label: true
id: 16442
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " The formalism of the reduced density matrix is pursued in both length and\nvelocity gauges of the perturbation to the crystal Hamiltonian. The covariant\nderivative is introduced as a convenient representation of the position\noperator. This allow us to write compact expressions for the reduced density\nmatrix in any order of the perturbation which simplifies the calculations of\nnonlinear optical responses; as an example, we compute the first and third\norder contributions of the monolayer graphene. Expressions obtained in both\ngauges share the same formal structure, allowing a comparison of the effects of\ntruncation to a finite set of bands. This truncation breaks the equivalence\nbetween the two approaches: its proper implementation can be done directly in\nthe expressions derived in the length gauge, but require a revision of the\nequations of motion of the reduced density matrix in the velocity gauge.\n", "title": "Gauge covariances and nonlinear optical responses" }
annotation: [ "Physics" ]
multi_label: true
id: 16443
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " We report muon spin relaxation ($\\mu$SR) measurements of optimally-doped and\noverdoped Bi$_{2+x}$Sr$_{2-x}$CaCu$_2$O$_{8+\\delta}$ (Bi2212) single crystals\nthat reveal the presence of a weak temperature-dependent quasi-static internal\nmagnetic field of electronic origin in the superconducting (SC) and pseudogap\n(PG) phases. In both samples the internal magnetic field persists up to 160~K,\nbut muon diffusion prevents following the evolution of the field to higher\ntemperatures. We consider the evidence from our measurments in support of PG\norder parameter candidates, namely, electronic loop currents and\nmagnetoelectric quadrupoles.\n", "title": "Quasi-Static Internal Magnetic Field Detected in the Pseudogap Phase of Bi$_{2+x}$Sr$_{2-x}$CaCu$_2$O$_{8+δ}$ by $μ$SR" }
annotation: null
multi_label: true
id: 16444
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Advances in unsupervised learning enable reconstruction and generation of\nsamples from complex distributions, but this success is marred by the\ninscrutability of the representations learned. We propose an\ninformation-theoretic approach to characterizing disentanglement and dependence\nin representation learning using multivariate mutual information, also called\ntotal correlation. The principle of total Cor-relation Ex-planation (CorEx) has\nmotivated successful unsupervised learning applications across a variety of\ndomains, but under some restrictive assumptions. Here we relax those\nrestrictions by introducing a flexible variational lower bound to CorEx.\nSurprisingly, we find that this lower bound is equivalent to the one in\nvariational autoencoders (VAE) under certain conditions. This\ninformation-theoretic view of VAE deepens our understanding of hierarchical VAE\nand motivates a new algorithm, AnchorVAE, that makes latent codes more\ninterpretable through information maximization and enables generation of richer\nand more realistic samples.\n", "title": "Auto-Encoding Total Correlation Explanation" }
annotation: null
multi_label: true
id: 16445
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Open bisimilarity is the original notion of bisimilarity to be introduced for\nthe pi-calculus that is a congruence. In open bisimilarity, free names in\nprocesses are treated as variables that may be instantiated lazily; in contrast\nto early and late bisimilarity where free names are constants. We build on the\nestablished line of work, due to Milner, Parrow, and Walker, on classical modal\nlogics characterising early and late bisimilarity for the $\\pi$-calculus. The\nimportant insight is, to characterise open bisimilarity, we move to the setting\nof intuitionistic modal logics. The intuitionistic modal logic introduced,\ncalled OM, is such that modalities are closed under (respectful) substitutions,\ninducing a property known as intuitionistic hereditary. Intuitionistic\nhereditary reflects the lazy instantiation of names in open bisimilarity. The\nsoundness proof for open bisimilarity with respect to the modal logic is\nmechanised in Abella. The constructive content of the completeness proof\nprovides an algorithm for generating distinguishing formulae, where such\nformulae are useful as a certificate explaining why two processes are not open\nbisimilar. We draw attention to the fact that open bisimilarity is not the only\nnotion of bisimilarity that is a congruence: for name-passing calculi there is\na classical/intuitionistic spectrum of bisimilarities.\n", "title": "A Characterisation of Open Bisimilarity using an Intuitionistic Modal Logic" }
annotation: null
multi_label: true
id: 16446
status: Default
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Cities across the United States are undergoing great transformation and urban\ngrowth. Data and data analysis has become an essential element of urban\nplanning as cities use data to plan land use and development. One great\nchallenge is to use the tools of data science to promote equity along with\ngrowth. The city of Atlanta is an example site of large-scale urban renewal\nthat aims to engage in development without displacement. On the Westside of\ndowntown Atlanta, the construction of the new Mercedes-Benz Stadium and the\nconversion of an underutilized rail-line into a multi-use trail may result in\nincreased property values. In response to community residents' concerns and a\ncommitment to development without displacement, the city and philanthropic\npartners announced an Anti-Displacement Tax Fund to subsidize future property\ntax increases of owner occupants for the next twenty years. To achieve greater\ntransparency, accountability, and impact, residents expressed a desire for a\ntool that would help them determine eligibility and quantify this commitment.\nIn support of this goal, we use machine learning techniques to analyze\nhistorical tax assessment and predict future tax assessments. We then apply\neligibility estimates to our predictions to estimate the total cost for the\nfirst seven years of the program. These forecasts are also incorporated into an\ninteractive tool for community residents to determine their eligibility for the\nfund and the expected increase in their home value over the next seven years.\n", "title": "Using data science as a community advocacy tool to promote equity in urban renewal programs: An analysis of Atlanta's Anti-Displacement Tax Fund" }
annotation: [ "Computer Science" ]
multi_label: true
id: 16447
status: Validated
(prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, metrics: null)

----------------------------------------
text: null
inputs:
{ "abstract": " Bayesian statistical models allow us to formalise our knowledge about the\nworld and reason about our uncertainty, but there is a need for better\nprocedures to accurately encode its complexity. One way to do so is through\ncompositional models, which are formed by combining blocks consisting of\nsimpler models. One can increase the complexity of the compositional model by\neither stacking more blocks or by using a not-so-simple model as a building\nblock. This thesis is an example of the latter. One first aim is to expand the\nchoice of Bayesian nonparametric (BNP) blocks for constructing tractable\ncompositional models. So far, most of the models that have a Bayesian\nnonparametric component use a Dirichlet Process or a Pitman-Yor process because\nof the availability of tractable and compact representations. This thesis shows\nhow to overcome certain intractabilities in order to obtain analogous compact\nrepresentations for the class of Poisson-Kingman priors which includes the\nDirichlet and Pitman-Yor processes.\nA major impediment to the widespread use of Bayesian nonparametric building\nblocks is that inference is often costly, intractable or difficult to carry\nout. This is an active research area since dealing with the model's infinite\ndimensional component forbids the direct use of standard simulation-based\nmethods. The main contribution of this thesis is a variety of inference schemes\nthat tackle this problem: Markov chain Monte Carlo and Sequential Monte Carlo\nmethods, which are exact inference schemes since they target the true\nposterior. The contributions of this thesis, in a larger context, provide\ngeneral purpose exact inference schemes in the flavour or probabilistic\nprogramming: the user is able to choose from a variety of models, focusing only\non the modelling part. Indeed, if the wide enough class of Poisson-Kingman\npriors is used as one of our blocks, this objective is achieved.\n", "title": "General Bayesian inference schemes in infinite mixture models" }
null
null
null
null
true
null
16448
null
Default
null
null
null
{ "abstract": " Flow-based generative models (Dinh et al., 2014) are conceptually attractive\ndue to tractability of the exact log-likelihood, tractability of exact\nlatent-variable inference, and parallelizability of both training and\nsynthesis. In this paper we propose Glow, a simple type of generative flow\nusing an invertible 1x1 convolution. Using our method we demonstrate a\nsignificant improvement in log-likelihood on standard benchmarks. Perhaps most\nstrikingly, we demonstrate that a generative model optimized towards the plain\nlog-likelihood objective is capable of efficient realistic-looking synthesis\nand manipulation of large images. The code for our model is available at\nthis https URL\n", "title": "Glow: Generative Flow with Invertible 1x1 Convolutions" }
null
null
[ "Statistics" ]
null
true
null
16449
null
Validated
null
null
null
{ "abstract": " By virtue of a suitable approximation argument, we prove a Pohozaev identity\nfor nonlinear nonlocal problems on $\\mathbb{R}^N$ involving the fractional\n$p-$Laplacian operator. Furthermore we provide an application of the identity\nto show that some relevant levels of the energy functional associated with the\nproblem coincide.\n", "title": "Pohozaev identity for the fractional $p-$Laplacian on $\\mathbb{R}^N$" }
null
null
null
null
true
null
16450
null
Default
null
null
null
{ "abstract": " This is an expository survey on recent sum-product results in finite fields.\nWe present a number of sum-product or \"expander\" results that say that if\n$|A| > p^{2/3}$ then some set determined by sums and product of elements of $A$\nis nearly as large as possible, and if $|A|<p^{2/3}$ then the set in question\nis significantly larger that $A$. These results are based on a point-plane\nincidence bound of Rudnev, and are quantitatively stronger than a wave of\nearlier results following Bourgain, Katz, and Tao's breakthrough sum-product\nresult.\nIn addition, we present two geometric results: an incidence bound due to\nStevens and de Zeeuw, and bound on collinear triples, and an example of an\nexpander that breaks the threshold of $p^{2/3}$ required by the other results.\nWe have simplified proofs wherever possible, and hope that this survey may\nserve as a compact guide to recent advances in arithmetic combinatorics over\nfinite fields. We do not claim originality for any of the results.\n", "title": "A Second Wave of Expanders over Finite Fields" }
null
null
null
null
true
null
16451
null
Default
null
null
null
{ "abstract": " Using the method of Elias-Hogancamp and combinatorics of toric braids we give\nan explicit formula for the triply graded Khovanov-Rozansky homology of an\narbitrary torus knot, thereby proving some of the conjectures of\nAganagic-Shakirov, Cherednik, Gorsky-Negut and Oblomkov-Rasmussen-Shende.\n", "title": "Homology of torus knots" }
null
null
[ "Mathematics" ]
null
true
null
16452
null
Validated
null
null
null
{ "abstract": " For any $r\\geq 1$ and $\\mathbf{n} \\in \\mathbb{Z}_{\\geq0}^r \\setminus\n\\{\\mathbf0\\}$ we construct a poset $W_{\\mathbf{n}}$ called a 2-associahedron.\nThe 2-associahedra arose in symplectic geometry, where they are expected to\ncontrol maps between Fukaya categories of different symplectic manifolds. We\nprove that the completion $\\widehat{W_{\\mathbf{n}}}$ is an abstract polytope of\ndimension $|\\mathbf{n}|+r-3$. There are forgetful maps $W_{\\mathbf{n}} \\to\nK_r$, where $K_r$ is the $(r-2)$-dimensional associahedron, and the\n2-associahedra specialize to the associahedra (in two ways) and to the\nmultiplihedra. In an appendix, we work out the 2- and 3-dimensional\nassociahedra in detail.\n", "title": "2-associahedra" }
null
null
null
null
true
null
16453
null
Default
null
null
null
{ "abstract": " Colloidal migration in temperature gradient is referred to as thermophoresis.\nIn contrast to particles with spherical shape, we show that elongated colloids\nmay have a thermophoretic response that varies with the colloid orientation.\nRemarkably, this can translate into a non-vanishing thermophoretic force in the\ndirection perpendicular to the temperature gradient. Oppositely to the friction\nforce, the thermophoretic force of a rod oriented with the temperature gradient\ncan be larger or smaller than when oriented perpendicular to it. The precise\nanisotropic thermophoretic behavior clearly depends on the colloidal rod aspect\nratio, and also on its surface details, which provides an interesting\ntunability to the devices constructed based on this principle. By means of\nmesoscale hydrodynamic simulations, we characterize this effect for different\ntypes of rod-like colloids.\n", "title": "Anisotropic thermophoresis" }
null
null
[ "Physics" ]
null
true
null
16454
null
Validated
null
null
null
{ "abstract": " This paper proposes an ultra-wideband (UWB) aided localization and mapping\nsystem that leverages on inertial sensor and depth camera. Inspired by the fact\nthat visual odometry (VO) system, regardless of its accuracy in the short term,\nstill faces challenges with accumulated errors in the long run or under\nunfavourable environments, the UWB ranging measurements are fused to remove the\nvisual drift and improve the robustness. A general framework is developed which\nconsists of three parallel threads, two of which carry out the visual-inertial\nodometry (VIO) and UWB localization respectively. The other mapping thread\nintegrates visual tracking constraints into a pose graph with the proposed\nsmooth and virtual range constraints, such that an optimization is performed to\nprovide robust trajectory estimation. Experiments show that the proposed system\nis able to create dense drift-free maps in real-time even running on an\nultra-low power processor in featureless environments.\n", "title": "Ultra-Wideband Aided Fast Localization and Mapping System" }
null
null
null
null
true
null
16455
null
Default
null
null
null
{ "abstract": " Sensor fusion is a fundamental process in robotic systems as it extends the\nperceptual range and increases robustness in real-world operations. Current\nmulti-sensor deep learning based semantic segmentation approaches do not\nprovide robustness to under-performing classes in one modality, or require a\nspecific architecture with access to the full aligned multi-sensor training\ndata. In this work, we analyze statistical fusion approaches for semantic\nsegmentation that overcome these drawbacks while keeping a competitive\nperformance. The studied approaches are modular by construction, allowing to\nhave different training sets per modality and only a much smaller subset is\nneeded to calibrate the statistical models. We evaluate a range of statistical\nfusion approaches and report their performance against state-of-the-art\nbaselines on both real-world and simulated data. In our experiments, the\napproach improves performance in IoU over the best single modality segmentation\nresults by up to 5%. We make all implementations and configurations publicly\navailable.\n", "title": "Modular Sensor Fusion for Semantic Segmentation" }
null
null
null
null
true
null
16456
null
Default
null
null
null
{ "abstract": " Growth, electronic and magnetic properties of $\\gamma'$-Fe$_{4}$N atomic\nlayers on Cu(001) are studied by scanning tunneling microscopy/spectroscopy and\nx-ray absorption spectroscopy/magnetic circular dichroism. A continuous film of\nordered trilayer $\\gamma'$-Fe$_{4}$N is obtained by Fe deposition under N$_{2}$\natmosphere onto monolayer Fe$_{2}$N/Cu(001), while the repetition of a\nbombardment with 0.5 keV N$^{+}$ ions during growth cycles results in imperfect\nbilayer $\\gamma'$-Fe$_{4}$N. The increase in the sample thickness causes the\nchange of the surface electronic structure, as well as the enhancement in the\nspin magnetic moment of Fe atoms reaching $\\sim$ 1.4 $\\mu_{\\mathrm B}$/atom in\nthe trilayer sample. The observed thickness-dependent properties of the system\nare well interpreted by layer-resolved density of states calculated using first\nprinciples, which demonstrates the strongly layer-dependent electronic states\nwithin each surface, subsurface, and interfacial plane of the\n$\\gamma'$-Fe$_{4}$N atomic layers on Cu(001).\n", "title": "Thickness-dependent electronic and magnetic properties of $γ'$-Fe$_{\\mathrm 4}$N atomic layers on Cu(001)" }
null
null
null
null
true
null
16457
null
Default
null
null
null
{ "abstract": " NAND flash memory is ubiquitous in everyday life today because its capacity\nhas continuously increased and cost has continuously decreased over decades.\nThis positive growth is a result of two key trends: (1) effective process\ntechnology scaling, and (2) multi-level (e.g., MLC, TLC) cell data coding.\nUnfortunately, the reliability of raw data stored in flash memory has also\ncontinued to become more difficult to ensure, because these two trends lead to\n(1) fewer electrons in the flash memory cell (floating gate) to represent the\ndata and (2) larger cell-to-cell interference and disturbance effects. Without\nmitigation, worsening reliability can reduce the lifetime of NAND flash memory.\nAs a result, flash memory controllers in solid-state drives (SSDs) have become\nmuch more sophisticated: they incorporate many effective techniques to ensure\nthe correct interpretation of noisy data stored in flash memory cells.\nIn this article, we review recent advances in SSD error characterization,\nmitigation, and data recovery techniques for reliability and lifetime\nimprovement. We provide rigorous experimental data from state-of-the-art MLC\nand TLC NAND flash devices on various types of flash memory errors, to motivate\nthe need for such techniques. Based on the understanding developed by the\nexperimental characterization, we describe several mitigation and recovery\ntechniques, including (1) cell-to-cell interference mitigation, (2) optimal\nmulti-level cell sensing, (3) error correction using state-of-the-art\nalgorithms and methods, and (4) data recovery when error correction fails. We\nquantify the reliability improvement provided by each of these techniques.\nLooking forward, we briefly discuss how flash memory and these techniques could\nevolve into the future.\n", "title": "Error Characterization, Mitigation, and Recovery in Flash Memory Based Solid-State Drives" }
null
null
null
null
true
null
16458
null
Default
null
null
null
{ "abstract": " A common practice in most of deep convolutional neural architectures is to\nemploy fully-connected layers followed by Softmax activation to minimize\ncross-entropy loss for the sake of classification. Recent studies show that\nsubstitution or addition of the Softmax objective to the cost functions of\nsupport vector machines or linear discriminant analysis is highly beneficial to\nimprove the classification performance in hybrid neural networks. We propose a\nnovel paradigm to link the optimization of several hybrid objectives through\nunified backpropagation. This highly alleviates the burden of extensive\nboosting for independent objective functions or complex formulation of\nmultiobjective gradients. Hybrid loss functions are linked by basic probability\nassignment from evidence theory. We conduct our experiments for a variety of\nscenarios and standard datasets to evaluate the advantage of our proposed\nunification approach to deliver consistent improvements into the classification\nperformance of deep convolutional neural networks.\n", "title": "Unified Backpropagation for Multi-Objective Deep Learning" }
null
null
null
null
true
null
16459
null
Default
null
null
null
{ "abstract": " Recent advances in weakly supervised classification allow us to train a\nclassifier only from positive and unlabeled (PU) data. However, existing PU\nclassification methods typically require an accurate estimate of the\nclass-prior probability, which is a critical bottleneck particularly for\nhigh-dimensional data. This problem has been commonly addressed by applying\nprincipal component analysis in advance, but such unsupervised dimension\nreduction can collapse underlying class structure. In this paper, we propose a\nnovel representation learning method from PU data based on the\ninformation-maximization principle. Our method does not require class-prior\nestimation and thus can be used as a preprocessing method for PU\nclassification. Through experiments, we demonstrate that our method combined\nwith deep neural networks highly improves the accuracy of PU class-prior\nestimation, leading to state-of-the-art PU classification performance.\n", "title": "Information-Theoretic Representation Learning for Positive-Unlabeled Classification" }
null
null
null
null
true
null
16460
null
Default
null
null
null
{ "abstract": " The halting probability of a Turing machine,also known as Chaitin's Omega, is\nan algorithmically random number with many interesting properties. Since\nChaitin's seminal work, many popular expositions have appeared, mainly focusing\non the metamathematical or philosophical significance of Omega (or debating\nagainst it). At the same time, a rich mathematical theory exploring the\nproperties of Chaitin's Omega has been brewing in various technical papers,\nwhich quietly reveals the significance of this number to many aspects of\ncontemporary algorithmic information theory. The purpose of this survey is to\nexpose these developments and tell a story about Omega, which outlines its\nmultifaceted mathematical properties and roles in algorithmic randomness.\n", "title": "Aspects of Chaitin's Omega" }
null
null
null
null
true
null
16461
null
Default
null
null
null
{ "abstract": " When we are faced with challenging image classification tasks, we often\nexplain our reasoning by dissecting the image, and pointing out prototypical\naspects of one class or another. The mounting evidence for each of the classes\nhelps us make our final decision. In this work, we introduce a deep network\narchitecture that reasons in a similar way: the network dissects the image by\nfinding prototypical parts, and combines evidence from the prototypes to make a\nfinal classification. The model thus reasons in a way that is qualitatively\nsimilar to the way ornithologists, physicians, geologists, architects, and\nothers would explain to people on how to solve challenging image classification\ntasks. The network uses only image-level labels for training, meaning that\nthere are no labels for parts of images. We demonstrate our method on the\nCUB-200-2011 dataset and the CBIS-DDSM dataset. Our experiments show that our\ninterpretable network can achieve comparable accuracy with its analogous\nstandard non-interpretable counterpart as well as other interpretable deep\nmodels.\n", "title": "This Looks Like That: Deep Learning for Interpretable Image Recognition" }
null
null
null
null
true
null
16462
null
Default
null
null
null
{ "abstract": " Deep Learning refers to a set of machine learning techniques that utilize\nneural networks with many hidden layers for tasks, such as image\nclassification, speech recognition, language understanding. Deep learning has\nbeen proven to be very effective in these domains and is pervasively used by\nmany Internet services. In this paper, we describe different automotive uses\ncases for deep learning in particular in the domain of computer vision. We\nsurveys the current state-of-the-art in libraries, tools and infrastructures\n(e.\\,g.\\ GPUs and clouds) for implementing, training and deploying deep neural\nnetworks. We particularly focus on convolutional neural networks and computer\nvision use cases, such as the visual inspection process in manufacturing plants\nand the analysis of social media data. To train neural networks, curated and\nlabeled datasets are essential. In particular, both the availability and scope\nof such datasets is typically very limited. A main contribution of this paper\nis the creation of an automotive dataset, that allows us to learn and\nautomatically recognize different vehicle properties. We describe an end-to-end\ndeep learning application utilizing a mobile app for data collection and\nprocess support, and an Amazon-based cloud backend for storage and training.\nFor training we evaluate the use of cloud and on-premises infrastructures\n(including multiple GPUs) in conjunction with different neural network\narchitectures and frameworks. We assess both the training times as well as the\naccuracy of the classifier. Finally, we demonstrate the effectiveness of the\ntrained classifier in a real world setting during manufacturing process.\n", "title": "Deep Learning in the Automotive Industry: Applications and Tools" }
null
null
null
null
true
null
16463
null
Default
null
null
null
{ "abstract": " Multilevel converters have found many applications within renewable energy\nsystems thanks to their unique capability of generating multiple voltage\nlevels. However, these converters need multiple DC sources and the voltage\nbalancing over capacitors for these systems is cumbersome. In this work, a new\ngrid-tie multicell inverter with high level of safety has been designed,\nengineered and optimized for integrating energy storage devices to the electric\ngrid. The multilevel converter proposed in this work is capable of maintaining\nthe flying capacitors voltage in the desired value. The solar cells are the\nprimary energy sources for proposed inverter where the maximum power density is\nobtained. Finally, the performance of the inverter and its control method\nsimulated using PSCAD/EMTDC software package and good agreement achieved with\nexperimental data.\n", "title": "Design, Engineering and Optimization of a Grid-Tie Multicell Inverter for Energy Storage Applications" }
null
null
[ "Physics" ]
null
true
null
16464
null
Validated
null
null
null
{ "abstract": " This is the English translation of my old paper 'Definición y estudio de\nuna función indefinidamente diferenciable de soporte compacto', Rev. Real\nAcad. Ciencias 76 (1982) 21-38. In it a function (essentially Fabius function)\nis defined and given its main properties, including: unicity, interpretation as\na probability, partition of unity with its translates, formulas for its $n$-th\nderivates, rationality of its values at dyadic points, formulas for the\neffective computation of these values, and some arithmetical properties of\nthese values. Since I need it now for a reference, I have translated it.\n", "title": "An infinitely differentiable function with compact support: Definition and properties" }
null
null
null
null
true
null
16465
null
Default
null
null
null
{ "abstract": " In the present work, we develop a delayed Logistic growth model to study the\neffects of decontamination on the bacterial population in the ambient\nenvironment. Using the linear stability analysis, we study different case\nscenarios, where bacterial population may establish at the positive equilibrium\nor go extinct due to increased decontamination. The results are verified using\nnumerical simulation of the model.\n", "title": "Analysis of bacterial population growth using extended logistic growth model with distributed delay" }
null
null
null
null
true
null
16466
null
Default
null
null
null
{ "abstract": " We investigate a hybrid quantum-classical solution method to the\nmean-variance portfolio optimization problems. Starting from real financial\ndata statistics and following the principles of the Modern Portfolio Theory, we\ngenerate parametrized samples of portfolio optimization problems that can be\nrelated to quadratic binary optimization forms programmable in the analog\nD-Wave Quantum Annealer 2000Q. The instances are also solvable by an\nindustry-established Genetic Algorithm approach, which we use as a classical\nbenchmark. We investigate several options to run the quantum computation\noptimally, ultimately discovering that the best results in terms of expected\ntime-to-solution as a function of number of variables for the hardest instances\nset are obtained by seeding the quantum annealer with a solution candidate\nfound by a greedy local search and then performing a reverse annealing\nprotocol. The optimized reverse annealing protocol is found to be more than 100\ntimes faster than the corresponding forward quantum annealing on average.\n", "title": "Reverse Quantum Annealing Approach to Portfolio Optimization Problems" }
null
null
[ "Quantitative Finance" ]
null
true
null
16467
null
Validated
null
null
null
{ "abstract": " Modern deep transfer learning approaches have mainly focused on learning\ngeneric feature vectors from one task that are transferable to other tasks,\nsuch as word embeddings in language and pretrained convolutional features in\nvision. However, these approaches usually transfer unary features and largely\nignore more structured graphical representations. This work explores the\npossibility of learning generic latent relational graphs that capture\ndependencies between pairs of data units (e.g., words or pixels) from\nlarge-scale unlabeled data and transferring the graphs to downstream tasks. Our\nproposed transfer learning framework improves performance on various tasks\nincluding question answering, natural language inference, sentiment analysis,\nand image classification. We also show that the learned graphs are generic\nenough to be transferred to different embeddings on which the graphs have not\nbeen trained (including GloVe embeddings, ELMo embeddings, and task-specific\nRNN hidden unit), or embedding-free units such as image pixels.\n", "title": "GLoMo: Unsupervisedly Learned Relational Graphs as Transferable Representations" }
null
null
null
null
true
null
16468
null
Default
null
null
null
{ "abstract": " This paper introduces a novel method to perform transfer learning across\ndomains and tasks, formulating it as a problem of learning to cluster. The key\ninsight is that, in addition to features, we can transfer similarity\ninformation and this is sufficient to learn a similarity function and\nclustering network to perform both domain adaptation and cross-task transfer\nlearning. We begin by reducing categorical information to pairwise constraints,\nwhich only considers whether two instances belong to the same class or not.\nThis similarity is category-agnostic and can be learned from data in the source\ndomain using a similarity network. We then present two novel approaches for\nperforming transfer learning using this similarity function. First, for\nunsupervised domain adaptation, we design a new loss function to regularize\nclassification with a constrained clustering loss, hence learning a clustering\nnetwork with the transferred similarity metric generating the training inputs.\nSecond, for cross-task learning (i.e., unsupervised clustering with unseen\ncategories), we propose a framework to reconstruct and estimate the number of\nsemantic clusters, again using the clustering network. Since the similarity\nnetwork is noisy, the key is to use a robust clustering algorithm, and we show\nthat our formulation is more robust than the alternative constrained and\nunconstrained clustering approaches. Using this method, we first show state of\nthe art results for the challenging cross-task problem, applied on Omniglot and\nImageNet. Our results show that we can reconstruct semantic clusters with high\naccuracy. We then evaluate the performance of cross-domain transfer using\nimages from the Office-31 and SVHN-MNIST tasks and present top accuracy on both\ndatasets. Our approach doesn't explicitly deal with domain discrepancy. If we\ncombine with a domain adaptation loss, it shows further improvement.\n", "title": "Learning to cluster in order to transfer across domains and tasks" }
null
null
null
null
true
null
16469
null
Default
null
null
null
{ "abstract": " Perceptual aliasing is one of the main causes of failure for Simultaneous\nLocalization and Mapping (SLAM) systems operating in the wild. Perceptual\naliasing is the phenomenon where different places generate a similar visual\n(or, in general, perceptual) footprint. This causes spurious measurements to be\nfed to the SLAM estimator, which typically results in incorrect localization\nand mapping results. The problem is exacerbated by the fact that those outliers\nare highly correlated, in the sense that perceptual aliasing creates a large\nnumber of mutually-consistent outliers. Another issue stems from the fact that\nmost state-of-the-art techniques rely on a given trajectory guess (e.g., from\nodometry) to discern between inliers and outliers and this makes the resulting\npipeline brittle, since the accumulation of error may result in incorrect\nchoices and recovery from failures is far from trivial. This work provides a\nunified framework to model perceptual aliasing in SLAM and provides practical\nalgorithms that can cope with outliers without relying on any initial guess. We\npresent two main contributions. The first is a Discrete-Continuous Graphical\nModel (DC-GM) for SLAM: the continuous portion of the DC-GM captures the\nstandard SLAM problem, while the discrete portion describes the selection of\nthe outliers and models their correlation. The second contribution is a\nsemidefinite relaxation to perform inference in the DC-GM that returns\nestimates with provable sub-optimality guarantees. Experimental results on\nstandard benchmarking datasets show that the proposed technique compares\nfavorably with state-of-the-art methods while not relying on an initial guess\nfor optimization.\n", "title": "Modeling Perceptual Aliasing in SLAM via Discrete-Continuous Graphical Models" }
null
null
null
null
true
null
16470
null
Default
null
null
null
{ "abstract": " Among the more important hallmarks of human intelligence, which any\nartificial general intelligence (AGI) should have, are the following. 1. It\nmust be capable of on-line learning, including with single/few trials. 2.\nMemories/knowledge must be permanent over lifelong durations, safe from\ncatastrophic forgetting. Some confabulation, i.e., semantically plausible\nretrieval errors, may gradually accumulate over time. 3. The time to both: a)\nlearn a new item, and b) retrieve the best-matching / most relevant item(s),\ni.e., do similarity-based retrieval, must remain constant throughout the\nlifetime. 4. The system should never become full: it must remain able to store\nnew information, i.e., make new permanent memories, throughout very long\nlifetimes. No artificial computational system has been shown to have all these\nproperties. Here, we describe a neuromorphic associative memory model, Sparsey,\nwhich does, in principle, possess them all. We cite prior results supporting\npossession of hallmarks 1 and 3 and sketch an argument, hinging on strongly\nrecursive, hierarchical, part-whole compositional structure of natural data,\nthat Sparsey also possesses hallmarks 2 and 4.\n", "title": "Sparse distributed representation, hierarchy, critical periods, metaplasticity: the keys to lifelong fixed-time learning and best-match retrieval" }
null
null
[ "Quantitative Biology" ]
null
true
null
16471
null
Validated
null
null
null
{ "abstract": " The modified Gram-Schmidt (MGS) orthogonalization is one of the most\nwell-used algorithms for computing the thin QR factorization. MGS can be\nstraightforwardly extended to a non-standard inner product with respect to a\nsymmetric positive definite matrix $A$. For the thin QR factorization of an $m\n\\times n$ matrix with the non-standard inner product, a naive implementation of\nMGS requires $2n$ matrix-vector multiplications (MV) with respect to $A$. In\nthis paper, we propose $n$-MV implementations: a high accuracy (HA) type and a\nhigh performance (HP) type, of MGS. We also provide error bounds of the HA-type\nimplementation. Numerical experiments and analysis indicate that the proposed\nimplementations have competitive advantages over the naive implementation in\nterms of both computational cost and accuracy.\n", "title": "Efficient implementations of the modified Gram-Schmidt orthogonalization with a non-standard inner product" }
null
null
null
null
true
null
16472
null
Default
null
null
null
{ "abstract": " Firstly, we derive in dimension one a new covariance inequality of\n$L_{1}-L_{\\infty}$ type that characterizes the isoperimetric constant as the\nbest constant achieving the inequality. Secondly, we generalize our result to\n$L_{p}-L_{q}$ bounds for the covariance. Consequently, we recover Cheeger's\ninequality without using the co-area formula. We also prove a generalized\nweighted Hardy type inequality that is needed to derive our covariance\ninequalities and that is of independent interest. Finally, we explore some\nconsequences of our covariance inequalities for $L_{p}$-Poincaré\ninequalities and moment bounds. In particular, we obtain optimal constants in\ngeneral $L_{p}$-Poincaré inequalities for measures with finite\nisoperimetric constant, thus generalizing in dimension one Cheeger's\ninequality, which is a $L_{p}$-Poincaré inequality for $p=2$, to any real\n$p\\geq 1$.\n", "title": "On the isoperimetric constant, covariance inequalities and $L_p$-Poincaré inequalities in dimension one" }
null
null
null
null
true
null
16473
null
Default
null
null
null
{ "abstract": " We present algorithms for real and complex dot product and matrix\nmultiplication in arbitrary-precision floating-point and ball arithmetic. A\nlow-overhead dot product is implemented on the level of GMP limb arrays; it is\nabout twice as fast as previous code in MPFR and Arb at precision up to several\nhundred bits. Up to 128 bits, it is 3-4 times as fast, costing 20-30 cycles per\nterm for floating-point evaluation and 40-50 cycles per term for balls. We\nhandle large matrix multiplications even more efficiently via blocks of scaled\ninteger matrices. The new methods are implemented in Arb and significantly\nspeed up polynomial operations and linear algebra.\n", "title": "Faster arbitrary-precision dot product and matrix multiplication" }
null
null
null
null
true
null
16474
null
Default
null
null
null
{ "abstract": " Unsupervised representation learning for tweets is an important research\nfield which helps in solving several business applications such as sentiment\nanalysis, hashtag prediction, paraphrase detection and microblog ranking. A\ngood tweet representation learning model must handle the idiosyncratic nature\nof tweets which poses several challenges such as short length, informal words,\nunusual grammar and misspellings. However, there is a lack of prior work which\nsurveys the representation learning models with a focus on tweets. In this\nwork, we organize the models based on its objective function which aids the\nunderstanding of the literature. We also provide interesting future directions,\nwhich we believe are fruitful in advancing this field by building high-quality\ntweet representation learning models.\n", "title": "Improving Distributed Representations of Tweets - Present and Future" }
null
null
[ "Computer Science" ]
null
true
null
16475
null
Validated
null
null
null
{ "abstract": " Bárány, Kalai, and Meshulam recently obtained a topological Tverberg-type\ntheorem for matroids, which guarantees multiple coincidences for continuous\nmaps from a matroid complex to d-dimensional Euclidean space, if the matroid\nhas sufficiently many disjoint bases. They make a conjecture on the\nconnectivity of k-fold deleted joins of a matroid with many disjoint bases,\nwhich would yield a much tighter result - but we provide a counterexample\nalready for the case of k=2, where a tight Tverberg-type theorem would be a\ntopological Radon theorem for matroids. Nevertheless, we prove the topological\nRadon theorem for the counterexample family of matroids by an index\ncalculation, despite the failure of the connectivity-based approach.\n", "title": "Tverberg-type theorems for matroids: A counterexample and a proof" }
null
null
[ "Mathematics" ]
null
true
null
16476
null
Validated
null
null
null
{ "abstract": " In this paper, theoretical and numerical studies of perfect/nearly-perfect\nconversion of a plane wave into a surface wave are presented. The problem of\ndetermining the electromagnetic properties of an inhomogeneous lossless\nboundary which would fully transform an incident plane wave into a surface wave\npropagating along the boundary is considered. An approximate field solution\nwhich produces a slowly growing surface wave and satisfies the energy\nconservation law is discussed and numerically demonstrated. The results of the\nstudy are of great importance for the future development of such devices as\nperfect leaky-wave antennas and can potentially lead to many novel\napplications.\n", "title": "Near-Perfect Conversion of a Propagating Plane Wave into a Surface Wave Using Metasurfaces" }
null
null
null
null
true
null
16477
null
Default
null
null
null
{ "abstract": " Our infrastructure touches the day-to-day life of each of our fellow\ncitizens, and its capabilities, integrity and sustainability are crucial to the\noverall competitiveness and prosperity of our country. Unfortunately, the\ncurrent state of U.S. infrastructure is not good: the American Society of Civil\nEngineers' latest report on America's infrastructure ranked it at a D+ -- in\nneed of $3.9 trillion in new investments. This dire situation constrains the\ngrowth of our economy, threatens our quality of life, and puts our global\nleadership at risk. The ASCE report called out three actions that need to be\ntaken to address our infrastructure problem: 1) investment and planning in the\nsystem; 2) bold leadership by elected officials at the local and federal state;\nand 3) planning sustainability and resiliency in our infrastructure.\nWhile our immediate infrastructure needs are critical, it would be\nshortsighted to simply replicate more of what we have today. By doing so, we\nmiss the opportunity to create Intelligent Infrastructure that will provide the\nfoundation for increased safety and resilience, improved efficiencies and civic\nservices, and broader economic opportunities and job growth. Indeed, our\nchallenge is to proactively engage the declining, incumbent national\ninfrastructure system and not merely repair it, but to enhance it; to create an\ninternationally competitive cyber-physical system that provides an immediate\nopportunity for better services for citizens and that acts as a platform for a\n21st century, high-tech economy and beyond.\n", "title": "A National Research Agenda for Intelligent Infrastructure" }
null
null
null
null
true
null
16478
null
Default
null
null
null
{ "abstract": " Surveying 3D scenes is a common task in robotics. Systems can do so\nautonomously by iteratively obtaining measurements. This process of planning\nobservations to improve the model of a scene is called Next Best View (NBV)\nplanning.\nNBV planning approaches often use either volumetric (e.g., voxel grids) or\nsurface (e.g., triangulated meshes) representations. Volumetric approaches\ngeneralise well between scenes as they do not depend on surface geometry but do\nnot scale to high-resolution models of large scenes. Surface representations\ncan obtain high-resolution models at any scale but often require tuning of\nunintuitive parameters or multiple survey stages.\nThis paper presents a scene-model-free NBV planning approach with a density\nrepresentation. The Surface Edge Explorer (SEE) uses the density of current\nmeasurements to detect and explore observed surface boundaries. This approach\nis shown experimentally to provide better surface coverage in lower computation\ntime than the evaluated state-of-the-art volumetric approaches while moving\nequivalent distances.\n", "title": "Surface Edge Explorer (SEE): Planning Next Best Views Directly from 3D Observations" }
null
null
null
null
true
null
16479
null
Default
null
null
null
{ "abstract": " The distribution of N/O abundance ratios calculated by the detailed modelling\nof different galaxy spectra at z<4 is investigated. Supernova (SN) and long\ngamma-ray-burst (LGRB) host galaxies cover different redshift domains. N/O in\nSN hosts increases due to secondary N production towards low z (0.01)\naccompanying the growing trend of active galaxies (AGN, LINER). N/O in LGRB\nhosts decreases rapidly between z>1 and z ~0.1 following the N/H trend and\nreach the characteristic N/O ratios calculated for the HII regions in local and\nnearby galaxies. The few short period GRB (SGRB) hosts included in the galaxy\nsample show N/H <0.04 solar and O/H solar. They seem to continue the low bound\nN/H trend of SN hosts at z<0.3. The distribution of N/O as function of\nmetallicity for SN and LGRB hosts is compared with star chemical evolution\nmodels. The results show that several LGRB hosts can be explained by star\nmulti-bursting models when 12+log(O/H) <8.5, while some objects follow the\ntrend of continuous star formation models. N/O in SN hosts at log(O/H)+12 <8.5\nare not well explained by stellar chemical evolution models calculated for\nstarburst galaxies. At 12+log(O/H) >8.5 many different objects are nested close\nto O/H solar with N/O ranging between the maximum corresponding to starburst\ngalaxies and AGN and the minimum corresponding to HII regions and SGRB.\n", "title": "N/O abundance ratios in gamma-ray burst and supernova host galaxies at z<4. Comparison with AGN, starburst and HII regions" }
null
null
[ "Physics" ]
null
true
null
16480
null
Validated
null
null
null
{ "abstract": " This paper presents the recently published Cerema AWP (Adverse Weather\nPedestrian) dataset for various machine learning tasks and its exports in\nmachine learning friendly format. We explain why this dataset can be\ninteresting (mainly because it is a greatly controlled and fully annotated\nimage dataset) and present baseline results for various tasks. Moreover, we\ndecided to follow the very recent suggestions of datasheets for dataset, trying\nto standardize all the available information of the dataset, with a\ntransparency objective.\n", "title": "Baselines and a datasheet for the Cerema AWP dataset" }
null
null
null
null
true
null
16481
null
Default
null
null
null
{ "abstract": " Estimating distributions of node characteristics (labels) such as number of\nconnections or citizenship of users in a social network via edge and node\nsampling is a vital part of the study of complex networks. Due to its low cost,\nsampling via a random walk (RW) has been proposed as an attractive solution to\nthis task. Most RW methods assume either that the network is undirected or that\nwalkers can traverse edges regardless of their direction. Some RW methods have\nbeen designed for directed networks where edges coming into a node are not\ndirectly observable. In this work, we propose Directed Unbiased Frontier\nSampling (DUFS), a sampling method based on a large number of coordinated\nwalkers, each starting from a node chosen uniformly at random. It is applicable\nto directed networks with invisible incoming edges because it constructs, in\nreal-time, an undirected graph consistent with the walkers trajectories, and\ndue to the use of random jumps which prevent walkers from being trapped. DUFS\ngeneralizes previous RW methods and is suited for undirected networks and to\ndirected networks regardless of in-edges visibility. We also propose an\nimproved estimator of node label distributions that combines information from\nthe initial walker locations with subsequent RW observations. We evaluate DUFS,\ncompare it to other RW methods, investigate the impact of its parameters on\nestimation accuracy and provide practical guidelines for choosing them. In\nestimating out-degree distributions, DUFS yields significantly better estimates\nof the head of the distribution than other methods, while matching or exceeding\nestimation accuracy of the tail. Last, we show that DUFS outperforms uniform\nnode sampling when estimating distributions of node labels of the top 10%\nlargest degree nodes, even when sampling a node uniformly has the same cost as\nRW steps.\n", "title": "Characterizing Directed and Undirected Networks via Multidimensional Walks with Jumps" }
null
null
null
null
true
null
16482
null
Default
null
null
null
{ "abstract": " Traditional Recurrent Neural Networks assume vectorized data as inputs.\nHowever many data from modern science and technology come in certain structures\nsuch as tensorial time series data. To apply the recurrent neural networks for\nthis type of data, a vectorisation process is necessary, while such a\nvectorisation leads to the loss of the precise information of the spatial or\nlongitudinal dimensions. In addition, such a vectorized data is not an optimum\nsolution for learning the representation of the longitudinal data. In this\npaper, we propose a new variant of tensorial neural networks which directly\ntake tensorial time series data as inputs. We call this new variant as\nTensorial Recurrent Neural Network (TRNN). The proposed TRNN is based on tensor\nTucker decomposition.\n", "title": "Tensorial Recurrent Neural Networks for Longitudinal Data Analysis" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
16483
null
Validated
null
null
null
{ "abstract": " The multivariate probit model (MVP) is a popular classic model for studying\nbinary responses of multiple entities. Nevertheless, the computational\nchallenge of learning the MVP model, given that its likelihood involves\nintegrating over a multidimensional constrained space of latent variables,\nsignificantly limits its application in practice. We propose a flexible deep\ngeneralization of the classic MVP, the Deep Multivariate Probit Model (DMVP),\nwhich is an end-to-end learning scheme that uses an efficient parallel sampling\nprocess of the multivariate probit model to exploit GPU-boosted deep neural\nnetworks. We present both theoretical and empirical analysis of the convergence\nbehavior of DMVP's sampling process with respect to the resolution of the\ncorrelation structure. We provide convergence guarantees for DMVP and our\nempirical analysis demonstrates the advantages of DMVP's sampling compared with\nstandard MCMC-based methods. We also show that when applied to multi-entity\nmodelling problems, which are natural DMVP applications, DMVP trains faster\nthan classical MVP, by at least an order of magnitude, captures rich\ncorrelations among entities, and further improves the joint likelihood of\nentities compared with several competitive models.\n", "title": "End-to-End Learning for the Deep Multivariate Probit Model" }
null
null
null
null
true
null
16484
null
Default
null
null
null
{ "abstract": " Leclerc and Zelevinsky, motivated by the study of quasi-commuting quantum\nflag minors, introduced the notions of strongly separated and weakly separated\ncollections. These notions are closely related to the theory of cluster\nalgebras, to the combinatorics of the double Bruhat cells, and to the totally\npositive Grassmannian.\nA key feature, called the purity phenomenon, is that every maximal by\ninclusion strongly (resp., weakly) separated collection of subsets in $[n]$ has\nthe same cardinality.\nIn this paper, we extend these notions and define $\\mathcal{M}$-separated\ncollections, for any oriented matroid $\\mathcal{M}$.\nWe show that maximal by size $\\mathcal{M}$-separated collections are in\nbijection with fine zonotopal tilings (if $\\mathcal{M}$ is a realizable\noriented matroid), or with one-element liftings of $\\mathcal{M}$ in general\nposition (for an arbitrary oriented matroid).\nWe introduce the class of pure oriented matroids for which the purity\nphenomenon holds: an oriented matroid $\\mathcal{M}$ is pure if\n$\\mathcal{M}$-separated collections form a pure simplicial complex, i.e., any\nmaximal by inclusion $\\mathcal{M}$-separated collection is also maximal by\nsize.\nWe pay closer attention to several special classes of oriented matroids:\noriented matroids of rank $3$, graphical oriented matroids, and uniform\noriented matroids. We classify pure oriented matroids in these cases. An\noriented matroid of rank $3$ is pure if and only if it is a positroid (up to\nreorienting and relabeling its ground set). A graphical oriented matroid is\npure if and only if its underlying graph is an outerplanar graph, that is, a\nsubgraph of a triangulation of an $n$-gon.\nWe give a simple conjectural characterization of pure oriented matroids by\nforbidden minors and prove it for the above classes of matroids (rank $3$,\ngraphical, uniform).\n", "title": "Purity and separation for oriented matroids" }
null
null
null
null
true
null
16485
null
Default
null
null
null
{ "abstract": " For $n\\geq 4$ we show that generic closed Riemannian $n$-manifolds have no\nnontrivial totally geodesic submanifolds, answering a question of Spivak. An\nimmediate consequence is a severe restriction on the isometry group of a\ngeneric Riemannian metric. Both results are widely believed to be true, but we\nare not aware of any proofs in the literature.\n", "title": "Random Manifolds have no Totally Geodesic Submanifolds" }
null
null
null
null
true
null
16486
null
Default
null
null
null
{ "abstract": " We propose a novel design of a parallel manipulator of Stewart Gough type for\nvirtual reality application of single individuals; i.e. an omni-directional\ntreadmill is mounted on the motion platform in order to improve VR immersion by\ngiving feedback to the human body. For this purpose we modify the well-known\noctahedral manipulator in a way that it has one degree of kinematical\nredundancy; namely an equiform reconfigurability of the base. The instantaneous\nkinematics and singularities of this mechanism are studied, where especially\n\"unavoidable singularities\" are characterized. These are poses of the motion\nplatform, which can only be realized by singular configurations of the\nmechanism despite its kinematic redundancy.\n", "title": "Kinematically Redundant Octahedral Motion Platform for Virtual Reality Simulations" }
null
null
null
null
true
null
16487
null
Default
null
null
null
{ "abstract": " We introduce a novel loss max-pooling concept for handling imbalanced\ntraining data distributions, applicable as alternative loss layer in the\ncontext of deep neural networks for semantic image segmentation. Most\nreal-world semantic segmentation datasets exhibit long tail distributions with\nfew object categories comprising the majority of data and consequently biasing\nthe classifiers towards them. Our method adaptively re-weights the\ncontributions of each pixel based on their observed losses, targeting\nunder-performing classification results as often encountered for\nunder-represented object classes. Our approach goes beyond conventional\ncost-sensitive learning attempts through adaptive considerations that allow us\nto indirectly address both, inter- and intra-class imbalances. We provide a\ntheoretical justification of our approach, complementary to experimental\nanalyses on benchmark datasets. In our experiments on the Cityscapes and Pascal\nVOC 2012 segmentation datasets we find consistently improved results,\ndemonstrating the efficacy of our approach.\n", "title": "Loss Max-Pooling for Semantic Image Segmentation" }
null
null
null
null
true
null
16488
null
Default
null
null
null
{ "abstract": " In this work, we study the nonlinear traveling waves in density stratified\nfluids with depth varying shear currents. Beginning the formulation of the\nwater-wave problem due to [1], we extend the work of [4] and [18] to examine\nthe interface between two fluids of differing densities and varying linear\nshear. We derive as systems of equations depending only on variables at the\ninterface, and numerically solve for periodic traveling wave solutions using\nnumerical continuation. Here we consider only branches which bifurcate from\nsolutions where there is no slip in the tangential velocity at the interface\nfor the trivial flow. The spectral stability of these solutions is then\ndetermined using a numerical Fourier-Floquet technique. We find that the\nstrength of the linear shear in each fluid impacts the stability of the\ncorresponding traveling wave solutions. Specifically, opposing shears may\namplify or suppress instabilities.\n", "title": "Nonlinear Traveling Internal Waves in Depth-Varying Currents" }
null
null
null
null
true
null
16489
null
Default
null
null
null
{ "abstract": " Internal gravity waves play a primary role in geophysical fluids: they\ncontribute significantly to mixing in the ocean and they redistribute energy\nand momentum in the middle atmosphere. Until recently, most studies were\nfocused on plane wave solutions. However, these solutions are not a\nsatisfactory description of most geophysical manifestations of internal gravity\nwaves, and it is now recognized that internal wave beams with a confined\nprofile are ubiquitous in the geophysical context.\nWe will discuss the reason for the ubiquity of wave beams in stratified\nfluids, related to the fact that they are solutions of the nonlinear governing\nequations. We will focus more specifically on situations with a constant\nbuoyancy frequency. Moreover, in light of recent experimental and analytical\nstudies of internal gravity beams, it is timely to discuss the two main\nmechanisms of instability for those beams. i) The Triadic Resonant Instability\ngenerating two secondary wave beams. ii) The streaming instability\ncorresponding to the spontaneous generation of a mean flow.\n", "title": "Instabilities of Internal Gravity Wave Beams" }
null
null
null
null
true
null
16490
null
Default
null
null
null
{ "abstract": " Silicon-vacancy color centers in nanodiamonds are promising as fluorescent\nlabels for biological applications, with a narrow, non-bleaching emission line\nat 738\\,nm. Two-photon excitation of this fluorescence offers the possibility\nof low-background detection at significant tissue depth with high\nthree-dimensional spatial resolution. We have measured the two-photon\nfluorescence cross section of a negatively-charged silicon vacancy (SiV$^-$) in\nion-implanted bulk diamond to be $0.74(19) \\times 10^{-50}{\\rm cm^4\\;s/photon}$\nat an excitation wavelength of 1040\\,nm. In comparison to the diamond nitrogen\nvacancy (NV) center, the expected detection threshold of a two-photon excited\nSiV center is more than an order of magnitude lower, largely due to its much\nnarrower linewidth. We also present measurements of two- and three-photon\nexcitation spectra, finding an increase in the two-photon cross section with\ndecreasing wavelength, and discuss the physical interpretation of the spectra\nin the context of existing models of the SiV energy-level structure.\n", "title": "Multiphoton-Excited Fluorescence of Silicon-Vacancy Color Centers in Diamond" }
null
null
null
null
true
null
16491
null
Default
null
null
null
{ "abstract": " Online reviews provided by consumers are a valuable asset for e-Commerce\nplatforms, influencing potential consumers in making purchasing decisions.\nHowever, these reviews are of varying quality, with the useful ones buried deep\nwithin a heap of non-informative reviews. In this work, we attempt to\nautomatically identify review quality in terms of its helpfulness to the end\nconsumers. In contrast to previous works in this domain exploiting a variety of\nsyntactic and community-level features, we delve deep into the semantics of\nreviews as to what makes them useful, providing interpretable explanation for\nthe same. We identify a set of consistency and semantic factors, all from the\ntext, ratings, and timestamps of user-generated reviews, making our approach\ngeneralizable across all communities and domains. We explore review semantics\nin terms of several latent factors like the expertise of its author, his\njudgment about the fine-grained facets of the underlying product, and his\nwriting style. These are cast into a Hidden Markov Model -- Latent Dirichlet\nAllocation (HMM-LDA) based model to jointly infer: (i) reviewer expertise, (ii)\nitem facets, and (iii) review helpfulness. Large-scale experiments on five\nreal-world datasets from Amazon show significant improvement over\nstate-of-the-art baselines in predicting and ranking useful reviews.\n", "title": "Exploring Latent Semantic Factors to Find Useful Product Reviews" }
null
null
null
null
true
null
16492
null
Default
null
null
null
{ "abstract": " We compare the results of the semi-classical (SC) and quantum-mechanical (QM)\nformalisms for angular-momentum changing transitions in Rydberg atom collisions\ngiven by Vrinceanu & Flannery, J. Phys. B 34, L1 (2001), and Vrinceanu, Onofrio\n& Sadeghpour, ApJ 747, 56 (2012), with those of the SC formalism using a\nmodified Monte Carlo realization. We find that this revised SC formalism agrees\nwell with the QM results. This provides further evidence that the rates derived\nfrom the QM treatment are appropriate to be used when modelling recombination\nthrough Rydberg cascades, an important process in understanding the state of\nmaterial in the early universe. The rates for $\\Delta\\ell=\\pm1$ derived from\nthe QM formalism diverge when integrated to sufficiently large impact\nparameter, $b$. Further to the empirical limits to the $b$ integration\nsuggested by Pengelly & Seaton, MNRAS 127, 165 (1964), we suggest that the\nfundamental issue causing this divergence in the theory is that it does not\nfully cater for the finite time taken for such distant collisions to complete.\n", "title": "Thermodynamically-consistent semi-classical $\\ell$-changing rates" }
null
null
null
null
true
null
16493
null
Default
null
null
null
{ "abstract": " We introduce an algebraic Fourier transform for the quantum Toda lattice.\n", "title": "A Fourier transform for the quantum Toda lattice" }
null
null
null
null
true
null
16494
null
Default
null
null
null
{ "abstract": " Semi-supervised learning deals with the problem of how, if possible, to take\nadvantage of a huge amount of not classified data, to perform classification,\nin situations when, typically, the labelled data are few. Even though this is\nnot always possible (it depends on how useful is to know the distribution of\nthe unlabelled data in the inference of the labels), several algorithm have\nbeen proposed recently. A new algorithm is proposed, that under almost\nneccesary conditions, attains asymptotically the performance of the best\ntheoretical rule, when the size of unlabeled data tends to infinity. The set of\nnecessary assumptions, although reasonables, show that semi-parametric\nclassification only works for very well conditioned problems.\n", "title": "Semi-supervised learning" }
null
null
null
null
true
null
16495
null
Default
null
null
null
{ "abstract": " Measurements of root-zone soil moisture across spatial scales of tens to\nthousands of meters have been a challenge for many decades. The mobile\napplication of Cosmic-Ray Neutron Sensing (CRNS) is a promising approach to\nmeasure field soil moisture non-invasively by surveying large regions with a\nground-based vehicle. Recently, concerns have been raised about a potentially\nbiasing influence of local structures and roads. We employed neutron transport\nsimulations and dedicated experiments to quantify the influence of different\nroad types on the CRNS measurement. We found that the presence of roads\nintroduces a bias in the CRNS estimation of field soil moisture compared to\nnon-road scenarios. However, this effect becomes insignificant at distances\nbeyond a few meters from the road. Measurements from the road could\noverestimate the field value by up to 40 % depending on road material, width,\nand the surrounding field water content. The bias could be successfully removed\nwith an analytical correction function that accounts for these parameters.\nAdditionally, an empirical approach is proposed that can be used on-the-fly\nwithout prior knowledge of field soil moisture. Tests at different study sites\ndemonstrated good agreement between road-effect corrected measurements and\nfield soil moisture observations. However, if knowledge about the road\ncharacteristics is missing, any measurements on the road could substantially\nreduce the accuracy of this method. Our results constitute a practical\nadvancement of the mobile CRNS methodology, which is important for providing\nunbiased estimates of field-scale soil moisture to support applications in\nhydrology, remote sensing, and agriculture.\n", "title": "The Cosmic-Ray Neutron Rover - Mobile Surveys of Field Soil Moisture and the Influence of Roads" }
null
null
[ "Physics" ]
null
true
null
16496
null
Validated
null
null
null
{ "abstract": " We study discrete time linear constrained switching systems with additive\ndisturbances, in which the switching may be on the system matrices, the\ndisturbance sets, the state constraint sets or a combination of the above. In\nour general setting, a switching sequence is admissible if it is accepted by an\nautomaton. For this family of systems, stability does not necessarily imply the\nexistence of an invariant set. Nevertheless, it does imply the existence of an\ninvariant multi-set, which is a relaxation of invariance and the object of our\nwork. First, we establish basic results concerning the characterization,\napproximation and computation of the minimal and the maximal admissible\ninvariant multi-set. Second, by exploiting the topological properties of the\ndirected graph which defines the switching constraints, we propose invariant\nmulti-set constructions with several benefits. We illustrate our results in\nbenchmark problems in control.\n", "title": "Invariance in Constrained Switching" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
16497
null
Validated
null
null
null
{ "abstract": " The challenge of taking many variables into account in optimization problems\nmay be overcome under the hypothesis of low effective dimensionality. Then, the\nsearch of solutions can be reduced to the random embedding of a low dimensional\nspace into the original one, resulting in a more manageable optimization\nproblem. Specifically, in the case of time consuming black-box functions and\nwhen the budget of evaluations is severely limited, global optimization with\nrandom embeddings appears as a sound alternative to random search. Yet, in the\ncase of box constraints on the native variables, defining suitable bounds on a\nlow dimensional domain appears to be complex. Indeed, a small search domain\ndoes not guarantee to find a solution even under restrictive hypotheses about\nthe function, while a larger one may slow down convergence dramatically. Here\nwe tackle the issue of low-dimensional domain selection based on a detailed\nstudy of the properties of the random embedding, giving insight on the\naforementioned difficulties. In particular, we describe a minimal\nlow-dimensional set in correspondence with the embedded search space. We\nadditionally show that an alternative equivalent embedding procedure yields\nsimultaneously a simpler definition of the low-dimensional minimal set and\nbetter properties in practice. Finally, the performance and robustness gains of\nthe proposed enhancements for Bayesian optimization are illustrated on\nnumerical examples.\n", "title": "On the choice of the low-dimensional domain for global optimization via random embeddings" }
null
null
null
null
true
null
16498
null
Default
null
null
null
{ "abstract": " A fundamental component of the game theoretic approach to distributed control\nis the design of local utility functions. In Part I of this work we showed how\nto systematically design local utilities so as to maximize the induced worst\ncase performance. The purpose of the present manuscript is to specialize the\ngeneral results obtained in Part I to a class of monotone submodular,\nsupermodular and set covering problems. In the case of set covering problems,\nwe show how any distributed algorithm capable of computing a Nash equilibrium\ninherits a performance certificate matching the well known 1-1/e approximation\nof Nemhauser. Relative to the class of submodular maximization problems\nconsidered here, we show how the performance offered by the game theoretic\napproach improves on existing approximation algorithms. We briefly discuss the\nalgorithmic complexity of computing (pure) Nash equilibria and show how our\napproach generalizes and subsumes previously fragmented results in the area of\noptimal utility design. Two applications and corresponding numerics are\npresented: the vehicle target assignment problem and a coverage problem arising\nin distributed caching for wireless networks.\n", "title": "Distributed resource allocation through utility design - Part II: applications to submodular, supermodular and set covering problems" }
null
null
null
null
true
null
16499
null
Default
null
null
null
{ "abstract": " The idea of combining different two-dimensional (2D) crystals in van der\nWaals heterostructures (vdWHs) has led to a new paradigm for band structure\nengineering with atomic precision. Due to the weak interlayer couplings, the\nband structures of the individual 2D crystals are largely preserved upon\nformation of the heterostructure. However, regardless of the details of the\ninterlayer hybridisation, the size of the 2D crystal band gaps are always\nreduced due to the enhanced dielectric screening provided by the surrounding\nlayers. The effect can be on the order of electron volts, but its precise\nmagnitude is non-trivial to predict because of the non-local nature of the\nscreening in quasi-2D materials, and it is not captured by effective\nsingle-particle methods such as density functional theory. Here we present an\nefficient and general method for calculating the band gap renormalization of a\n2D material embedded in an arbitrary vdWH. The method evaluates the change in\nthe GW self-energy of the 2D material from the change in the screened Coulomb\ninteraction. The latter is obtained using the quantum-electrostatic\nheterostructure (QEH) model. We benchmark the G$\\Delta$W method against full\nfirst-principles GW calculations and use it to unravel the importance of\nscreening-induced band structure renormalisation in various vdWHs. A main\nresult is the observation that the size of the band gap reduction of a given 2D\nmaterial when inserted into a heterostructure scales inversely with the\npolarisability of the 2D material. Our work demonstrates that dielectric\nengineering \\emph{via} van der Waals heterostructuring represents a promising\nstrategy for tailoring the band structure of 2D materials.\n", "title": "Quasiparticle band structure engineering in van der Waals heterostructures via dielectric screening" }
null
null
null
null
true
null
16500
null
Default
null
null