Dataset schema (one record = 13 fields):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (length 1-5)
metadata: null
status: string (2 classes)
event_timestamp: null
metrics: null
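As an illustrative sketch of how the flattened rows below can be reassembled, the snippet zips one 13-value row back into a keyed record using the field order from the schema above. The sample row is hypothetical (it mirrors the first record, id 15001, with the abstract abbreviated to "..."); the helper name `row_to_record` is ours, not part of any library.

```python
import json

# Field order taken from the schema above; "inputs" holds a JSON object
# with "abstract" and "title", and most other columns are null.
FIELDS = [
    "text", "inputs", "prediction", "prediction_agent", "annotation",
    "annotation_agent", "multi_label", "explanation", "id", "metadata",
    "status", "event_timestamp", "metrics",
]

def row_to_record(values):
    """Zip one flattened 13-value row back into a keyed record."""
    if len(values) != len(FIELDS):
        raise ValueError("expected one value per schema field")
    return dict(zip(FIELDS, values))

# Hypothetical row mirroring the first record below (id 15001);
# the abstract text is abbreviated here.
row = [
    None,
    {"abstract": "...", "title": "Stability analysis of a system coupled to a heat equation"},
    None, None,
    ["Mathematics"],
    None, True, None, "15001", None, "Validated", None, None,
]
record = row_to_record(row)
print(json.dumps(record["inputs"]["title"]))
print(record["annotation"], record["status"])
```

Each reassembled record can then be serialized with `json.dumps(record)` to give one JSON line per record, which is the layout used below.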
{ "text": null, "inputs": { "abstract": " As a first approach to the study of systems coupling finite and infinite\ndimensional natures, this article addresses the stability of a system of\nordinary differential equations coupled with a classic heat equation using a\nLyapunov functional technique. Inspired from recent developments in the area of\ntime delay systems, a new methodology to study the stability of such a class of\ndistributed parameter systems is presented here. The idea is to use a\npolynomial approximation of the infinite dimensional state of the heat equation\nin order to build an enriched energy functional. A well known efficient\nintegral inequality (Bessel inequality) will allow to obtain stability\nconditions expressed in terms of linear matrix inequalities. We will eventually\ntest our approach on academic examples in order to illustrate the efficiency of\nour theoretical results.\n", "title": "Stability analysis of a system coupled to a heat equation" }, "prediction": null, "prediction_agent": null, "annotation": [ "Mathematics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15001", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Data processing pipelines represent an important slice of the astronomical\nsoftware library that include chains of processes that transform raw data into\nvaluable information via data reduction and analysis. In this work we present\nCorral, a Python framework for astronomical pipeline generation. Corral\nfeatures a Model-View-Controller design pattern on top of an SQL Relational\nDatabase capable of handling: custom data models; processing stages; and\ncommunication alerts, and also provides automatic quality and structural\nmetrics based on unit testing. The Model-View-Controller provides concept\nseparation between the user logic and the data models, delivering at the same\ntime multi-processing and distributed computing capabilities. Corral represents\nan improvement over commonly found data processing pipelines in Astronomy since\nthe design pattern eases the programmer from dealing with processing flow and\nparallelization issues, allowing them to focus on the specific algorithms\nneeded for the successive data transformations and at the same time provides a\nbroad measure of quality over the created pipeline. Corral and working examples\nof pipelines that use it are available to the community at\nthis https URL.\n", "title": "Corral Framework: Trustworthy and Fully Functional Data Intensive Parallel Astronomical Pipelines" }, "prediction": null, "prediction_agent": null, "annotation": [ "Computer Science", "Physics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15002", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Stationary stellar systems with radially elongated orbits are subject to\nradial orbit instability -- an important phenomenon that structures galaxies.\nAntonov (1973) presented a formal proof of the instability for spherical\nsystems in the limit of purely radial orbits. However, such spheres have highly\ninhomogeneous density distributions with singularity $\sim 1/r^2$, resulting in\nan inconsistency in the proof. The proof can be refined, if one considers an\norbital distribution close to purely radial, but not entirely radial, which\nallows to avoid the central singularity. For this purpose we employ\nnon-singular analogs of generalised polytropes elaborated recently in our work\nin order to derive and solve new integral equations adopted for calculation of\nunstable eigenmodes in systems with nearly radial orbits. In addition, we\nestablish a link between our and Antonov's approaches and uncover the meaning\nof infinite entities in the purely radial case. Maximum growth rates tend to\ninfinity as the system becomes more and more radially anisotropic. The\ninstability takes place both for even and odd spherical harmonics, with all\nunstable modes developing rapidly, i.e. having eigenfrequencies comparable to\nor greater than typical orbital frequencies. This invalidates orbital\napproximation in the case of systems with all orbits very close to purely\nradial.\n", "title": "Radial orbit instability in systems of highly eccentric orbits: Antonov problem reviewed" }, "prediction": null, "prediction_agent": null, "annotation": [ "Physics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15003", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " In the past, calculation of wakefields generated by an electron bunch\npropagating in a plasma has been carried out in linear approximation, where the\nplasma perturbation can be assumed small and plasma equations of motion\nlinearized. This approximation breaks down in the blowout regime where a\nhigh-density electron driver expels plasma electrons from its path and creates\na cavity void of electrons in its wake. In this paper, we develop a technique\nthat allows to calculate short-range longitudinal and transverse wakes\ngenerated by a witness bunch being accelerated inside the cavity. Our results\ncan be used for studies of the beam loading and the hosing instability of the\nwitness bunch in PWFA and LWFA.\n", "title": "Short-range wakefields generated in the blowout regime of plasma-wakefield acceleration" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15004", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Minimizing the nuclear norm of a matrix has been shown to be very efficient\nin reconstructing a low-rank sampled matrix. Furthermore, minimizing the sum of\nnuclear norms of matricizations of a tensor has been shown to be very efficient\nin recovering a low-Tucker-rank sampled tensor. In this paper, we propose to\nrecover a low-TT-rank sampled tensor by minimizing a weighted sum of nuclear\nnorms of unfoldings of the tensor. We provide numerical results to show that\nour proposed method requires significantly less number of samples to recover to\nthe original tensor in comparison with simply minimizing the sum of nuclear\nnorms since the structure of the unfoldings in the TT tensor model is\nfundamentally different from that of matricizations in the Tucker tensor model.\n", "title": "Scaled Nuclear Norm Minimization for Low-Rank Tensor Completion" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15005", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " This paper presents a novel approach for stability and transparency analysis\nfor bilateral teleoperation in the presence of data loss in communication\nmedia. A new model for data loss is proposed based on a set of periodic\ncontinuous pulses and its finite series representation. The passivity of the\noverall system is shown using wave variable approach including the newly\ndefined model for data loss. Simulation results are presented to show the\neffectiveness of the proposed approach.\n", "title": "Stability and Transparency Analysis of a Bilateral Teleoperation in Presence of Data Loss" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15006", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Measurement of the energy eigenvalues (spectrum) of a multi-qubit system has\nrecently become possible by qubit tunneling spectroscopy (QTS). In the standard\nQTS experiments, an incoherent probe qubit is strongly coupled to one of the\nqubits of the system in such a way that its incoherent tunneling rate provides\ninformation about the energy eigenvalues of the original (source) system. In\nthis paper, we generalize QTS by coupling the probe qubit to many source\nqubits. We show that by properly choosing the couplings, one can perform\nprojective measurements of the source system energy eigenstates in an arbitrary\nbasis, thus performing quantum eigenstate tomography. As a practical example of\na limited tomography, we apply our scheme to probe the eigenstates of a kink in\na frustrated transverse Ising chain.\n", "title": "Quantum eigenstate tomography with qubit tunneling spectroscopy" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15007", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Fix a quadratic order over the ring of integers. An embedding of the\nquadratic order into a quaternionic order naturally gives an integral binary\nhermitian form over the quadratic order. We show that, in certain cases, this\ncorrespondence is a discriminant preserving bijection between the isomorphism\nclasses of embeddings and integral binary hermitian forms.\n", "title": "Binary hermitian forms and optimal embeddings" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15008", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Neural autoregressive models are explicit density estimators that achieve\nstate-of-the-art likelihoods for generative modeling. The D-dimensional data\ndistribution is factorized into an autoregressive product of one-dimensional\nconditional distributions according to the chain rule. Data completion is a\nmore involved task than data generation: the model must infer missing variables\nfor any partially observed input vector. Previous work introduced an\norder-agnostic training procedure for data completion with autoregressive\nmodels. Missing variables in any partially observed input vector can be imputed\nefficiently by choosing an ordering where observed dimensions precede\nunobserved ones and by computing the autoregressive product in this order. In\nthis paper, we provide evidence that the order-agnostic (OA) training procedure\nis suboptimal for data completion. We propose an alternative procedure (OA++)\nthat reaches better performance in fewer computations. It can handle all data\ncompletion queries while training fewer one-dimensional conditional\ndistributions than the OA procedure. In addition, these one-dimensional\nconditional distributions are trained proportionally to their expected usage at\ninference time, reducing overfitting. Finally, our OA++ procedure can exploit\nprior knowledge about the distribution of inference completion queries, as\nopposed to OA. We support these claims with quantitative experiments on\nstandard datasets used to evaluate autoregressive generative models.\n", "title": "An Improved Training Procedure for Neural Autoregressive Data Completion" }, "prediction": null, "prediction_agent": null, "annotation": [ "Computer Science", "Statistics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15009", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " A stochastic orbital approach to the resolution of identity (RI)\napproximation for 4-index 2-electron electron repulsion integrals (ERIs) is\npresented. The stochastic RI-ERIs are then applied to M\\o ller-Plesset\nperturbation theory (MP2) utilizing a \\textit{multiple stochastic orbital\napproach}. The introduction of multiple stochastic orbitals results in an $N^3$\nscaling for both the stochastic RI-ERIs and stochastic RI-MP2. We demonstrate\nthat this method exhibits a small prefactor and an observed scaling of\n$N^{2.4}$ for a range of water clusters, already outperforming MP2 for clusters\nwith as few as 21 water molecules.\n", "title": "A Stochastic Formulation of the Resolution of Identity: Application to Second Order Møller-Plesset Perturbation Theory" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15010", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Game maps are useful for human players, general-game-playing agents, and\ndata-driven procedural content generation. These maps are generally made by\nhand-assembling manually-created screenshots of game levels. Besides being\ntedious and error-prone, this approach requires additional effort for each new\ngame and level to be mapped. The results can still be hard for humans or\ncomputational systems to make use of, privileging visual appearance over\nsemantic information. We describe a software system, Mappy, that produces a\ngood approximation of a linked map of rooms given a Nintendo Entertainment\nSystem game program and a sequence of button inputs exploring its world. In\naddition to visual maps, Mappy outputs grids of tiles (and how they change over\ntime), positions of non-tile objects, clusters of similar rooms that might in\nfact be the same room, and a set of links between these rooms. We believe this\nis a necessary step towards developing larger corpora of high-quality\nsemantically-annotated maps for PCG via machine learning and other\napplications.\n", "title": "Automatic Mapping of NES Games with Mappy" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15011", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " The paper is focused on the problem of estimating the probability $p$ of\nindividual contaminated sample, under group testing. The precision of the\nestimator is given by the probability of proportional closeness, a concept\ndefined in the Introduction. Two-stage and sequential sampling procedures are\ncharacterized. An adaptive procedure is examined.\n", "title": "Proportional Closeness Estimation of Probability of Contamination Under Group Testing" }, "prediction": null, "prediction_agent": null, "annotation": [ "Mathematics", "Statistics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15012", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Conventional text classification models make a bag-of-words assumption\nreducing text into word occurrence counts per document. Recent algorithms such\nas word2vec are capable of learning semantic meaning and similarity between\nwords in an entirely unsupervised manner using a contextual window and doing so\nmuch faster than previous methods. Each word is projected into vector space\nsuch that similar meaning words such as \"strong\" and \"powerful\" are projected\ninto the same general Euclidean space. Open questions about these embeddings\ninclude their utility across classification tasks and the optimal properties\nand source of documents to construct broadly functional embeddings. In this\nwork, we demonstrate the usefulness of pre-trained embeddings for\nclassification in our task and demonstrate that custom word embeddings, built\nin the domain and for the tasks, can improve performance over word embeddings\nlearnt on more general data including news articles or Wikipedia.\n", "title": "Utility of General and Specific Word Embeddings for Classifying Translational Stages of Research" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15013", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " The AKARI IRC All-sky survey provided more than twenty thousand thermal\ninfrared observations of over five thousand asteroids. Diameters and albedos\nwere obtained by fitting an empirically calibrated version of the standard\nthermal model to these data. After the publication of the flux catalogue in\nOctober 2016, our aim here is to present the AKARI IRC all-sky survey data and\ndiscuss valuable scientific applications in the field of small-body physical\nproperties studies. As an example, we update the catalogue of asteroid\ndiameters and albedos based on AKARI using the near-Earth asteroid thermal\nmodel (NEATM). We fit the NEATM to derive asteroid diameters and, whenever\npossible, infrared beaming parameters. We obtained a total of 8097 diameters\nand albedos for 5170 asteroids, and we fitted the beaming parameter for almost\ntwo thousand of them. When it was not possible to fit the beaming parameter, we\nused a straight line fit to our sample's beaming parameter-versus-phase angle\nplot to set the default value for each fit individually instead of using a\nsingle average value. Our diameters agree with stellar-occultation-based\ndiameters well within the accuracy expected for the model. They also match the\nprevious AKARI-based catalogue at phase angles lower than 50 degrees, but we\nfind a systematic deviation at higher phase angles, at which near-Earth and\nMars-crossing asteroids were observed. The AKARI IRC All-sky survey provides\nobservations at different observation geometries, rotational coverages and\naspect angles. For example, by comparing in more detail a few asteroids for\nwhich dimensions were derived from occultations, we discuss how the multiple\nobservations per object may already provide three-dimensional information about\nelongated objects even based on an idealised model like the NEATM.\n", "title": "The AKARI IRC asteroid flux catalogue: updated diameters and albedos" }, "prediction": null, "prediction_agent": null, "annotation": [ "Physics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15014", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " We demonstrate that a prior influence on the posterior distribution of\ncovariance matrix vanishes as sample size grows. The assumptions on a prior are\nexplicit and mild. The results are valid for a finite sample and admit the\ndimension $p$ growing with the sample size $n$. We exploit the described fact\nto derive the finite sample Bernstein - von Mises theorem for functionals of\ncovariance matrix (e.g. eigenvalues) and to find the posterior distribution of\nthe Frobenius distance between spectral projector and empirical spectral\nprojector. This can be useful for constructing sharp confidence sets for the\ntrue value of the functional or for the true spectral projector.\n", "title": "Finite sample Bernstein - von Mises theorems for functionals and spectral projectors of covariance matrix" }, "prediction": null, "prediction_agent": null, "annotation": [ "Mathematics", "Statistics" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15015", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " We consider the multicomponent Widom-Rowlison with Metropolis dynamics, which\ndescribes the evolution of a particle system where $M$ different types of\nparticles interact subject to certain hard-core constraints. Focusing on the\nscenario where the spatial structure is modeled by finite square lattices, we\nstudy the asymptotic behavior of this interacting particle system in the\nlow-temperature regime, analyzing the tunneling times between its $M$\nmaximum-occupancy configurations, and the mixing time of the corresponding\nMarkov chain. In particular, we develop a novel combinatorial method that,\nexploiting geometrical properties of the Widom-Rowlinson configurations on\nfinite square lattices, leads to the identification of the timescale at which\ntransitions between maximum-occupancy configurations occur and shows how this\ndepends on the chosen boundary conditions and the square lattice dimensions.\n", "title": "Low-temperature behavior of the multicomponent Widom-Rowlison model on finite square lattices" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15016", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Graph theory provides a language for studying the structure of relations, and\nit is often used to study interactions over time too. However, it poorly\ncaptures the both temporal and structural nature of interactions, that calls\nfor a dedicated formalism. In this paper, we generalize graph concepts in order\nto cope with both aspects in a consistent way. We start with elementary\nconcepts like density, clusters, or paths, and derive from them more advanced\nconcepts like cliques, degrees, clustering coefficients, or connected\ncomponents. We obtain a language to directly deal with interactions over time,\nsimilar to the language provided by graphs to deal with relations. This\nformalism is self-consistent: usual relations between different concepts are\npreserved. It is also consistent with graph theory: graph concepts are special\ncases of the ones we introduce. This makes it easy to generalize higher-level\nobjects such as quotient graphs, line graphs, k-cores, and centralities. This\npaper also considers discrete versus continuous time assumptions, instantaneous\nlinks, and extensions to more complex cases.\n", "title": "Stream Graphs and Link Streams for the Modeling of Interactions over Time" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15017", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Metric search is concerned with the efficient evaluation of queries in metric\nspaces. In general,a large space of objects is arranged in such a way that,\nwhen a further object is presented as a query, those objects most similar to\nthe query can be efficiently found. Most mechanisms rely upon the triangle\ninequality property of the metric governing the space. The triangle inequality\nproperty is equivalent to a finite embedding property, which states that any\nthree points of the space can be isometrically embedded in two-dimensional\nEuclidean space. In this paper, we examine a class of semimetric space which is\nfinitely four-embeddable in three-dimensional Euclidean space. In mathematics\nthis property has been extensively studied and is generally known as the\nfour-point property. All spaces with the four-point property are metric spaces,\nbut they also have some stronger geometric guarantees. We coin the term\nsupermetric space as, in terms of metric search, they are significantly more\ntractable. Supermetric spaces include all those governed by Euclidean, Cosine,\nJensen-Shannon and Triangular distances, and are thus commonly used within many\ndomains. In previous work we have given a generic mathematical basis for the\nsupermetric property and shown how it can improve indexing performance for a\ngiven exact search structure. Here we present a full investigation into its use\nwithin a variety of different hyperplane partition indexing structures, and go\non to show some more of its flexibility by examining a search structure whose\npartition and exclusion conditions are tailored, at each node, to suit the\nindividual reference points and data set present there. Among the results\ngiven, we show a new best performance for exact search using a well-known\nbenchmark.\n", "title": "Supermetric Search" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15018", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " In this paper we study solutions, possibly unbounded and sign-changing, of\nthe following problem:\n-\D_{\lambda} u=|x|_{\lambda}^a |u|^{p-1}u, in R^n,\;n\geq 1,\; p>1, and a\n\geq 0, where \D_{\lambda} is a strongly degenerate elliptic operator, the\nfunctions \lambda=(\lambda_1, ..., \lambda_k) : R^n \rightarrow R^k, satisfies\nsome certain conditions, and |.|_{\lambda} the homogeneous norm associated to\nthe \D_{\lambda}-Laplacian.\nWe prove various Liouville-type theorems for smooth solutions under the\nassumption that they are stable or stable outside a compact set of R^n. First,\nwe establish the standard integralestimates via stability property to derive\nthe nonexistence results for stable solutions. Next, by mean of the Pohozaev\nidentity, we deduce the Liouville-type theorem for solutions stable outside a\ncompact set.\n", "title": "Liouville-type theorems with finite Morse index for Δ_λ-Laplace operator" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15019", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " In this paper, we present a new and significant theoretical discovery. If the\nabsolute height difference between base station (BS) antenna and user equipment\n(UE) antenna is larger than zero, then the network performance in terms of both\nthe coverage probability and the area spectral efficiency (ASE) will\ncontinuously decrease toward zero as the BS density increases for ultra-dense\n(UD) small cell networks (SCNs). Such findings are completely different from\nthe conclusions in existing works, both quantitatively and qualitatively. In\nparticular, this performance behavior has a tremendous impact on the deployment\nof UD SCNs in the 5th-generation (5G) era. Network operators may invest large\namounts of money in deploying more network infrastructure to only obtain an\neven less network capacity. Our study results reveal that one way to address\nthis issue is to lower the SCN BS antenna height to the UE antenna height.\nHowever, this requires a revolutionized approach of BS architecture and\ndeployment, which is explored in this paper too.\n", "title": "Performance Impact of Base Station Antenna Heights in Dense Cellular Networks" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15020", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " The question of selecting the \"best\" amongst different choices is a common\nproblem in statistics. In drug development, our motivating setting, the\nquestion becomes, for example: what is the dose that gives me a pre-specified\nrisk of toxicity or which treatment gives the best response rate. Motivated by\na recent development in the weighted information measures theory, we propose an\nexperimental design based on a simple and intuitive criterion which governs arm\nselection in the experiment with multinomial outcomes. The criterion leads to\naccurate arm selection without any parametric or monotonicity assumption. The\nasymptotic properties of the design are studied for different allocation rules\nand the small sample size behaviour is evaluated in simulations in the context\nof Phase I and Phase II clinical trials with binary endpoints. We compare the\nproposed design to currently used alternatives and discuss its practical\nimplementation.\n", "title": "An information-theoretic approach for selecting arms in clinical trials" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15021", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Imaging assays of cellular function, especially those using fluorescent\nstains, are ubiquitous in the biological and medical sciences. Despite advances\nin computer vision, such images are often analyzed using only manual or\nrudimentary automated processes. Watershed-based segmentation is an effective\ntechnique for identifying objects in images; it outperforms commonly used image\nanalysis methods, but requires familiarity with computer-vision techniques to\nbe applied successfully. In this report, we present and implement a\nwatershed-based image analysis and classification algorithm in a GUI, enabling\na broad set of users to easily understand the algorithm and adjust the\nparameters to their specific needs. As an example, we implement this algorithm\nto find and classify cells in a complex imaging assay for mitochondrial\nfunction. In a second example, we demonstrate a workflow using manual\ncomparisons and receiver operator characteristics to optimize the algorithm\nparameters for finding live and dead cells in a standard viability assay.\nOverall, this watershed-based algorithm is more advanced than traditional\nthresholding and can produce optimized, automated results. By incorporating\nassociated pre-processing steps in the GUI, the algorithm is also easily\nadjusted, rendering it user-friendly.\n", "title": "A watershed-based algorithm to segment and classify cells in fluorescence microscopy images" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15022", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Using the language of Riordan arrays, we study a one-parameter family of\northogonal polynomials that we call the restricted Chebyshev-Boubaker\npolynomials. We characterize these polynomials in terms of the three term\nrecurrences that they satisfy, and we study certain central sequences defined\nby their coefficient arrays. We give an integral representation for their\nmoments, and we show that the Hankel transforms of these moments have a simple\nform. We show that the (sequence) Hankel transform of the row sums of the\ncorresponding moment matrix is defined by a family of polynomials closely\nrelated to the Chebyshev polynomials of the second kind, and that these row\nsums are in fact the moments of another family of orthogonal polynomials.\n", "title": "On the restricted Chebyshev-Boubaker polynomials" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15023", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " We observe and explain theoretically a dramatic evolution of the\nDzyaloshinskii-Moriya interaction in the series of isostructural weak\nferromagnets, MnCO$_3$, FeBO$_3$, CoCO$_3$ and NiCO$_3$. The sign of the\ninteraction is encoded in the phase of x-ray magnetic diffraction amplitude,\nobserved through interference with resonant quadrupole scattering. We find very\ngood quantitative agreement with first-principles electronic structure\ncalculations, reproducing both sign and magnitude through the series, and\npropose a simplified `toy model' to explain the change in sign with 3 d shell\nfilling. The model gives a clue for qualitative understanding of the evolution\nof the DMI in Mott and charge transfer insulators.\n", "title": "Band filling control of the Dzyaloshinskii-Moriya interaction in weakly ferromagnetic insulators" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15024", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " In portfolio analysis, the traditional approach of replacing population\nmoments with sample counterparts may lead to suboptimal portfolio choices. I\nshow that optimal portfolio weights can be estimated using a machine learning\n(ML) framework, where the outcome to be predicted is a constant and the vector\nof explanatory variables is the asset returns. It follows that ML specifically\ntargets estimation risk when estimating portfolio weights, and that\n\"off-the-shelf\" ML algorithms can be used to estimate the optimal portfolio in\nthe presence of parameter uncertainty. The framework nests the traditional\napproach and recently proposed shrinkage approaches as special cases. By\nrelying on results from the ML literature, I derive new insights for existing\napproaches and propose new estimation methods. Based on simulation studies and\nseveral datasets, I find that ML significantly reduces estimation risk compared\nto both the traditional approach and the equal weight strategy.\n", "title": "Reducing Estimation Risk in Mean-Variance Portfolios with Machine Learning" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15025", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " This paper studies the characteristics and applicability of the CutFEM\napproach as the core of a robust topology optimization framework for 3D laminar\nincompressible flow and species transport problems at low Reynolds number (Re <\n200). CutFEM is a methodology for discretizing partial differential equations\non complex geometries by immersed boundary techniques. In this study, the\ngeometry of the fluid domain is described by an explicit level set method,\nwhere the parameters of a level set function are defined as functions of the\noptimization variables. The fluid behavior is modeled by the incompressible\nNavier-Stokes equations. Species transport is modeled by an advection-diffusion\nequation. The governing equations are discretized in space by a generalized\nextended finite element method. Face-oriented ghost-penalty terms are added for\nstability reasons and to improve the conditioning of the system. The boundary\nconditions are enforced weakly via Nit\-sc\-he's method. The emergence of\nisolated volumes of fluid surrounded by solid during the optimization process\nleads to a singular analysis problem. An auxiliary indicator field is modeled\nto identify these volumes and to impose a constraint on the average pressure.\nNumerical results for 3D, steady-state and transient problems demonstrate that\nthe CutFEM analyses are sufficiently accurate, and the optimized designs agree\nwell with results from prior studies solved in 2D or by density approaches.\n", "title": "CutFEM topology optimization of 3D laminar incompressible flow problems" }, "prediction": null, "prediction_agent": null, "annotation": null, "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15026", "metadata": null, "status": "Default", "event_timestamp": null, "metrics": null }
{ "text": null, "inputs": { "abstract": " Many neural systems display avalanche behavior characterized by uninterrupted\nsequences of neuronal firing whose distributions of size and durations are\nheavy-tailed. Theoretical models of such systems suggest that these dynamics\nsupport optimal information transmission and storage. However, the unknown role\nof network structure precludes an understanding of how variations in network\ntopology manifest in neural dynamics and either support or impinge upon\ninformation processing. Here, using a generalized spiking model, we develop a\nmechanistic understanding of how network topology supports information\nprocessing through network dynamics. First, we show how network topology\ndetermines network dynamics by analytically and numerically demonstrating that\nnetwork topology can be designed to propagate stimulus patterns for long\ndurations. We then identify strongly connected cycles as empirically observable\nnetwork motifs that are prevalent in such networks. Next, we show that within a\nnetwork, mathematical intuitions from network control theory are tightly linked\nwith dynamics initiated by node-specific stimulation and can identify stimuli\nthat promote long-lasting cascades. Finally, we use these network-based metrics\nand control-based stimuli to demonstrate that long-lasting cascade dynamics\nfacilitate delayed recovery of stimulus patterns from network activity, as\nmeasured by mutual information. Collectively, our results provide evidence that\ncortical networks are structured with architectural motifs that support\nlong-lasting propagation and recovery of a few crucial patterns of stimulation,\nespecially those consisting of activity in highly controllable neurons.\nBroadly, our results imply that avalanching neural networks could contribute to\ncognitive faculties that require persistent activation of neuronal patterns,\nsuch as working memory or attention.\n", "title": "Network topology of neural systems supporting avalanche dynamics predicts stimulus propagation and recovery" }, "prediction": null, "prediction_agent": null, "annotation": [ "Quantitative Biology" ], "annotation_agent": null, "multi_label": true, "explanation": null, "id": "15027", "metadata": null, "status": "Validated", "event_timestamp": null, "metrics": null }
null
{ "abstract": " Ontology alignment is widely-used to find the correspondences between\ndifferent ontologies in diverse fields.After discovering the alignments,several\nperformance scores are available to evaluate them.The scores typically require\nthe identified alignment and a reference containing the underlying actual\ncorrespondences of the given ontologies.The current trend in the alignment\nevaluation is to put forward a new score(e.g., precision, weighted precision,\netc.)and to compare various alignments by juxtaposing the obtained scores.\nHowever,it is substantially provocative to select one measure among others for\ncomparison.On top of that, claiming if one system has a better performance than\none another cannot be substantiated solely by comparing two scalars.In this\npaper,we propose the statistical procedures which enable us to theoretically\nfavor one system over one another.The McNemar's test is the statistical means\nby which the comparison of two ontology alignment systems over one matching\ntask is drawn.The test applies to a 2x2 contingency table which can be\nconstructed in two different ways based on the alignments,each of which has\ntheir own merits/pitfalls.The ways of the contingency table construction and\nvarious apposite statistics from the McNemar's test are elaborated in minute\ndetail.In the case of having more than two alignment systems for comparison,\nthe family-wise error rate is expected to happen. 
Thus, the ways of preventing\nsuch an error are also discussed.A directed graph visualizes the outcome of the\nMcNemar's test in the presence of multiple alignment systems.From this graph,\nit is readily understood if one system is better than one another or if their\ndifferences are imperceptible.The proposed statistical methodologies are\napplied to the systems participated in the OAEI 2016 anatomy track, and also\ncompares several well-known similarity metrics for the same matching problem.\n", "title": "Comparison of ontology alignment systems across single matching task via the McNemar's test" }
null
null
null
null
true
null
15028
null
Default
null
null
null
{ "abstract": " We consider the design and modeling of metasurfaces that couple energy from\nguided waves to propagating wavefronts. This is a first step towards a\ncomprehensive, multiscale modeling platform for metasurface antennas-large\narrays of metamaterial elements embedded in a waveguide structure that radiates\ninto free-space--in which the detailed electromagnetic responses of\nmetamaterial elements are replaced by polarizable dipoles. We present two\nmethods to extract the effective polarizability of a metamaterial element\nembedded in a one- or two-dimensional waveguide. The first method invokes\nsurface equivalence principles, averaging over the effective surface currents\nand charges within an element to obtain the effective dipole moments; the\nsecond method is based on computing the coefficients of the scattered waves\nwithin the waveguide, from which the effective polarizability can be inferred.\nWe demonstrate these methods on several variants of waveguide-fed metasurface\nelements, finding excellent agreement between the two, as well as with\nanalytical expressions derived for irises with simpler geometries. Extending\nthe polarizability extraction technique to higher order multipoles, we confirm\nthe validity of the dipole approximation for common metamaterial elements. With\nthe effective polarizabilities of the metamaterial elements accurately\ndetermined, the radiated fields generated by a metasurface antenna (inside and\noutside the antenna) can be found self-consistently by including the\ninteractions between polarizable dipoles. The dipole description provides an\nalternative language and computational framework for engineering metasurface\nantennas, holograms, lenses, beam-forming arrays, and other electrically large,\nwaveguide-fed metasurface structures.\n", "title": "Polarizability Extraction for Waveguide-Fed Metasurfaces" }
null
null
null
null
true
null
15029
null
Default
null
null
null
{ "abstract": " We introduce the Connection Scan Algorithm (CSA) to efficiently answer\nqueries to timetable information systems. The input consists, in the simplest\nsetting, of a source position and a desired target position. The output consist\nis a sequence of vehicles such as trains or buses that a traveler should take\nto get from the source to the target. We study several problem variations such\nas the earliest arrival and profile problems. We present algorithm variants\nthat only optimize the arrival time or additionally optimize the number of\ntransfers in the Pareto sense. An advantage of CSA is that is can easily adjust\nto changes in the timetable, allowing the easy incorporation of known vehicle\ndelays. We additionally introduce the Minimum Expected Arrival Time (MEAT)\nproblem to handle possible, uncertain, future vehicle delays. We present a\nsolution to the MEAT problem that is based upon CSA. Finally, we extend CSA\nusing the multilevel overlay paradigm to answer complex queries on nation-wide\nintegrated timetables with trains and buses.\n", "title": "Connection Scan Algorithm" }
null
null
null
null
true
null
15030
null
Default
null
null
null
{ "abstract": " We consider two stage estimation with a non-parametric first stage and a\ngeneralized method of moments second stage, in a simpler setting than\n(Chernozhukov et al. 2016). We give an alternative proof of the theorem given\nin (Chernozhukov et al. 2016) that orthogonal second stage moments, sample\nsplitting and $n^{1/4}$-consistency of the first stage, imply\n$\\sqrt{n}$-consistency and asymptotic normality of second stage estimates. Our\nproof is for a variant of their estimator, which is based on the empirical\nversion of the moment condition (Z-estimator), rather than a minimization of a\nnorm of the empirical vector of moments (M-estimator). This note is meant\nprimarily for expository purposes, rather than as a new technical contribution.\n", "title": "A Proof of Orthogonal Double Machine Learning with $Z$-Estimators" }
null
null
null
null
true
null
15031
null
Default
null
null
null
{ "abstract": " This paper shows that the Conditional Quantile Treatment Effect on the\nTreated can be identified using a combination of (i) a conditional\nDistributional Difference in Differences assumption and (ii) an assumption on\nthe conditional dependence between the change in untreated potential outcomes\nand the initial level of untreated potential outcomes for the treated group.\nThe second assumption recovers the unknown dependence from the observed\ndependence for the untreated group. We also consider estimation and inference\nin the case where all of the covariates are discrete. We propose a uniform\ninference procedure based on the exchangeable bootstrap and show its validity.\nWe conclude the paper by estimating the effect of state-level changes in the\nminimum wage on the distribution of earnings for subgroups defined by race,\ngender, and education.\n", "title": "Quantile Treatment Effects in Difference in Differences Models under Dependence Restrictions and with only Two Time Periods" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
15032
null
Validated
null
null
null
{ "abstract": " We present a principled technique for reducing the matrix size in some\napplications of Coppersmith's lattice method for finding roots of modular\npolynomial equations. It relies on an analysis of the actual performance of\nCoppersmith's attack for smaller parameter sizes, which can be thought of as\n\"focus group\" testing. When applied to the small-exponent RSA problem, it\nreduces lattice dimensions and consequently running times (sometimes by factors\nof two or more). We also argue that existing metrics (such as enabling\ncondition bounds) are not as important as often thought for measuring the true\nperformance of attacks based on Coppersmith's method. Finally, experiments are\ngiven to indicate that certain lattice reductive algorithms (such as\nNguyen-Stehlé's L2) may be particularly well-suited for Coppersmith's method.\n", "title": "Coppersmith's lattices and \"focus groups\": an attack on small-exponent RSA" }
null
null
null
null
true
null
15033
null
Default
null
null
null
{ "abstract": " The formation of deuterated molecules is favoured at low temperatures and\nhigh densities. Therefore, the deuteration fraction D$_{frac}$ is expected to\nbe enhanced in cold, dense prestellar cores and to decrease after protostellar\nbirth. Previous studies have shown that the deuterated forms of species such as\nN2H+ (formed in the gas phase) and CH3OH (formed on grain surfaces) can be used\nas evolutionary indicators and to constrain their dominant formation processes\nand time-scales. Formaldehyde (H2CO) and its deuterated forms can be produced\nboth in the gas phase and on grain surfaces. However, the relative importance\nof these two chemical pathways is unclear. Comparison of the deuteration\nfraction of H2CO with respect to that of N2H+, NH3 and CH3OH can help us to\nunderstand its formation processes and time-scales. With the new SEPIA Band 5\nreceiver on APEX, we have observed the J=3-2 rotational lines of HDCO and D2CO\nat 193 GHz and 175 GHz toward three massive star forming regions hosting\nobjects at different evolutionary stages: two High-mass Starless Cores (HMSC),\ntwo High-mass Protostellar Objects (HMPOs), and one Ultracompact HII region\n(UCHII). By using previously obtained H2CO J=3-2 data, the deuteration\nfractions HDCO/H2CO and D2CO/HDCO are estimated. Our observations show that\nsingly-deuterated H2CO is detected toward all sources and that the deuteration\nfraction of H2CO increases from the HMSC to the HMPO phase and then sharply\ndecreases in the latest evolutionary stage (UCHII). The doubly-deuterated form\nof H2CO is detected only in the earlier evolutionary stages with D2CO/H2CO\nshowing a pattern that is qualitatively consistent with that of HDCO/H2CO,\nwithin current uncertainties. Our initial results show that H2CO may display a\nsimilar D$_{frac}$ pattern as that of CH3OH in massive young stellar objects.\nThis finding suggests that solid state reactions dominate its formation.\n", "title": "Gas vs. 
solid phase deuterated chemistry: HDCO and D$_2$CO in massive star-forming regions" }
null
null
null
null
true
null
15034
null
Default
null
null
null
{ "abstract": " Models involving branched structures are employed to describe several\nsupply-demand systems such as the structure of the nerves of a leaf, the system\nof roots of a tree and the nervous or cardiovascular systems. Given a flow\n(traffic path) that transports a given measure $\\mu^-$ onto a target measure\n$\\mu^+$, along a 1-dimensional network, the transportation cost per unit length\nis supposed in these models to be proportional to a concave power $\\alpha \\in\n(0,1)$ of the intensity of the flow.\nIn this paper we address an open problem in the book \"Optimal transportation\nnetworks\" by Bernot, Caselles and Morel and we improve the stability for\noptimal traffic paths in the Euclidean space $\\mathbb{R}^d$, with respect to\nvariations of the given measures $(\\mu^-,\\mu^+)$, which was known up to now\nonly for $\\alpha>1-\\frac1d$. We prove it for exponents $\\alpha>1-\\frac1{d-1}$\n(in particular, for every $\\alpha \\in (0,1)$ when $d=2$), for a fairly large\nclass of measures $\\mu^+$ and $\\mu^-$.\n", "title": "Improved stability of optimal traffic paths" }
null
null
[ "Mathematics" ]
null
true
null
15035
null
Validated
null
null
null
{ "abstract": " With the recent success of embeddings in natural language processing,\nresearch has been conducted into applying similar methods to code analysis.\nMost works attempt to process the code directly or use a syntactic tree\nrepresentation, treating it like sentences written in a natural language.\nHowever, none of the existing methods are sufficient to comprehend program\nsemantics robustly, due to structural features such as function calls,\nbranching, and interchangeable order of statements. In this paper, we propose a\nnovel processing technique to learn code semantics, and apply it to a variety\nof program analysis tasks. In particular, we stipulate that a robust\ndistributional hypothesis of code applies to both human- and machine-generated\nprograms. Following this hypothesis, we define an embedding space, inst2vec,\nbased on an Intermediate Representation (IR) of the code that is independent of\nthe source programming language. We provide a novel definition of contextual\nflow for this IR, leveraging both the underlying data- and control-flow of the\nprogram. We then analyze the embeddings qualitatively using analogies and\nclustering, and evaluate the learned representation on three different\nhigh-level tasks. We show that even without fine-tuning, a single RNN\narchitecture and fixed inst2vec embeddings outperform specialized approaches\nfor performance prediction (compute device mapping, optimal thread coarsening);\nand algorithm classification from raw code (104 classes), where we set a new\nstate-of-the-art.\n", "title": "Neural Code Comprehension: A Learnable Representation of Code Semantics" }
null
null
null
null
true
null
15036
null
Default
null
null
null
{ "abstract": " In this paper, we study electron wavepacket dynamics in electric and magnetic\nfields. We rigorously derive the semiclassical equations of electron dynamics\nin electric and magnetic fields. We do it both for free electron and electron\nin a periodic potential. We do this by introducing time varying wavevectors\n$k(t)$. In the presence of magnetic field, our wavepacket reproduces the\nclassical cyclotron orbits once the origin of the Schröedinger equation is\ncorrectly chosen to be center of cyclotron orbit. In the presence of both\nelectric and magnetic fields, our equations for wavepacket dynamics differ from\nclassical Lorentz force equations. We show that in a periodic potential, on\napplication of electric field, the electron wave function adiabatically follows\nthe wavefunction of a time varying Bloch wavevector $k(t)$, with its energies\nsuitably shifted with time. We derive the effective mass equation and discuss\nconduction in conductors and insulators.\n", "title": "Electron conduction in solid state via time varying wavevectors" }
null
null
null
null
true
null
15037
null
Default
null
null
null
{ "abstract": " We consider co-rotational wave maps from the $(1+d)$-dimensional Minkowski\nspace into the $d$-sphere for $d\\geq 3$ odd. This is an energy-supercritical\nmodel which is known to exhibit finite-time blowup via self-similar solutions.\nBased on a method developed by the second author and Schörkhuber, we prove\nthe asymptotic nonlinear stability of the \"ground-state\" self-similar solution.\n", "title": "On blowup of co-rotational wave maps in odd space dimensions" }
null
null
null
null
true
null
15038
null
Default
null
null
null
{ "abstract": " ROXs 12 (2MASS J16262803-2526477) is a young star hosting a directly imaged\ncompanion near the deuterium-burning limit. We present a suite of\nspectroscopic, imaging, and time-series observations to characterize the\nphysical and environmental properties of this system. Moderate-resolution\nnear-infrared spectroscopy of ROXs 12 B from Gemini-North/NIFS and Keck/OSIRIS\nreveals signatures of low surface gravity including weak alkali absorption\nlines and a triangular $H$-band pseudo-continuum shape. No signs of Pa$\\beta$\nemission are evident. As a population, however, we find that about half (46\n$\\pm$ 14\\%) of young ($\\lesssim$15 Myr) companions with masses $\\lesssim$20\n$M_\\mathrm{Jup}$ possess actively accreting subdisks detected via Pa$\\beta$\nline emission, which represents a lower limit on the prevalence of\ncircumplanetary disks in general as some are expected to be in a quiescent\nphase of accretion. The bolometric luminosity of the companion and age of the\nhost star (6$^{+4}_{-2}$ Myr) imply a mass of 17.5 $\\pm$ 1.5 $M_\\mathrm{Jup}$\nfor ROXs 12 B based on hot-start evolutionary models. We identify a wide (5100\nAU) tertiary companion to this system, 2MASS J16262774-2527247, which is\nheavily accreting and exhibits stochastic variability in its $K2$ light curve.\nBy combining $v$sin$i_*$ measurements with rotation periods from $K2$, we\nconstrain the line-of-sight inclinations of ROXs 12 A and 2MASS\nJ16262774-2527247 and find that they are misaligned by\n60$^{+7}_{-11}$$^{\\circ}$. In addition, the orbital axis of ROXs 12 B is likely\nmisaligned from the spin axis of its host star ROXs 12 A, suggesting that ROXs\n12 B formed akin to fragmenting binary stars or in an equatorial disk that was\ntorqued by the wide stellar tertiary.\n", "title": "The Young Substellar Companion ROXs 12 B: Near-Infrared Spectrum, System Architecture, and Spin-Orbit Misalignment" }
null
null
null
null
true
null
15039
null
Default
null
null
null
{ "abstract": " We study the emergence of dissipation in an atomic Josephson junction between\nweakly-coupled superfluid Fermi gases. We find that vortex-induced phase\nslippage is the dominant microscopic source of dissipation across the BEC-BCS\ncrossover. We explore different dynamical regimes by tuning the bias chemical\npotential between the two superfluid reservoirs. For small excitations, we\nobserve dissipation and phase coherence to coexist, with a resistive current\nfollowed by well-defined Josephson oscillations. We link the junction transport\nproperties to the phase-slippage mechanism, finding that vortex nucleation is\nprimarily responsible for the observed trends of conductance and critical\ncurrent. For large excitations, we observe the irreversible loss of coherence\nbetween the two superfluids, and transport cannot be described only within an\nuncorrelated phase-slip picture. Our findings open new directions for\ninvestigating the interplay between dissipative and superfluid transport in\nstrongly correlated Fermi systems, and general concepts in out-of-equlibrium\nquantum systems.\n", "title": "Connecting dissipation and phase slips in a Josephson junction between fermionic superfluids" }
null
null
null
null
true
null
15040
null
Default
null
null
null
{ "abstract": " Satellite conjunction analysis is the assessment of collision risk during a\nclose encounter between a satellite and another object in orbit. A\ncounterintuitive phenomenon has emerged in the conjunction analysis literature:\nprobability dilution, in which lower quality data paradoxically appear to\nreduce the risk of collision. We show that probability dilution is a symptom of\na fundamental deficiency in epistemic probability distributions. In\nprobabilistic representations of statistical inference, there are always false\npropositions that have a high probability of being assigned a high degree of\nbelief. We call this deficiency false confidence. In satellite conjunction\nanalysis, it results in a severe and persistent underestimation of collision\nrisk exposure.\nWe introduce the Martin--Liu validity criterion as a benchmark by which to\nidentify statistical methods that are free from false confidence. If expressed\nusing belief functions, such inferences will necessarily be non-additive. In\nsatellite conjunction analysis, we show that $K \\sigma$ uncertainty ellipsoids\nsatisfy the validity criterion. Performing collision avoidance maneuvers based\non ellipsoid overlap will ensure that collision risk is capped at the\nuser-specified level. Further, this investigation into satellite conjunction\nanalysis provides a template for recognizing and resolving false confidence\nissues as they occur in other problems of statistical inference.\n", "title": "Satellite conjunction analysis and the false confidence theorem" }
null
null
null
null
true
null
15041
null
Default
null
null
null
{ "abstract": " Recently, a proposal has been advanced to detect unconstitutional partisan\ngerrymandering with a simple formula called the efficiency gap. The efficiency\ngap is now working its way towards a possible landmark case in the Supreme\nCourt. This note explores some of its mathematical properties in light of the\nfact that it reduces to a straight proportional comparison of votes to seats.\nThough we offer several critiques, we assess that EG can still be a useful\ncomponent of a courtroom analysis. But a famous formula can take on a life of\nits own and this one will need to be watched closely.\n", "title": "A formula goes to court: Partisan gerrymandering and the efficiency gap" }
null
null
null
null
true
null
15042
null
Default
null
null
null
{ "abstract": " Affine $\\lambda$-terms are $\\lambda$-terms in which each bound variable\noccurs at most once and linear $\\lambda$-terms are $\\lambda$-terms in which\neach bound variables occurs once. and only once. In this paper we count the\nnumber of closed affine $\\lambda$-terms of size $n$, closed linear\n$\\lambda$-terms of size $n$, affine $\\beta$-normal forms of size $n$ and linear\n$\\beta$-normal forms of ise $n$, for different ways of measuring the size of\n$\\lambda$-terms. From these formulas, we show how we can derive programs for\ngenerating all the terms of size $n$ for each class. For this we use a specific\ndata structure, which are contexts taking into account all the holes at levels\nof abstractions.\n", "title": "Quantitative aspects of linear and affine closed lambda terms" }
null
null
null
null
true
null
15043
null
Default
null
null
null
{ "abstract": " Chemotherapeutic response of cancer cells to a given compound is one of the\nmost fundamental information one requires to design anti-cancer drugs. Recent\nadvances in producing large drug screens against cancer cell lines provided an\nopportunity to apply machine learning methods for this purpose. In addition to\ncytotoxicity databases, considerable amount of drug-induced gene expression\ndata has also become publicly available. Following this, several methods that\nexploit omics data were proposed to predict drug activity on cancer cells.\nHowever, due to the complexity of cancer drug mechanisms, none of the existing\nmethods are perfect. One possible direction, therefore, is to combine the\nstrengths of both the methods and the databases for improved performance. We\ndemonstrate that integrating a large number of predictions by the proposed\nmethod improves the performance for this task. The predictors in the ensemble\ndiffer in several aspects such as the method itself, the number of tasks method\nconsiders (multi-task vs. single-task) and the subset of data considered\n(sub-sampling). We show that all these different aspects contribute to the\nsuccess of the final ensemble. In addition, we attempt to use the drug screen\ndata together with two novel signatures produced from the drug-induced gene\nexpression profiles of cancer cell lines. Finally, we evaluate the method\npredictions by in vitro experiments in addition to the tests on data sets.The\npredictions of the methods, the signatures and the software are available from\n\\url{this http URL}.\n", "title": "Drug response prediction by ensemble learning and drug-induced gene expression signatures" }
null
null
[ "Statistics", "Quantitative Biology" ]
null
true
null
15044
null
Validated
null
null
null
{ "abstract": " We propose a technique for calculating and understanding the eigenvalue\ndistribution of sums of random matrices from the known distribution of the\nsummands. The exact problem is formidably hard. One extreme approximation to\nthe true density amounts to classical probability, in which the matrices are\nassumed to commute; the other extreme is related to free probability, in which\nthe eigenvectors are assumed to be in generic positions and sufficiently large.\nIn practice, free probability theory can give a good approximation of the\ndensity.\nWe develop a technique based on eigenvector localization/delocalization that\nworks very well for important problems of interest where free probability is\nnot sufficient, but certain uniformity properties apply. The\nlocalization/delocalization property appears in a convex combination parameter\nthat notably, is independent of any eigenvalue properties and yields accurate\neigenvalue density approximations.\nWe demonstrate this technique on a number of examples as well as discuss a\nmore general technique when the uniformity properties fail to apply.\n", "title": "Eigenvalue approximation of sums of Hermitian matrices from eigenvector localization/delocalization" }
null
null
[ "Physics", "Mathematics" ]
null
true
null
15045
null
Validated
null
null
null
{ "abstract": " In the present paper, new classes of wavelet functions are presented in the\nframework of Clifford analysis. Firstly, some classes of orthogonal polynomials\nare provided based on 2-parameters weight functions. Such classes englobe the\nwell known ones of Jacobi and Gegenbauer polynomials when relaxing one of the\nparameters. The discovered polynomial sets are next applied to introduce new\nwavelet functions. Reconstruction formula as well as Fourier-Plancherel rules\nhave been proved.\n", "title": "Some Ultraspheroidal Monogenic Clifford Gegenbauer Jacobi Polynomials and Associated Wavelets" }
null
null
null
null
true
null
15046
null
Default
null
null
null
{ "abstract": " Soft microrobots based on photoresponsive materials and controlled by light\nfields can generate a variety of different gaits. This inherent flexibility can\nbe exploited to maximize their locomotion performance in a given environment\nand used to adapt them to changing conditions. Albeit, because of the lack of\naccurate locomotion models, and given the intrinsic variability among\nmicrorobots, analytical control design is not possible. Common data-driven\napproaches, on the other hand, require running prohibitive numbers of\nexperiments and lead to very sample-specific results. Here we propose a\nprobabilistic learning approach for light-controlled soft microrobots based on\nBayesian Optimization (BO) and Gaussian Processes (GPs). The proposed approach\nresults in a learning scheme that is data-efficient, enabling gait optimization\nwith a limited experimental budget, and robust against differences among\nmicrorobot samples. These features are obtained by designing the learning\nscheme through the comparison of different GP priors and BO settings on a\nsemi-synthetic data set. The developed learning scheme is validated in\nmicrorobot experiments, resulting in a 115% improvement in a microrobot's\nlocomotion performance with an experimental budget of only 20 tests. These\nencouraging results lead the way toward self-adaptive microrobotic systems\nbased on light-controlled soft microrobots and probabilistic learning control.\n", "title": "Gait learning for soft microrobots controlled by light fields" }
null
null
null
null
true
null
15047
null
Default
null
null
null
{ "abstract": " We investigate, using the density matrix renormalization group, the evolution\nof the Nagaoka state with $t'$ hoppings that frustrate the hole kinetic energy\nin the $U=\\infty$ Hubbard model on the anisotropic triangular lattice and the\nsquare lattice with second-nearest neighbor hoppings. We find that the Nagaoka\nferromagnet survives up to a rather small $t'_c/t \\sim 0.2.$ At this critical\nvalue, there is a transition to an antiferromagnetic phase, that depends on the\nlattice: a ${\\bf Q}=(Q,0)$ spiral order, that continuously evolves with $t'$,\nfor the triangular lattice, and the usual ${\\bf Q}=(\\pi,\\pi)$ Néel order for\nthe square lattice. Remarkably, the local magnetization takes its classical\nvalue for all considered $t'$ ($t'/t \\le 1$). Our results show that the\nrecently found classical kinetic antiferromagnetism, a perfect counterpart of\nNagaoka ferromagnetism, is a generic phenomenon in these kinetically frustrated\nelectronic systems.\n", "title": "Evolution of Nagaoka phase with kinetic energy frustrating hoppings" }
null
null
null
null
true
null
15048
null
Default
null
null
null
{ "abstract": " Advances in Machine Learning (ML) have led to its adoption as an integral\ncomponent in many applications, including banking, medical diagnosis, and\ndriverless cars. To further broaden the use of ML models, cloud-based services\noffered by Microsoft, Amazon, Google, and others have developed ML-as-a-service\ntools as black-box systems. However, ML classifiers are vulnerable to\nadversarial examples: inputs that are maliciously modified can cause the\nclassifier to provide adversary-desired outputs. Moreover, it is known that\nadversarial examples generated on one classifier are likely to cause another\nclassifier to make the same mistake, even if the classifiers have different\narchitectures or are trained on disjoint datasets. This property, which is\nknown as transferability, opens up the possibility of attacking black-box\nsystems by generating adversarial examples on a substitute classifier and\ntransferring the examples to the target classifier. Therefore, the key to\nprotect black-box learning systems against the adversarial examples is to block\ntheir transferability. To this end, we propose a training method that, as the\ninput is more perturbed, the classifier smoothly outputs lower confidence on\nthe original label and instead predicts that the input is \"invalid\". In\nessence, we augment the output class set with a NULL label and train the\nclassifier to reject the adversarial examples by classifying them as NULL. In\nexperiments, we apply a wide range of attacks based on adversarial examples on\nthe black-box systems. We show that a classifier trained with the proposed\nmethod effectively resists against the adversarial examples, while maintaining\nthe accuracy on clean data.\n", "title": "Blocking Transferability of Adversarial Examples in Black-Box Learning Systems" }
null
null
null
null
true
null
15049
null
Default
null
null
null
{ "abstract": " The Wolynes theory of electronically nonadiabatic reaction rates [P. G.\nWolynes, J. Chem. Phys. 87, 6559 (1987)] is based on a saddle point\napproximation to the time integral of a reactive flux autocorrelation function\nin the nonadiabatic (golden rule) limit. The dominant saddle point is on the\nimaginary time axis at $t_{\\rm sp}=i\\lambda_{\\rm sp}\\hbar$, and provided\n$\\lambda_{\\rm sp}$ lies in the range $-\\beta/2\\le\\lambda_{\\rm sp}\\le\\beta/2$,\nit is straightforward to evaluate the rate constant using information obtained\nfrom an imaginary time path integral calculation. However, if $\\lambda_{\\rm\nsp}$ lies outside this range, as it does in the Marcus inverted regime, the\npath integral diverges. This has led to claims in the literature that Wolynes\ntheory cannot describe the correct behaviour in the inverted regime. Here we\nshow how the imaginary time correlation function obtained from a path integral\ncalculation can be analytically continued to $\\lambda_{\\rm sp}<-\\beta/2$, and\nthe continuation used to evaluate the rate in the inverted regime. Comparisons\nwith exact golden rule results for a spin-boson model and a more demanding\n(asymmetric and anharmonic) model of electronic predissociation show that the\ntheory it is just as accurate in the inverted regime as it is in the normal\nregime.\n", "title": "Analytic continuation of Wolynes theory into the Marcus inverted regime" }
null
null
null
null
true
null
15050
null
Default
null
null
null
{ "abstract": " Density-functional theory (DFT) has revolutionized computational prediction\nof atomic-scale properties from first principles in physics, chemistry and\nmaterials science. Continuing development of new methods is necessary for\naccurate predictions of new classes of materials and properties, and for\nconnecting to nano- and mesoscale properties using coarse-grained theories.\nJDFTx is a fully-featured open-source electronic DFT software designed\nspecifically to facilitate rapid development of new theories, models and\nalgorithms. Using an algebraic formulation as an abstraction layer, compact\nC++11 code automatically performs well on diverse hardware including GPUs. This\ncode hosts the development of joint density-functional theory (JDFT) that\ncombines electronic DFT with classical DFT and continuum models of liquids for\nfirst-principles calculations of solvated and electrochemical systems. In\naddition, the modular nature of the code makes it easy to extend and interface\nwith, facilitating the development of multi-scale toolkits that connect to ab\ninitio calculations, e.g. photo-excited carrier dynamics combining electron and\nphonon calculations with electromagnetic simulations.\n", "title": "JDFTx: software for joint density-functional theory" }
null
null
[ "Physics" ]
null
true
null
15051
null
Validated
null
null
null
{ "abstract": " Since the seminal observation of room-temperature laser emission from ZnO\nthin films and nanowires, numerous attempts have been carried out for detailed\nunderstanding of the lasing mechanism in ZnO. In spite of the extensive efforts\nperformed over the last decades, the origin of optical gain at room temperature\nis still a matter of considerable discussion. We show that ZnO microcrystals\nwith a size of a few micrometers exhibit purely excitonic lasing at room\ntemperature without showing any symptoms of electron-hole plasma emission. We\nthen present the distinct experimental evidence that the room-temperature\nexcitonic lasing is achieved not by exciton-exciton scattering, as has been\ngenerally believed, but by exciton-electron scattering. As the temperature is\nlowered below ~150 K, the lasing mechanism is shifted from the exciton-electron\nscattering to the exciton-exciton scattering. We also argue that the ease of\ncarrier diffusion plays a significant role in showing room-temperature\nexcitonic lasing.\n", "title": "Experimental realization of purely excitonic lasing in ZnO microcrystals at room temperature: transition from exciton-exciton to exciton-electron scattering" }
null
null
null
null
true
null
15052
null
Default
null
null
null
{ "abstract": " Spin- and angle-resolved photoemission spectroscopy is used to reveal that a\nlarge spin polarization is observable in the bulk centrosymmetric transition\nmetal dichalcogenide MoS2. It is found that the measured spin polarization can\nbe reversed by changing the handedness of incident circularly-polarized light.\nCalculations based on a three-step model of photoemission show that the valley\nand layer-locked spin-polarized electronic states can be selectively addressed\nby circularly-polarized light, therefore providing a novel route to probe these\nhidden spin-polarized states in inversion-symmetric systems as predicted by\nZhang et al. [Nature Physics 10, 387 (2014)].\n", "title": "Selective probing of hidden spin-polarized states in inversion-symmetric bulk MoS2" }
null
null
null
null
true
null
15053
null
Default
null
null
null
{ "abstract": " Modern machine learning techniques can be used to construct powerful models\nfor difficult collider physics problems. In many applications, however, these\nmodels are trained on imperfect simulations due to a lack of truth-level\ninformation in the data, which risks the model learning artifacts of the\nsimulation. In this paper, we introduce the paradigm of classification without\nlabels (CWoLa) in which a classifier is trained to distinguish statistical\nmixtures of classes, which are common in collider physics. Crucially, neither\nindividual labels nor class proportions are required, yet we prove that the\noptimal classifier in the CWoLa paradigm is also the optimal classifier in the\ntraditional fully-supervised case where all label information is available.\nAfter demonstrating the power of this method in an analytical toy example, we\nconsider a realistic benchmark for collider physics: distinguishing quark-\nversus gluon-initiated jets using mixed quark/gluon training samples. More\ngenerally, CWoLa can be applied to any classification problem where labels or\nclass proportions are unknown or simulations are unreliable, but statistical\nmixtures of the classes are available.\n", "title": "Classification without labels: Learning from mixed samples in high energy physics" }
null
null
null
null
true
null
15054
null
Default
null
null
null
{ "abstract": " Trending topics in microblogs such as Twitter are valuable resources to\nunderstand social aspects of real-world events. To enable deep analyses of such\ntrends, semantic annotation is an effective approach; yet the problem of\nannotating microblog trending topics is largely unexplored by the research\ncommunity. In this work, we tackle the problem of mapping trending Twitter\ntopics to entities from Wikipedia. We propose a novel model that complements\ntraditional text-based approaches by rewarding entities that exhibit a high\ntemporal correlation with topics during their burst time period. By exploiting\ntemporal information from the Wikipedia edit history and page view logs, we\nhave improved the annotation performance by 17-28\\%, as compared to the\ncompetitive baselines.\n", "title": "Semantic Annotation for Microblog Topics Using Wikipedia Temporal Information" }
null
null
null
null
true
null
15055
null
Default
null
null
null
{ "abstract": " Various optical methods for measuring positions of micro-objects in 3D have\nbeen reported in the literature. Nevertheless, the majority of them are not\nsuitable for real-time operation, which is needed, for example, for feedback\nposition control. In this paper, we present a method for real-time estimation\nof the position of micro-objects in 3D; the method is based on twin-beam\nillumination and it requires only a very simple hardware setup whose essential\npart is a standard image sensor without any lens. Performance of the proposed\nmethod is tested during a micro-manipulation task in which the estimated\nposition served as a feedback for the controller. The experiments show that the\nestimate is accurate to within ~3 um in the lateral position and ~7 um in the\naxial distance with the refresh rate of 10 Hz. Although the experiments are\ndone using spherical objects, the presented method could be modified to handle\nnon-spherical objects as well.\n", "title": "Twin-beam real-time position estimation of micro-objects in 3D" }
null
null
null
null
true
null
15056
null
Default
null
null
null
{ "abstract": " We explore the correlations between velocity and metallicity and the possible\ndistinct chemical signatures of the velocity over-densities of the local\nGalactic neighbourhood. We use the large spectroscopic survey RAVE and the\nGeneva Copenhagen Survey. We compare the metallicity distribution of regions in\nthe velocity plane ($v_R,v_\\phi$) with that of their symmetric counterparts\n($-v_R,v_\\phi$). We expect similar metallicity distributions if there are no\ntracers of a sub-population (e.g., a dispersed cluster, accreted stars), if the\ndisk of the Galaxy is axisymmetric, and if the orbital effects of the spiral\narms and the bar are weak. We find that the metallicity-velocity space of the\nsolar neighbourhood is highly patterned. A large fraction of the velocity plane\nshows differences in the metallicity distribution when comparing symmetric\n$v_R$ regions. The typical differences in the median metallicity are of $0.05$\ndex with a statistical significance of at least $95\\%$, and with values up to\n$0.6$ dex. For low azimuthal velocity $v_\\phi$, stars moving outwards in the\nGalaxy have on average higher metallicity than those moving inwards. These\ninclude stars in the Hercules and Hyades moving groups and other velocity\nbranch-like structures. For higher $v_\\phi$, the stars moving inwards have\nhigher metallicity than those moving outwards. The most likely interpretation\nof the metallicity asymmetry is that it is due to the orbital effects of the\nbar and the radial metallicity gradient of the disk. We present a simulation\nthat supports this idea. We have also discovered a positive gradient in\n$v_\\phi$ with respect to metallicity at high metallicities, apart from the two\nknown positive and negative gradients for the thick and thin disks,\nrespectively.\n", "title": "Asymmetric metallicity patterns in the stellar velocity space with RAVE" }
null
null
null
null
true
null
15057
null
Default
null
null
null
{ "abstract": " A new type of End-to-End system for text-dependent speaker verification is\npresented in this paper. Previously, using the phonetically\ndiscriminative/speaker discriminative DNNs as feature extractors for speaker\nverification has shown promising results. The extracted frame-level (DNN\nbottleneck, posterior or d-vector) features are equally weighted and aggregated\nto compute an utterance-level speaker representation (d-vector or i-vector). In\nthis work we use speaker discriminative CNNs to extract the noise-robust\nframe-level features. These features are smartly combined to form an\nutterance-level speaker vector through an attention mechanism. The proposed\nattention model takes the speaker discriminative information and the phonetic\ninformation to learn the weights. The whole system, including the CNN and\nattention model, is jointly optimized using an end-to-end criterion. The training\nalgorithm imitates exactly the evaluation process --- directly mapping a test\nutterance and a few target speaker utterances into a single verification score.\nThe algorithm can automatically select the most similar impostor for each\ntarget speaker to train the network. We demonstrated the effectiveness of the\nproposed end-to-end system on the Windows $10$ \"Hey Cortana\" speaker verification\ntask.\n", "title": "End-to-End Attention based Text-Dependent Speaker Verification" }
null
null
null
null
true
null
15058
null
Default
null
null
null
{ "abstract": " Process mining allows analysts to exploit logs of historical executions of\nbusiness processes to extract insights regarding the actual performance of\nthese processes. One of the most widely studied process mining operations is\nautomated process discovery. An automated process discovery method takes as\ninput an event log, and produces as output a business process model that\ncaptures the control-flow relations between tasks that are observed in or\nimplied by the event log. Various automated process discovery methods have been\nproposed in the past two decades, striking different tradeoffs between\nscalability, accuracy and complexity of the resulting models. However, these\nmethods have been evaluated in an ad-hoc manner, employing different datasets,\nexperimental setups, evaluation measures and baselines, often leading to\nincomparable conclusions and sometimes unreproducible results due to the use of\nclosed datasets. This article provides a systematic review and comparative\nevaluation of automated process discovery methods, using an open-source\nbenchmark and covering twelve publicly-available real-life event logs, twelve\nproprietary real-life event logs, and nine quality metrics. The results\nhighlight gaps and unexplored tradeoffs in the field, including the lack of\nscalability of some methods and a strong divergence in their performance with\nrespect to the different quality metrics used.\n", "title": "Automated Discovery of Process Models from Event Logs: Review and Benchmark" }
null
null
[ "Computer Science" ]
null
true
null
15059
null
Validated
null
null
null
{ "abstract": " Let $P_1,\\dots, P_n$ and $Q_1,\\dots, Q_n$ be convex polytopes in\n$\\mathbb{R}^n$ such that $P_i\\subset Q_i$. It is well-known that the mixed\nvolume has the monotonicity property: $V(P_1,\\dots,P_n)\\leq V(Q_1,\\dots,Q_n)$.\nWe give two criteria for when this inequality is strict in terms of essential\ncollections of faces as well as mixed polyhedral subdivisions. This geometric\nresult allows us to characterize sparse polynomial systems with Newton\npolytopes $P_1,\\dots,P_n$ whose number of isolated solutions equals the\nnormalized volume of the convex hull of $P_1\\cup\\dots\\cup P_n$. In addition, we\nobtain an analog of Cramer's rule for sparse polynomial systems.\n", "title": "Criteria for strict monotonicity of the mixed volume of convex polytopes" }
null
null
[ "Mathematics" ]
null
true
null
15060
null
Validated
null
null
null
{ "abstract": " Consider a coloring of a graph such that each vertex is assigned a fraction\nof each color, with the total amount of colors at each vertex summing to $1$.\nWe define the fractional defect of a vertex $v$ to be the sum of the overlaps\nwith each neighbor of $v$, and the fractional defect of the graph to be the\nmaximum of the defects over all vertices. Note that this coincides with the\nusual definition of defect if every vertex is monochromatic. We provide results\non the minimum fractional defect of $2$-colorings of some graphs.\n", "title": "Colorings with Fractional Defect" }
null
null
null
null
true
null
15061
null
Default
null
null
null
{ "abstract": " In several publications, the authors have presented a new way of thinking about the measurement theory\nsystem based on the error non-classification philosophy, which completely\noverthrows the existing measurement concept system of precision, trueness and\naccuracy. In this paper, by focusing on the issues of error's regularities and\neffect characteristics, the authors will give a thematic interpretation, and\nprove that the error's regularities actually come from different cognitive\nperspectives and are unable to be used for classifying errors, and that the\nerror's effect characteristics actually depend on artificial condition rules of\nrepeated measurement, and are still unable to be used for classifying errors.\nThus, from the perspectives of error's regularities and effect characteristics,\nthe existing error classification philosophy is still incorrect; and an\nuncertainty concept system, which must be interpreted by the error\nnon-classification philosophy, naturally becomes the only way out for\nmeasurement theory.\n", "title": "The new concepts of measurement error's regularities and effect characteristics" }
null
null
null
null
true
null
15062
null
Default
null
null
null
{ "abstract": " The Belief Propagation approximation, or cavity method, has been recently\napplied to several combinatorial optimization problems in its zero-temperature\nimplementation, the max-sum algorithm. In particular, recent developments to\nsolve the edge-disjoint paths problem and the prize-collecting Steiner tree\nproblem on graphs have shown remarkable results for several classes of graphs\nand for benchmark instances. Here we propose a generalization of these\ntechniques for two variants of the Steiner trees packing problem where multiple\n\"interacting\" trees have to be sought within a given graph. Depending on the\ninteraction among trees we distinguish the vertex-disjoint Steiner trees\nproblem, where trees cannot share nodes, from the edge-disjoint Steiner trees\nproblem, where edges cannot be shared by trees but nodes can be members of\nmultiple trees. Several practical problems of huge interest in network design\ncan be mapped into these two variants, for instance, the physical design of\nVery Large Scale Integration (VLSI) chips. The formalism described here relies\non two types of edge variables that allow us to formulate a message-passing\nalgorithm for the V-DStP and two algorithms for the E-DStP differing in the\nscaling of the computational time with respect to some relevant parameters. We\nwill show that one of the two formalisms used for the edge-disjoint variant\nallows us to map the max-sum update equations into a weighted maximum matching\nproblem over proper bipartite graphs. We developed a heuristic procedure based\non the max-sum equations that shows excellent performance on synthetic networks\n(in particular outperforming standard multi-step greedy procedures by large\nmargins) and on large benchmark instances of VLSI for which the optimal\nsolution is known, on which the algorithm found the optimum in two cases and\nthe gap to optimality was never larger than 4 %.\n", "title": "The cavity approach for Steiner trees packing problems" }
null
null
[ "Computer Science" ]
null
true
null
15063
null
Validated
null
null
null
{ "abstract": " Observational learning is a type of learning that occurs as a function of\nobserving, retaining and possibly replicating or imitating the behaviour of\nanother agent. It is a core mechanism appearing in various instances of social\nlearning and has been found to be employed in several intelligent species,\nincluding humans. In this paper, we investigate to what extent the explicit\nmodelling of other agents is necessary to achieve observational learning\nthrough machine learning. In particular, we argue that observational learning can\nemerge from pure Reinforcement Learning (RL), potentially coupled with memory.\nThrough simple scenarios, we demonstrate that an RL agent can leverage the\ninformation provided by the observations of another agent performing a task in\na shared environment. The other agent is only observed through the effect of\nits actions on the environment and never explicitly modeled. Two key aspects\nare borrowed from observational learning: i) the observer behaviour needs to\nchange as a result of viewing a 'teacher' (another agent) and ii) the observer\nneeds to be motivated somehow to engage in making use of the other agent's\nbehaviour. The latter is naturally modeled by RL, by correlating the learning\nagent's reward with the teacher agent's behaviour.\n", "title": "Observational Learning by Reinforcement Learning" }
null
null
null
null
true
null
15064
null
Default
null
null
null
{ "abstract": " Packet parsing is a key step in SDN-aware devices. Packet parsers in SDN\nnetworks need to be both reconfigurable and fast, to support the evolving\nnetwork protocols and the increasing multi-gigabit data rates. The combination\nof packet processing languages with FPGAs seems to be the perfect match for\nthese requirements. In this work, we develop an open-source FPGA-based\nconfigurable architecture for arbitrary packet parsing to be used in SDN\nnetworks. We generate low latency and high-speed streaming packet parsers\ndirectly from a packet processing program. Our architecture is pipelined and\nentirely modeled using templated C++ classes. The pipeline layout is derived\nfrom a parser graph that corresponds to a P4 code after a series of graph\ntransformation rounds. The RTL code is generated from the C++ description using\nXilinx Vivado HLS and synthesized with Xilinx Vivado. Our architecture achieves\n100 Gb/s data rate in a Xilinx Virtex-7 FPGA while reducing the latency by 45%\nand the LUT usage by 40% compared to the state-of-the-art.\n", "title": "P4-compatible High-level Synthesis of Low Latency 100 Gb/s Streaming Packet Parsers in FPGAs" }
null
null
[ "Computer Science" ]
null
true
null
15065
null
Validated
null
null
null
{ "abstract": " We address the question concerning the birational geometry of the strata of\nholomorphic and quadratic differentials. We show strata of holomorphic and\nquadratic differentials to be uniruled in small genus by constructing rational\ncurves via pencils on K3 and del Pezzo surfaces respectively. Restricting to\ngenus $3\\leq g\\leq6$, we construct projective bundles over rational varieties\nthat dominate the holomorphic strata with length at most $g-1$, hence showing\nin addition that these strata are unirational.\n", "title": "Uniruledness of Strata of Holomorphic Differentials in Small Genus" }
null
null
null
null
true
null
15066
null
Default
null
null
null
{ "abstract": " We present a microscopic theory for the Raman response of a clean multiband\nsuperconductor accounting for the effects of vertex corrections and long-range\nCoulomb interaction. The measured Raman intensity, $R(\\Omega)$, is proportional\nto the imaginary part of the fully renormalized particle-hole correlator with\nRaman form-factors $\\gamma(\\vec k)$. In a BCS superconductor, a bare Raman\nbubble is non-zero for any $\\gamma(\\vec k)$ and diverges at $\\Omega = 2\\Delta\n+0$, where $\\Delta$ is the largest gap along the Fermi surface. However, for\n$\\gamma(\\vec k) =$ const, the full $R(\\Omega)$ is expected to vanish due to\nparticle number conservation. It was long thought that this vanishing is due to\nthe singular screening by long-range Coulomb interaction. We argue that this\nvanishing actually holds due to vertex corrections from the same short-range\ninteraction that gives rise to superconductivity. We further argue that\nlong-range Coulomb interaction does not affect the Raman signal for $any$\n$\\gamma(\\vec k)$. We argue that vertex corrections eliminate the divergence at\n$2\\Delta$ and replace it with a maximum at a somewhat larger frequency. We also\nargue that vertex corrections give rise to sharp peaks in $R(\\Omega)$ at\n$\\Omega < 2\\Delta$, when $\\Omega$ coincides with the frequency of one of the\ncollective modes in a superconductor, e.g., the Leggett mode, the\nBardasis-Schrieffer mode, or an excitonic mode.\n", "title": "Conservation laws, vertex corrections, and screening in Raman spectroscopy" }
null
null
null
null
true
null
15067
null
Default
null
null
null
{ "abstract": " We show that the distribution of symmetry of a naturally reductive nilpotent\nLie group coincides with the invariant distribution induced by the set of fixed\nvectors of the isotropy. This extends a known result on compact naturally\nreductive spaces. We also address the study of the quotient by the foliation of\nsymmetry.\n", "title": "The distribution of symmetry of a naturally reductive nilpotent Lie group" }
null
null
null
null
true
null
15068
null
Default
null
null
null
{ "abstract": " We fabricate high-mobility p-type few-layer WSe2 field-effect transistors and\nsurprisingly observe a series of quantum Hall (QH) states following an\nunconventional sequence predominated by odd-integer states under a moderate\nstrength magnetic field. By tilting the magnetic field, we discover Landau\nlevel (LL) crossing effects at ultra-low coincident angles, revealing that the\nZeeman energy is about three times as large as the cyclotron energy near the\nvalence band top at {\\Gamma} valley. This result implies the significant roles\nplayed by the exchange interactions in p-type few-layer WSe2, in which\nitinerant or QH ferromagnetism likely occurs. Evidently, the {\\Gamma} valley of\nfew-layer WSe2 offers a unique platform with unusually heavy hole-carriers and\na substantially enhanced g-factor for exploring strongly correlated phenomena.\n", "title": "Odd-integer quantum Hall states and giant spin susceptibility in p-type few-layer WSe2" }
null
null
null
null
true
null
15069
null
Default
null
null
null
{ "abstract": " In the field of reinforcement learning there has been recent progress towards\nsafety and high-confidence bounds on policy performance. However, to our\nknowledge, no practical methods exist for determining high-confidence policy\nperformance bounds in the inverse reinforcement learning setting---where the\ntrue reward function is unknown and only samples of expert behavior are given.\nWe propose a sampling method based on Bayesian inverse reinforcement learning\nthat uses demonstrations to determine practical high-confidence upper bounds on\nthe $\\alpha$-worst-case difference in expected return between any evaluation\npolicy and the optimal policy under the expert's unknown reward function. We\nevaluate our proposed bound on both a standard grid navigation task and a\nsimulated driving task and achieve tighter and more accurate bounds than a\nfeature count-based baseline. We also give examples of how our proposed bound\ncan be utilized to perform risk-aware policy selection and risk-aware policy\nimprovement. Because our proposed bound requires several orders of magnitude\nfewer demonstrations than existing high-confidence bounds, it is the first\npractical method that allows agents that learn from demonstration to express\nconfidence in the quality of their learned policy.\n", "title": "Efficient Probabilistic Performance Bounds for Inverse Reinforcement Learning" }
null
null
null
null
true
null
15070
null
Default
null
null
null
{ "abstract": " In this thesis, we study two problems based on clustering algorithms. In the\nfirst problem, we study the role of visual attributes using an agglomerative\nclustering algorithm to whittle down the search area where the number of\nclasses is high to improve the performance of clustering. We observe that as we\nadd more attributes, the clustering performance increases overall. In the\nsecond problem, we study the role of clustering in aggregating templates in a\n1:N open set protocol using multi-shot video as a probe. We observe that by\nincreasing the number of clusters, the performance increases with respect to\nthe baseline and reaches a peak, after which increasing the number of clusters\ncauses the performance to degrade. Experiments are conducted using recently\nintroduced unconstrained IARPA Janus IJB-A, CS2, and CS3 face recognition\ndatasets.\n", "title": "Face Identification and Clustering" }
null
null
null
null
true
null
15071
null
Default
null
null
null
{ "abstract": " Comparative molecular dynamics simulations of a hexamer cluster of the protic\nionic liquid ethylammonium nitrate are performed using density functional\ntheory (DFT) and density functional-based tight binding (DFTB) methods. The\nfocus is on assessing the performance of the DFTB approach to describe the\ndynamics and infrared spectroscopic signatures of hydrogen bonding between the\nions. Average geometries and geometric correlations are found to be rather\nsimilar. The same holds true for the far-infrared spectral region. Differences\nare more pronounced for the NH- and CH-stretching band, where DFTB predicts a\nbroader intensity distribution. DFTB completely fails to describe the\nfingerprint range shaped by nitrate anion vibrations. Finally, charge\nfluctuations within the H-bonds are characterized, yielding moderate\ndependencies on geometry. On the basis of these results, DFTB is recommended for\nthe simulation of H-bond properties of this type of ionic liquid.\n", "title": "Properties of Hydrogen Bonds in the Protic Ionic Liquid Ethylammonium Nitrate. DFT versus DFTB Molecular Dynamics" }
null
null
null
null
true
null
15072
null
Default
null
null
null
{ "abstract": " We study the non-stationary stochastic multiarmed bandit (MAB) problem and\npropose two generic algorithms, namely, the limited memory deterministic\nsequencing of exploration and exploitation (LM-DSEE) and the Sliding-Window\nUpper Confidence Bound# (SW-UCB#). We rigorously analyze these algorithms in\nabruptly-changing and slowly-varying environments and characterize their\nperformance. We show that the expected cumulative regret for these algorithms\nunder either of the environments is upper bounded by sublinear functions of\ntime, i.e., the time average of the regret asymptotically converges to zero. We\ncomplement our analytic results with numerical illustrations.\n", "title": "On Abruptly-Changing and Slowly-Varying Multiarmed Bandit Problems" }
null
null
null
null
true
null
15073
null
Default
null
null
null
{ "abstract": " A main goal of NASA's Kepler Mission is to establish the frequency of\npotentially habitable Earth-size planets (eta Earth). Relatively few such\ncandidates identified by the mission can be confirmed to be rocky via dynamical\nmeasurement of their mass. Here we report an effort to validate 18 of them\nstatistically using the BLENDER technique, by showing that the likelihood they\nare true planets is far greater than that of a false positive. Our analysis\nincorporates follow-up observations including high-resolution optical and\nnear-infrared spectroscopy, high-resolution imaging, and information from the\nanalysis of the flux centroids of the Kepler observations themselves. While\nmany of these candidates have been previously validated by others, the\nconfidence levels reported typically ignore the possibility that the planet may\ntransit a different star than the target along the same line of sight. If that\nwere the case, a planet that appears small enough to be rocky may actually be\nconsiderably larger and therefore less interesting from the point of view of\nhabitability. We take this into consideration here, and are able to validate 15\nof our candidates at a 99.73% (3 sigma) significance level or higher, and the\nother three at slightly lower confidence. We characterize the GKM host stars\nusing available ground-based observations and provide updated parameters for\nthe planets, with sizes between 0.8 and 2.9 Earth radii. Seven of them\n(KOI-0438.02, 0463.01, 2418.01, 2626.01, 3282.01, 4036.01, and 5856.01) have a\nbetter than 50% chance of being smaller than 2 Earth radii and being in the\nhabitable zone of their host stars.\n", "title": "Validation of small Kepler transiting planet candidates in or near the habitable zone" }
null
null
null
null
true
null
15074
null
Default
null
null
null
{ "abstract": " Convolutional Neural Networks (CNNs) have shown great success in many areas,\nincluding complex image classification tasks. However, they require a lot of\nmemory and computation, which hinders them from running on relatively\nlow-end smart devices such as smart phones. We propose a CNN compression method\nbased on CP-decomposition and the Tensor Power Method. We also propose an iterative\nfine tuning, with which we fine-tune the whole network after decomposing each\nlayer, but before decomposing the next layer. Significant reduction in memory\nand computation cost is achieved compared to state-of-the-art previous work,\nwith no additional accuracy loss.\n", "title": "CP-decomposition with Tensor Power Method for Convolutional Neural Networks Compression" }
null
null
null
null
true
null
15075
null
Default
null
null
null
{ "abstract": " We present a position paper advocating the notion that Stoic philosophy and\nethics can inform the development of ethical A.I. systems. This is in sharp\ncontrast to most work on building ethical A.I., which has focused on\nUtilitarian or Deontological ethical theories. We relate ethical A.I. to\nseveral core Stoic notions, including the dichotomy of control, the four\ncardinal virtues, the ideal Sage, Stoic practices, and Stoic perspectives on\nemotion or affect. More generally, we put forward an ethical view of A.I. that\nfocuses more on internal states of the artificial agent rather than on external\nactions of the agent. We provide examples relating to near-term A.I. systems as\nwell as hypothetical superintelligent agents.\n", "title": "Stoic Ethics for Artificial Agents" }
null
null
null
null
true
null
15076
null
Default
null
null
null
{ "abstract": " We provide expressions for the nonperturbative matching of the effective\nfield theory describing dark matter interactions with quarks and gluons to the\neffective theory of nonrelativistic dark matter interacting with\nnonrelativistic nucleons. We give the leading and subleading order expressions\nin chiral counting. In general, a single partonic operator already matches onto\nseveral nonrelativistic operators at leading order in chiral counting. Thus,\nkeeping only one operator at a time in the nonrelativistic effective theory\ndoes not properly describe the scattering in direct detection. Moreover, the\nmatching of the axial--axial partonic level operator, as well as the matching\nof the operators coupling DM to the QCD anomaly term, naively includes momentum\nsuppressed terms. However, these are still of leading chiral order due to pion\npoles and can be numerically important. We illustrate the impact of these\neffects with several examples.\n", "title": "From quarks to nucleons in dark matter direct detection" }
null
null
null
null
true
null
15077
null
Default
null
null
null
{ "abstract": " We develop refined Strichartz estimates at $L^2$ regularity for a class of\ntime-dependent Schrödinger operators. Such refinements begin to\ncharacterize the near-optimizers of the Strichartz estimate, and play a pivotal\npart in the global theory of mass-critical NLS. On one hand, the harmonic\nanalysis is quite subtle in the $L^2$-critical setting due to an enormous group\nof symmetries, while on the other hand, the spacetime Fourier analysis employed\nby the existing approaches to the constant-coefficient equation are not adapted\nto nontranslation-invariant situations, especially with potentials as large as\nthose considered in this article.\nUsing phase space techniques, we reduce to proving certain analogues of\n(adjoint) bilinear Fourier restriction estimates. Then we extend Tao's bilinear\nrestriction estimate for paraboloids to more general Schrödinger operators.\nAs a particular application, the resulting inverse Strichartz theorem and\nprofile decompositions constitute a key harmonic analysis input for studying\nlarge data solutions to the $L^2$-critical NLS with a harmonic oscillator\npotential in dimensions $\\ge 2$. This article builds on recent work of Killip,\nVisan, and the author in one space dimension.\n", "title": "Sharpened Strichartz estimates and bilinear restriction for the mass-critical quantum harmonic oscillator" }
null
null
null
null
true
null
15078
null
Default
null
null
null
{ "abstract": " We present a test for determining if a substochastic matrix is convergent. By\nestablishing a duality between weakly chained diagonally dominant (w.c.d.d.)\nL-matrices and convergent substochastic matrices, we show that this test can be\ntrivially extended to determine whether a weakly diagonally dominant (w.d.d.)\nmatrix is a nonsingular M-matrix. The test's runtime is linear in the order of\nthe input matrix if it is sparse and quadratic if it is dense. This is a\npartial strengthening of the cubic test in [J. M. Peña., A stable test to\ncheck if a matrix is a nonsingular M-matrix, Math. Comp., 247, 1385-1392,\n2004]. As a by-product of our analysis, we prove that a nonsingular w.d.d.\nM-matrix is a w.c.d.d. L-matrix, a fact whose converse has been known since at\nleast 1964. We point out that this strengthens some recent results on\nM-matrices in the literature.\n", "title": "A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-matrix" }
null
null
null
null
true
null
15079
null
Default
null
null
null
{ "abstract": " Search engines play an important role in our everyday lives by assisting us\nin finding the information we need. When we input a complex query, however,\nresults are often far from satisfactory. In this work, we introduce a query\nreformulation system based on a neural network that rewrites a query to\nmaximize the number of relevant documents returned. We train this neural\nnetwork with reinforcement learning. The actions correspond to selecting terms\nto build a reformulated query, and the reward is the document recall. We\nevaluate our approach on three datasets against strong baselines and show a\nrelative improvement of 5-20% in terms of recall. Furthermore, we present a\nsimple method to estimate a conservative upper-bound performance of a model in\na particular environment and verify that there is still large room for\nimprovements.\n", "title": "Task-Oriented Query Reformulation with Reinforcement Learning" }
null
null
[ "Computer Science" ]
null
true
null
15080
null
Validated
null
null
null
{ "abstract": " The reverse space-time (RST) Sine-Gordon, Sinh-Gordon and nonlinear\nSchrödinger equations were recently introduced and shown to be integrable\ninfinite-dimensional dynamical systems. The inverse scattering transform (IST)\nfor rapidly decaying data was also constructed. In this paper, IST for these\nequations with nonzero boundary conditions (NZBCs) at infinity is presented.\nThe NZBC problem is more complicated due to the associated branching structure\nof the associated linear eigenfunctions. With constant amplitude at infinity,\nfour cases are analyzed; they correspond to two different signs of nonlinearity\nand two different values of the phase at infinity. Special soliton solutions\nare discussed and explicit 1-soliton and 2-soliton solutions are found. In\nterms of IST, the difference between the RST Sine-Gordon/Sinh-Gordon equations\nand the RST NLS equation is the time dependence of the scattering data.\nSpatially dependent boundary conditions are also briefly considered.\n", "title": "Inverse scattering transform for the nonlocal reverse space-time Sine-Gordon, Sinh-Gordon and nonlinear Schrödinger equations with nonzero boundary conditions" }
null
null
null
null
true
null
15081
null
Default
null
null
null
{ "abstract": " We present a study on the impact of Mn$^{3+}$ substitution in the\ngeometrically frustrated Ising garnet Ho$_3$Ga$_5$O$_{12}$ using bulk magnetic\nmeasurements and low temperature powder neutron diffraction. We find that the\ntransition temperature, $T_N$ = 5.8 K, for Ho$_3$MnGa$_4$O$_{12}$ is raised by\nalmost 20 when compared to Ho$_3$Ga$_5$O$_{12}$. Powder neutron diffraction on\nHo$_3$Mn$_x$Ga$_{5-x}$O$_{12}$ ($x$ = 0.5, 1) below $T_N$ shows the formation\nof a long range ordered state with $\\mathbf{k}$ = (0,0,0). Ho$^{3+}$\nspins are aligned antiferromagnetically along the six crystallographic axes\nwith no resultant moment while the Mn$^{3+}$ spins are oriented along the body\ndiagonals, such that there is a net moment along [111]. The magnetic structure\ncan be visualised as ten-membered rings of corner-sharing triangles of\nHo$^{3+}$ spins with the Mn$^{3+}$ spins ferromagnetically coupled to each\nindividual Ho$^{3+}$ spin in the triangle. Substitution of Mn$^{3+}$ completely\nrelieves the magnetic frustration with $f = \\theta_{CW}/T_N \\approx 1.1$ for\nHo$_3$MnGa$_4$O$_{12}$.\n", "title": "Relieving the frustration through Mn$^{3+}$ substitution in Holmium Gallium Garnet" }
null
null
null
null
true
null
15082
null
Default
null
null
null
{ "abstract": " Given a sample of bids from independent auctions, this paper examines the\nquestion of inference on auction fundamentals (e.g. valuation distributions,\nwelfare measures) under weak assumptions on information structure. The question\nis important as it allows us to learn about the valuation distribution in a\nrobust way, i.e., without assuming that a particular information structure\nholds across observations. We leverage the recent contributions of\n\\cite{Bergemann2013} in the robust mechanism design literature that exploit the\nlink between Bayesian Correlated Equilibria and Bayesian Nash Equilibria in\nincomplete information games to construct an econometrics framework for\nlearning about auction fundamentals using observed data on bids. We showcase\nour construction of identified sets in private value and common value auctions.\nOur approach for constructing these sets inherits the computational simplicity\nof solving for correlated equilibria: checking whether a particular valuation\ndistribution belongs to the identified set is as simple as determining whether\na {\\it linear} program is feasible. A similar linear program can be used to\nconstruct the identified set on various welfare measures and counterfactual\nobjects. For inference and to summarize statistical uncertainty, we propose\nnovel finite sample methods using tail inequalities that are used to construct\nconfidence regions on sets. We also highlight methods based on Bayesian\nbootstrap and subsampling. A set of Monte Carlo experiments show adequate\nfinite sample properties of our inference procedures. We illustrate our methods\nusing data from OCS auctions.\n", "title": "Inference on Auctions with Weak Assumptions on Information" }
null
null
null
null
true
null
15083
null
Default
null
null
null
{ "abstract": " Contemporary web pages with increasingly sophisticated interfaces rival\ntraditional desktop applications for interface complexity and are often called\nweb applications or RIA (Rich Internet Applications). They often require the\nexecution of JavaScript in a web browser and can call AJAX requests to\ndynamically generate the content, reacting to user interaction. From the\nautomatic data acquisition point of view, thus, it is essential to be able to\ncorrectly render web pages and mimic user actions to obtain relevant data from\nthe web page content. Briefly, to obtain data through existing Web interfaces\nand transform it into structured form, contemporary wrappers should be able to:\n1) interact with sophisticated interfaces of web applications; 2) precisely\nacquire relevant data; 3) scale with the number of crawled web pages or states\nof web application; 4) have an embeddable programming API for integration with\nexisting web technologies. OXPath is a state-of-the-art technology, which is\ncompliant with these requirements and demonstrated its efficiency in\ncomprehensive experiments. OXPath integrates Firefox for correct rendering of\nweb pages and extends XPath 1.0 for the DOM node selection, interaction, and\nextraction. It provides means for converting extracted data into different\nformats, such as XML, JSON, CSV, and saving data into relational databases.\nThis tutorial explains main features of the OXPath language and the setup of\na suitable working environment. The guidelines for using OXPath are provided in\nthe form of prototypical examples.\n", "title": "Introduction to OXPath" }
null
null
null
null
true
null
15084
null
Default
null
null
null
{ "abstract": " In this work, we conducted a survey on different registration algorithms and\ninvestigated their suitability for hyperspectral historical image registration\napplications. After the evaluation of different algorithms, we chose an\nintensity based registration algorithm with a curved transformation model. For\nthe transformation model, we selected cubic B-splines since they should be\ncapable of coping with all non-rigid deformations in our hyperspectral images.\nFrom a number of similarity measures, we found that residual complexity and\nlocalized mutual information are well suited for the task at hand. In our\nevaluation, both measures show an acceptable performance in handling all\ndifficulties, e.g., capture range, non-stationary and spatially varying\nintensity distortions or multi-modality that occur in our application.\n", "title": "Image Registration for the Alignment of Digitized Historical Documents" }
null
null
null
null
true
null
15085
null
Default
null
null
null
{ "abstract": " Person Re-Identification (person re-id) is a crucial task due to its\napplications in visual surveillance and human-computer interaction. In this work, we present\na novel joint Spatial and Temporal Attention Pooling Network (ASTPN) for\nvideo-based person re-identification, which enables the feature extractor to be\naware of the current input video sequences, in a way that interdependency from\nthe matching items can directly influence the computation of each other's\nrepresentation. Specifically, the spatial pooling layer is able to select\nregions from each frame, while the attention temporal pooling performed can\nselect informative frames over the sequence, both pooling guided by the\ninformation from distance matching. Experiments are conducted on the iLIDS-VID,\nPRID-2011 and MARS datasets and the results demonstrate that this approach\noutperforms existing state-of-the-art methods. We also analyze how the joint\npooling in both dimensions can boost the person re-id performance more\neffectively than using either of them separately.\n", "title": "Jointly Attentive Spatial-Temporal Pooling Networks for Video-based Person Re-Identification" }
null
null
null
null
true
null
15086
null
Default
null
null
null
{ "abstract": " Let $S=\\{x_1,x_2,\\dots,x_n\\}$ be a set of distinct positive integers, and let\n$f$ be an arithmetical function. The GCD matrix $(S)_f$ on $S$ associated with\n$f$ is defined as the $n\\times n$ matrix having $f$ evaluated at the greatest\ncommon divisor of $x_i$ and $x_j$ as its $ij$ entry. The LCM matrix $[S]_f$ is\ndefined similarly. We consider inertia, positive definiteness and $\\ell_p$ norm\nof GCD and LCM matrices and their unitary analogs. Proofs are based on matrix\nfactorizations and convolutions of arithmetical functions.\n", "title": "Inertia, positive definiteness and $\\ell_p$ norm of GCD and LCM matrices and their unitary analogs" }
null
null
null
null
true
null
15087
null
Default
null
null
null
{ "abstract": " This paper investigates two strategies to reduce the communication delay in\nfuture wireless networks: traffic dispersion and network densification. A\nhybrid scheme that combines these two strategies is also considered. The\nprobabilistic delay and effective capacity are used to evaluate performance.\nFor probabilistic delay, the violation probability of delay, i.e., the\nprobability that the delay exceeds a given tolerance level, is characterized in\nterms of upper bounds, which are derived by applying stochastic network\ncalculus theory. In addition, to characterize the maximum affordable arrival\ntraffic for mmWave systems, the effective capacity, i.e., the service\ncapability with a given quality-of-service (QoS) requirement, is studied. The\nderived bounds on the probabilistic delay and effective capacity are validated\nthrough simulations. These numerical results show that, for a given average\nsystem gain, traffic dispersion, network densification, and the hybrid scheme\nexhibit different potentials to reduce the end-to-end communication delay. For\ninstance, traffic dispersion outperforms network densification, given high\naverage system gain and arrival rate, while it could be the worst option,\notherwise. Furthermore, it is revealed that, increasing the number of\nindependent paths and/or relay density is always beneficial, while the\nperformance gain is related to the arrival rate and average system gain,\njointly. Therefore, a proper transmission scheme should be selected to optimize\nthe delay performance, according to the given conditions on arrival traffic and\nsystem service capability.\n", "title": "Low-Latency Millimeter-Wave Communications: Traffic Dispersion or Network Densification?" }
null
null
null
null
true
null
15088
null
Default
null
null
null
{ "abstract": " Suppose $\\Omega, A \\subseteq \\RR\\setminus\\Set{0}$ are two sets, both of mixed\nsign, that $\\Omega$ is Lebesgue measurable and $A$ is a discrete set. We study\nthe problem of when $A \\cdot \\Omega$ is a (multiplicative) tiling of the real\nline, that is when almost every real number can be uniquely written as a\nproduct $a\\cdot \\omega$, with $a \\in A$, $\\omega \\in \\Omega$. We study both the\nstructure of the set of multiples $A$ and the structure of the tile $\\Omega$.\nWe prove strong results in both cases. These results are somewhat analogous to\nthe known results about the structure of translational tiling of the real line.\nThere is, however, an extra layer of complexity due to the presence of sign in\nthe sets $A$ and $\\Omega$, which makes multiplicative tiling roughly equivalent\nto translational tiling on the larger group $\\ZZ_2 \\times \\RR$.\n", "title": "The structure of multiplicative tilings of the real line" }
null
null
null
null
true
null
15089
null
Default
null
null
null
{ "abstract": " Modern investigation in economics and in other sciences requires the ability\nto store, share, and replicate results and methods of experiments that are\noften multidisciplinary and yield a massive amount of data. Given the\nincreasing complexity and growing interaction across diverse bodies of\nknowledge it is becoming imperative to define a platform to properly support\ncollaborative research and track origin, accuracy and use of data. This paper\nstarts by defining a set of methods leveraging scientific principles and\nadvocating the importance of those methods in multidisciplinary, computer\nintensive fields like computational finance. The next part of this paper\ndefines a class of systems called scientific support systems, vis-a-vis usages\nin other research fields such as bioinformatics, physics and engineering. We\noutline a basic set of fundamental concepts, and list our goals and motivation\nfor leveraging such systems to enable large-scale investigation, \"crowd powered\nscience\", in economics. The core of this paper provides an outline of FRACTI in\nfive steps. First we present definitions related to scientific support systems\nintrinsic to finance and describe common characteristics of financial use\ncases. The second step concentrates on what can be exchanged through the\ndefinition of shareable entities called contributions. The third step is the\ndescription of a classification system for building blocks of the conceptual\nframework, called facets. The fourth step introduces the meta-model that will\nenable provenance tracking and representation of data fragments and simulation.\nFinally we describe intended cases of use to highlight main strengths of\nFRACTI: application of the scientific method for investigation in computational\nfinance, large-scale collaboration and simulation.\n", "title": "Supporting Crowd-Powered Science in Economics: FRACTI, a Conceptual Framework for Large-Scale Collaboration and Transparent Investigation in Financial Markets" }
null
null
null
null
true
null
15090
null
Default
null
null
null
{ "abstract": " In this work, we have characterized changes in the dynamics of a\ntwo-dimensional relativistic standard map in the presence of dissipation and\nspecially when it is submitted to thermal effects modeled by a Gaussian noise\nreservoir. By the addition of thermal noise in the dissipative relativistic\nstandard map (DRSM) it is possible to suppress typical stable periodic\nstructures (SPSs) embedded in the chaotic domains of parameter space for large\nenough temperature strengths. Smaller SPSs are first affected by thermal\neffects, starting from their borders, as a function of temperature. To estimate\nthe temperature strength capable of destroying those SPSs we use the\nlargest Lyapunov exponent to obtain the critical temperature ($T_C$) diagrams.\nFor critical temperatures the chaotic behavior takes place with the suppression\nof periodic motion, although the temperature strengths considered in this work\nare not large enough to convert the deterministic features of the underlying system\ninto stochastic ones.\n", "title": "The effect of temperature on generic stable periodic structures in the parameter space of dissipative relativistic standard map" }
null
null
[ "Physics" ]
null
true
null
15091
null
Validated
null
null
null
{ "abstract": " Given a field $F$ of $\\operatorname{char}(F)=2$, we define $u^n(F)$ to be the\nmaximal dimension of an anisotropic form in $I_q^n F$. For $n=1$ it recaptures\nthe definition of $u(F)$. We study the relations between this value and the\nsymbol length of $H_2^n(F)$, denoted by $sl_2^n(F)$. We show for any $n \\geq 2$\nthat if $2^n \\leq u^n(F) \\leq u^2(F) < \\infty$ then $sl_2^n(F) \\leq\n\\prod_{i=2}^n (\\frac{u^i(F)}{2}+1-2^{i-1})$. As a result, if $u(F)$ is finite\nthen $sl_2^n(F)$ is finite for any $n$, a fact which was previously proven when\n$\\operatorname{char}(F) \\neq 2$ by Saltman and Krashen. We also show that if\n$sl_2^n(F)=1$ then $u^n(F)$ is either $2^n$ or $2^{n+1}$.\n", "title": "The $u^n$-invariant and the Symbol Length of $H_2^n(F)$" }
null
null
[ "Mathematics" ]
null
true
null
15092
null
Validated
null
null
null
{ "abstract": " Being able to recognize emotions in human users is considered a highly\ndesirable trait in Human-Robot Interaction (HRI) scenarios. However, most\ncontemporary approaches rarely attempt to apply recognized emotional features\nin an active manner to modulate robot decision-making and dialogue for the\nbenefit of the user. In this position paper, we propose a method of\nincorporating recognized emotions into a Reinforcement Learning (RL) based\ndialogue management module that adapts its dialogue responses in order to\nattempt to make cognitive training tasks, like the 2048 Puzzle Game, more\nenjoyable for the users.\n", "title": "An Affective Robot Companion for Assisting the Elderly in a Cognitive Game Scenario" }
null
null
null
null
true
null
15093
null
Default
null
null
null
{ "abstract": " Sufficient statistics are derived for the population size and parameters of\ncommonly used closed population mark-recapture models. Rao-Blackwellization\ndetails for improving estimators that are not functions of the statistics are\npresented. As Rao-Blackwellization entails enumerating all sample reorderings\nconsistent with the sufficient statistic, Markov chain Monte Carlo resampling\nprocedures are provided to approximate the computationally intensive\nestimators. Simulation studies demonstrate that significant improvements can be\nmade with the strategy. Supplementary materials for this article are available\nonline.\n", "title": "Rao-Blackwellization to give Improved Estimates in Multi-List Studies" }
null
null
[ "Statistics" ]
null
true
null
15094
null
Validated
null
null
null
{ "abstract": " In this paper, we study the Bernstein polynomial model for estimating the\nmultivariate distribution functions and densities with bounded support. As a\nmixture model of multivariate beta distributions, the maximum (approximate)\nlikelihood estimate can be obtained using the EM algorithm. A change-point method\nof choosing optimal degrees of the proposed Bernstein polynomial model is\npresented. Under some conditions the optimal rate of convergence in the mean\n$\\chi^2$-divergence of the new density estimator is shown to be nearly parametric.\nThe method is illustrated by an application to a real data set. Finite sample\nperformance of the proposed method is also investigated by simulation study and\nis shown to be much better than the kernel density estimate but close to the\nparametric ones.\n", "title": "Bernstein Polynomial Model for Nonparametric Multivariate Density" }
null
null
[ "Statistics" ]
null
true
null
15095
null
Validated
null
null
null
{ "abstract": " The \\emph{longest common extension} (\\emph{LCE}) problem is to preprocess a\ngiven string $w$ of length $n$ so that the length of the longest common prefix\nbetween suffixes of $w$ that start at any two given positions is answered\nquickly. In this paper, we present a data structure of $O(z \\tau^2 +\n\\frac{n}{\\tau})$ words of space which answers LCE queries in $O(1)$ time and\ncan be built in $O(n \\log \\sigma)$ time, where $1 \\leq \\tau \\leq \\sqrt{n}$ is a\nparameter, $z$ is the size of the Lempel-Ziv 77 factorization of $w$ and\n$\\sigma$ is the alphabet size. This is an \\emph{encoding} data structure, i.e.,\nit does not access the input string $w$ when answering queries and thus $w$ can\nbe deleted after preprocessing. On top of this main result, we obtain further\nresults using (variants of) our LCE data structure, which include the\nfollowing:\n- For highly repetitive strings where the $z\\tau^2$ term is dominated by\n$\\frac{n}{\\tau}$, we obtain a \\emph{constant-time and sub-linear space} LCE\nquery data structure.\n- Even when the input string is not well compressible via Lempel-Ziv 77\nfactorization, we still can obtain a \\emph{constant-time and sub-linear space}\nLCE data structure for suitable $\\tau$ and for $\\sigma \\leq 2^{o(\\log n)}$.\n- The time-space trade-off lower bounds for the LCE problem by Bille et al.\n[J. Discrete Algorithms, 25:42-50, 2014] and by Kosolobov [CoRR,\nabs/1611.02891, 2016] can be \"surpassed\" in some cases with our LCE data\nstructure.\n", "title": "Small-space encoding LCE data structure with constant-time queries" }
null
null
null
null
true
null
15096
null
Default
null
null
null
{ "abstract": " Materials design and development typically takes several decades from the\ninitial discovery to commercialization with the traditional trial and error\ndevelopment approach. With the accumulation of data from both experimental and\ncomputational results, data based machine learning becomes an emerging field in\nmaterials discovery, design and property prediction. This manuscript reviews\nthe history of materials science as a discipline, the most common machine\nlearning methods used in materials science, and specifically how they are used\nin materials discovery, design, synthesis and even failure detection and\nanalysis after materials are deployed in real application. Finally, the\nlimitations of machine learning for application in materials science and\nchallenges in this emerging field are discussed.\n", "title": "Machine learning application in the life time of materials" }
null
null
null
null
true
null
15097
null
Default
null
null
null
{ "abstract": " We propose a unified framework to speed up the existing stochastic matrix\nfactorization (SMF) algorithms via variance reduction. Our framework is general\nand it subsumes several well-known SMF formulations in the literature. We\nperform a non-asymptotic convergence analysis of our framework and derive\ncomputational and sample complexities for our algorithm to converge to an\n$\\epsilon$-stationary point in expectation. In addition, extensive experiments\nfor a wide class of SMF formulations demonstrate that our framework\nconsistently yields faster convergence and a more accurate output dictionary\nvis-à-vis state-of-the-art frameworks.\n", "title": "A Unified Framework for Stochastic Matrix Factorization via Variance Reduction" }
null
null
null
null
true
null
15098
null
Default
null
null
null
{ "abstract": " Power demand prediction is vital in power system and delivery engineering\nfields. By efficiently predicting the power demand, we can forecast the total\nenergy to be consumed in a certain city or district. Thus, the exact resources\nrequired to produce the demanded power can be allocated. In this paper, a\nStochastic Gradient Boosting (aka Treeboost) model is used to predict the short\nterm power demand for the Emirate of Sharjah in the United Arab Emirates (UAE).\nResults show that the proposed model gives promising results in comparison to\nthe model used by Sharjah Electricity and Water Authority (SEWA).\n", "title": "Short Term Power Demand Prediction Using Stochastic Gradient Boosting" }
null
null
null
null
true
null
15099
null
Default
null
null
null
{ "abstract": " There has been growing interest in developing accurate models that can also\nbe explained to humans. Unfortunately, if there exist multiple distinct but\naccurate models for some dataset, current machine learning methods are unlikely\nto find them: standard techniques will likely recover a complex model that\ncombines them. In this work, we introduce a way to identify a maximal set of\ndistinct but accurate models for a dataset. We demonstrate empirically that, in\nsituations where the data supports multiple accurate classifiers, we tend to\nrecover simpler, more interpretable classifiers rather than more complex ones.\n", "title": "Learning Qualitatively Diverse and Interpretable Rules for Classification" }
null
null
null
null
true
null
15100
null
Default
null
null