text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1–5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " Intersystem crossing is a radiationless process that can take place in a\nmolecule irradiated by UV-Vis light, thereby playing an important role in many\nenvironmental, biological and technological processes. This paper reviews\ndifferent methods to describe intersystem crossing dynamics, paying attention\nto semiclassical trajectory theories, which are especially interesting because\nthey can be applied to large systems with many degrees of freedom. In\nparticular, a general trajectory surface hopping methodology recently developed\nby the authors, which is able to include non-adiabatic and spin-orbit couplings\nin excited-state dynamics simulations, is explained in detail. This method,\ntermed SHARC, can in principle include any arbitrary coupling, what makes it\ngenerally applicable to photophysical and photochemical problems, also those\nincluding explicit laser fields. A step-by-step derivation of the main\nequations of motion employed in surface hopping based on the fewest-switches\nmethod of Tully, adapted for the inclusion of spin-orbit interactions, is\nprovided. Special emphasis is put on describing the different possible choices\nof the electronic bases in which spin-orbit can be included in surface hopping,\nhighlighting the advantages and inconsistencies of the different approaches.\n",
"title": "A general method to describe intersystem crossing dynamics in trajectory surface hopping"
}
| null | null | null | null | true | null |
10701
| null |
Default
| null | null |
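As orientation for the SHARC abstract above: in Tully's fewest-switches scheme, a hop from the active state $k$ to another state $j$ during a time step $\Delta t$ occurs with a probability that, in its standard textbook form (not necessarily the exact expression the paper derives for the spin-orbit case), reads

$$ g_{k\to j} \;=\; \max\!\left(0,\;\frac{\Delta t}{\rho_{kk}}\left[\frac{2}{\hbar}\,\mathrm{Im}\!\left(\rho_{jk}^{*}H_{jk}\right)\;-\;2\,\mathrm{Re}\!\left(\rho_{jk}^{*}\,\mathbf{v}\cdot\mathbf{d}_{jk}\right)\right]\right), $$

where $\rho$ is the electronic density matrix, $H$ the electronic Hamiltonian (which in SHARC-type schemes also carries the spin-orbit couplings), $\mathbf{v}$ the nuclear velocity, and $\mathbf{d}_{jk}$ the non-adiabatic coupling vector.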
null |
{
"abstract": " Denial of service attacks are especially pertinent to the internet of things\nas devices have less computing power, memory and security mechanisms to defend\nagainst them. The task of mitigating these attacks must therefore be redirected\nfrom the device onto a network monitor. Network intrusion detection systems can\nbe used as an effective and efficient technique in internet of things systems\nto offload computation from the devices and detect denial of service attacks\nbefore they can cause harm. However the solution of implementing a network\nintrusion detection system for internet of things networks is not without\nchallenges due to the variability of these systems and specifically the\ndifficulty in collecting data. We propose a model-hybrid approach to model the\nscale of the internet of things system and effectively train network intrusion\ndetection systems. Through bespoke datasets generated by the model, the IDS is\nable to predict a wide spectrum of real-world attacks, and as demonstrated by\nan experiment construct more predictive datasets at a fraction of the time of\nother more standard techniques.\n",
"title": "Generating Synthetic Data for Real World Detection of DoS Attacks in the IoT"
}
| null | null | null | null | true | null |
10702
| null |
Default
| null | null |
null |
{
"abstract": " This paper reports on a data-driven, interaction-aware motion prediction\napproach for pedestrians in environments cluttered with static obstacles. When\nnavigating in such workspaces shared with humans, robots need accurate motion\npredictions of the surrounding pedestrians. Human navigation behavior is mostly\ninfluenced by their surrounding pedestrians and by the static obstacles in\ntheir vicinity. In this paper we introduce a new model based on Long-Short Term\nMemory (LSTM) neural networks, which is able to learn human motion behavior\nfrom demonstrated data. To the best of our knowledge, this is the first\napproach using LSTMs, that incorporates both static obstacles and surrounding\npedestrians for trajectory forecasting. As part of the model, we introduce a\nnew way of encoding surrounding pedestrians based on a 1d-grid in polar angle\nspace. We evaluate the benefit of interaction-aware motion prediction and the\nadded value of incorporating static obstacles on both simulation and real-world\ndatasets by comparing with state-of-the-art approaches. The results show, that\nour new approach outperforms the other approaches while being very\ncomputationally efficient and that taking into account static obstacles for\nmotion predictions significantly improves the prediction accuracy, especially\nin cluttered environments.\n",
"title": "A Data-driven Model for Interaction-aware Pedestrian Motion Prediction in Object Cluttered Environments"
}
| null | null | null | null | true | null |
10703
| null |
Default
| null | null |
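The "1d-grid in polar angle space" mentioned in the abstract above suggests a simple encoding; the sketch below is a minimal Python guess at such a feature (the bin count, range cut-off and per-bin statistic are assumptions, not the paper's specification).

```python
import numpy as np

def polar_angle_grid(ego_pos, ego_heading, neighbor_positions, n_bins=72, max_range=6.0):
    """Encode surrounding pedestrians on a 1-D grid over polar angle.

    Each angular bin stores the distance to the closest neighbor seen in that
    direction (max_range when the bin is empty). Illustrative only.
    """
    grid = np.full(n_bins, max_range)
    for pos in neighbor_positions:
        offset = np.asarray(pos) - np.asarray(ego_pos)
        dist = np.linalg.norm(offset)
        if dist > max_range:
            continue
        # Angle relative to the ego pedestrian's heading, wrapped to [0, 2*pi).
        angle = (np.arctan2(offset[1], offset[0]) - ego_heading) % (2 * np.pi)
        b = int(angle / (2 * np.pi) * n_bins) % n_bins
        grid[b] = min(grid[b], dist)
    return grid
```

Such a vector can be concatenated with the pedestrian's own state and fed to the LSTM at every time step.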
null |
{
"abstract": " SKIROC2 is an ASIC to readout the silicon pad detectors for the\nelectromagnetic calorimeter in the International Linear Collider.\nCharacteristics of SKIROC2 and the new version of SKIROC2A, packaged with BGA,\nare measured with testboards and charge injection. The results on the\nsignal-to-noise ratio of both trigger and ADC output, threshold tuning\ncapability and timing resolution are presented.\n",
"title": "Performance study of SKIROC2 and SKIROC2A with BGA testboard"
}
| null | null | null | null | true | null |
10704
| null |
Default
| null | null |
null |
{
"abstract": " Dominance by annual plants has traditionally been considered a brief early\nstage of ecological succession preceding inevitable dominance by competitive\nperennials. A more recent, alternative view suggests that interactions between\nannuals and perennials can result in priority effects, causing annual dominance\nto persist if they are initially more common. Such priority effects would\ncomplicate restoration of native perennial grasslands that have been invaded by\nexotic annuals. However, the conditions under which these priority effects\noccur remain unknown. Using a simple simulation model, we show that long-term\n(500 years) priority effects are possible as long as the plants have low\nfecundity and show an establishment-longevity tradeoff, with annuals having\ncompetitive advantage over perennial seedlings. We also show that short-term\n(up to 50 years) priority effects arise solely due to low fitness difference in\ncases where perennials dominate in the long term. These results provide a\ntheoretical basis for predicting when restoration of annual-invaded grasslands\nrequires active removal of annuals and timely reintroduction of perennials.\n",
"title": "Priority effects between annual and perennial plants"
}
| null | null | null | null | true | null |
10705
| null |
Default
| null | null |
null |
{
"abstract": " In systems and synthetic biology, much research has focused on the behavior\nand design of single pathways, while, more recently, experimental efforts have\nfocused on how cross-talk (coupling two or more pathways) or inhibiting\nmolecular function (isolating one part of the pathway) affects systems-level\nbehavior. However, the theory for tackling these larger systems in general has\nlagged behind. Here, we analyze how joining networks (e.g., cross-talk) or\ndecomposing networks (e.g., inhibition or knock-outs) affects three properties\nthat reaction networks may possess---identifiability (recoverability of\nparameter values from data), steady-state invariants (relationships among\nspecies concentrations at steady state, used in model selection), and\nmultistationarity (capacity for multiple steady states, which correspond to\nmultiple cell decisions). Specifically, we prove results that clarify, for a\nnetwork obtained by joining two smaller networks, how properties of the smaller\nnetworks can be inferred from or can imply similar properties of the original\nnetwork. Our proofs use techniques from computational algebraic geometry,\nincluding elimination theory and differential algebra.\n",
"title": "Joining and decomposing reaction networks"
}
| null | null | null | null | true | null |
10706
| null |
Default
| null | null |
null |
{
"abstract": " We resolve the thermal motion of a high-stress silicon nitride nanobeam at\nfrequencies far below its fundamental flexural resonance (3.4 MHz) using\ncavity-enhanced optical interferometry. Over two decades, the displacement\nspectrum is well-modeled by that of a damped harmonic oscillator driven by a\n$1/f$ thermal force, suggesting that the loss angle of the beam material is\nfrequency-independent. The inferred loss angle at 3.4 MHz, $\\phi = 4.5\\cdot\n10^{-6}$, agrees well with the quality factor ($Q$) of the fundamental beam\nmode ($\\phi = Q^{-1}$). In conjunction with $Q$ measurements made on higher\norder flexural modes, and accounting for the mode dependence of stress-induced\nloss dilution, we find that the intrinsic (undiluted) loss angle of the beam\nchanges by less than a factor of 2 between 50 kHz and 50 MHz. We discuss the\nimpact of such \"structural damping\" on experiments in quantum optomechanics, in\nwhich the thermal force acting on a mechanical oscillator coupled to an optical\ncavity is overwhelmed by radiation pressure shot noise. As an illustration, we\nshow that structural damping reduces the bandwidth of ponderomotive squeezing.\n",
"title": "Evidence for structural damping in a high-stress silicon nitride nanobeam and its implications for quantum optomechanics"
}
| null | null | null | null | true | null |
10707
| null |
Default
| null | null |
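Context for the "$1/f$ thermal force" in the abstract above: for an oscillator with a frequency-independent loss angle $\phi$, the fluctuation-dissipation theorem gives the standard (Saulson-style) structural-damping force spectral density

$$ S_F(\omega) \;=\; \frac{4 k_B T\, m\, \omega_0^2\, \phi}{\omega}, $$

which falls as $1/\omega$ instead of being white, as it would be for velocity damping. This is a textbook relation quoted for orientation, not an expression taken from the paper.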
null |
{
"abstract": " Geodesic distance matrices can reveal shape properties that are largely\ninvariant to non-rigid deformations, and thus are often used to analyze and\nrepresent 3-D shapes. However, these matrices grow quadratically with the\nnumber of points. Thus for large point sets it is common to use a low-rank\napproximation to the distance matrix, which fits in memory and can be\nefficiently analyzed using methods such as multidimensional scaling (MDS). In\nthis paper we present a novel sparse method for efficiently representing\ngeodesic distance matrices using biharmonic interpolation. This method exploits\nknowledge of the data manifold to learn a sparse interpolation operator that\napproximates distances using a subset of points. We show that our method is 2x\nfaster and uses 20x less memory than current leading methods for solving MDS on\nlarge point sets, with similar quality. This enables analyses of large point\nsets that were previously infeasible.\n",
"title": "Efficient, sparse representation of manifold distance matrices for classical scaling"
}
| null | null | null | null | true | null |
10708
| null |
Default
| null | null |
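The sparse operator in the abstract above ultimately feeds classical scaling; for reference, dense classical MDS on a small squared-distance matrix (the computation the paper accelerates for large point sets) can be written in a few lines:

```python
import numpy as np

def classical_mds(D_sq, d=2):
    """Classical MDS embedding from a squared-distance matrix (dense baseline)."""
    n = D_sq.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # double-centering matrix
    B = -0.5 * J @ D_sq @ J               # Gram matrix of the embedding
    w, V = np.linalg.eigh(B)
    top = np.argsort(w)[::-1][:d]         # keep the d largest eigenpairs
    return V[:, top] * np.sqrt(np.maximum(w[top], 0.0))
```

The O(n^2) memory of `D_sq` is exactly what motivates the paper's sparse, landmark-based approximation.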
null |
{
"abstract": " In this work, the structural stability and the electronic properties of\nLiNiBO 3 and LiFe x Ni (1-x) BO 3 are studied using first principle\ncalculations based on density functional theory. The calculated structural\nparameters are in good agreement with the available theoretical data. The most\nstable phases of the Fe substituted systems are predicted from the formation\nenergy hull generated using the cluster expansion method. The 66% of Fe\nsubstitution at the Ni site gives the most stable structure among all the Fe\nsubstituted systems. The bonding mechanisms of the considered systems are\ndiscussed based on the density of states (DOS) and charge density plot. The\ndetailed analysis of the stability, electronic structure, and the bonding\nmechanisms suggests that the systems can be a promising cathode material for Li\nion battery applications.\n",
"title": "A First Principle Study on Iron Substituted LiNi(BO3) to use as Cathode Material for Li-ion Batteries"
}
| null | null | null | null | true | null |
10709
| null |
Default
| null | null |
null |
{
"abstract": " We construct Knörrer type equivalences outside of the hypersurface case,\nnamely, between singularity categories of cyclic quotient surface singularities\nand certain finite dimensional local algebras. This generalises Knörrer's\nequivalence for singularities of Dynkin type A (between Krull dimensions $2$\nand $0$) and yields many new equivalences between singularity categories of\nfinite dimensional algebras.\nOur construction uses noncommutative resolutions of singularities, relative\nsingularity categories, and an idea of Hille & Ploog yielding strongly\nquasi-hereditary algebras which we describe explicitly by building on Wemyss's\nwork on reconstruction algebras. Moreover, K-theory gives obstructions to\ngeneralisations of our main result.\n",
"title": "Noncommutative Knörrer type equivalences via noncommutative resolutions of singularities"
}
| null | null | null | null | true | null |
10710
| null |
Default
| null | null |
null |
{
"abstract": " We classify torsion actions of free wreath products of arbitrary compact\nquantum groups and use this to prove that if $\\mathbb{G}$ is a torsion-free\ncompact quantum group satisfying the strong Baum-Connes property, then\n$\\mathbb{G}\\wr_{\\ast}S_{N}^{+}$ also satisfies the strong Baum-Connes property.\nWe then compute the K-theory of free wreath products of classical and quantum\nfree groups by $SO_{q}(3)$.\n",
"title": "Torsion and K-theory for some free wreath products"
}
| null | null |
[
"Mathematics"
] | null | true | null |
10711
| null |
Validated
| null | null |
null |
{
"abstract": " We propose two semiparametric versions of the debiased Lasso procedure for\nthe model $Y_i = X_i\\beta_0 + g_0(Z_i) + \\epsilon_i$, where $\\beta_0$ is high\ndimensional but sparse (exactly or approximately). Both versions are shown to\nhave the same asymptotic normal distribution and do not require the minimal\nsignal condition for statistical inference of any component in $\\beta_0$. Our\nmethod also works when $Z_i$ is high dimensional provided that the function\nclasses $E(X_{ij} |Z_i)$s and $E(Y_i|Z_i)$ belong to exhibit certain sparsity\nfeatures, e.g., a sparse additive decomposition structure. We further develop a\nsimultaneous hypothesis testing procedure based on multiplier bootstrap. Our\ntesting method automatically takes into account of the dependence structure\nwithin the debiased estimates, and allows the number of tested components to be\nexponentially high.\n",
"title": "High Dimensional Inference in Partially Linear Models"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
10712
| null |
Validated
| null | null |
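For orientation alongside the abstract above: in the fully linear case, the debiased Lasso corrects the Lasso estimate $\widehat{\beta}$ by one Newton-type step,

$$ \widehat{b} \;=\; \widehat{\beta} \;+\; \frac{1}{n}\,\widehat{\Theta}\,X^{\top}\!\bigl(Y - X\widehat{\beta}\bigr), $$

where $\widehat{\Theta}$ estimates the inverse population Gram matrix, so that each $\widehat{b}_j$ is asymptotically normal. The paper's semiparametric versions additionally remove the nonparametric component $g_0(Z_i)$ by projecting $X$ and $Y$ on $Z$; the display above is the standard starting point, not the paper's final estimator.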
null |
{
"abstract": " Small-cell deployment in licensed and unlicensed spectrum is considered to be\none of the key approaches to cope with the ongoing wireless data demand\nexplosion. Compared to traditional cellular base stations with large\ntransmission power, small-cells typically have relatively low transmission\npower, which makes them attractive for some spectrum bands that have strict\npower regulations, for example, the 3.5GHz band [1]. In this paper we consider\na heterogeneous wireless network consisting of one or more service providers\n(SPs). Each SP operates in both macro-cells and small-cells, and provides\nservice to two types of users: mobile and fixed. Mobile users can only\nassociate with macro-cells whereas fixed users can connect to either macro- or\nsmall-cells. The SP charges a price per unit rate for each type of service.\nEach SP is given a fixed amount of bandwidth and splits it between macro- and\nsmall-cells. Motivated by bandwidth regulations, such as those for the 3.5Gz\nband, we assume a minimum amount of bandwidth has to be set aside for\nsmall-cells. We study the optimal pricing and bandwidth allocation strategies\nin both monopoly and competitive scenarios. In the monopoly scenario the\nstrategy is unique. In the competitive scenario there exists a unique Nash\nequilibrium, which depends on the regulatory constraints. We also analyze the\nsocial welfare achieved, and compare it to that without the small-cell\nbandwidth constraints. Finally, we discuss implications of our results on the\neffectiveness of the minimum bandwidth constraint on influencing small-cell\ndeployments.\n",
"title": "The Impact of Small-Cell Bandwidth Requirements on Strategic Operators"
}
| null | null | null | null | true | null |
10713
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies the numerical approximation of solution of the Dirichlet\nproblem for the fully nonlinear Monge-Ampere equation. In this approach, we\ntake the advantage of reformulation the Monge-Ampere problem as an optimization\nproblem, to which we associate a well defined functional whose minimum provides\nus with the solution to the Monge-Ampere problem after resolving a Poisson\nproblem by the finite element Galerkin method. We present some numerical\nexamples, for which a good approximation is obtained in 68 iterations.\n",
"title": "Optimisation approach for the Monge-Ampere equation"
}
| null | null | null | null | true | null |
10714
| null |
Default
| null | null |
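The Dirichlet problem referred to in the abstract above is, in its standard form,

$$ \det D^{2}u \;=\; f \quad \text{in } \Omega, \qquad u \;=\; g \quad \text{on } \partial\Omega, $$

with $u$ convex and $f > 0$; the fully nonlinear determinant of the Hessian is what makes a direct Galerkin discretization awkward and motivates the optimization reformulation.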
null |
{
"abstract": " We study the effect of adaptive mesh refinement on a parallel domain\ndecomposition solver of a linear system of algebraic equations. These concepts\nneed to be combined within a parallel adaptive finite element software. A\nprototype implementation is presented for this purpose. It uses adaptive mesh\nrefinement with one level of hanging nodes. Two and three-level versions of the\nBalancing Domain Decomposition based on Constraints (BDDC) method are used to\nsolve the arising system of algebraic equations. The basic concepts are\nrecalled and components necessary for the combination are studied in detail. Of\nparticular interest is the effect of disconnected subdomains, a typical output\nof the employed mesh partitioning based on space-filling curves, on the\nconvergence and solution time of the BDDC method. It is demonstrated using a\nlarge set of experiments that while both refined meshes and disconnected\nsubdomains have a negative effect on the convergence of BDDC, the number of\niterations remains acceptable. In addition, scalability of the three-level BDDC\nsolver remains good on up to a few thousands of processor cores. The largest\npresented problem using adaptive mesh refinement has over 10^9 unknowns and is\nsolved on 2048 cores.\n",
"title": "Coupling parallel adaptive mesh refinement with a nonoverlapping domain decomposition solver"
}
| null | null | null | null | true | null |
10715
| null |
Default
| null | null |
null |
{
"abstract": " Today's artificial assistants are typically prompted to perform tasks through\ndirect, imperative commands such as \\emph{Set a timer} or \\emph{Pick up the\nbox}. However, to progress toward more natural exchanges between humans and\nthese assistants, it is important to understand the way non-imperative\nutterances can indirectly elicit action of an addressee. In this paper, we\ninvestigate command types in the setting of a grounded, collaborative game. We\nfocus on a less understood family of utterances for eliciting agent action,\nlocatives like \\emph{The chair is in the other room}, and demonstrate how these\nutterances indirectly command in specific game state contexts. Our work shows\nthat models with domain-specific grounding can effectively realize the\npragmatic reasoning that is necessary for more robust natural language\ninteraction.\n",
"title": "The Pragmatics of Indirect Commands in Collaborative Discourse"
}
| null | null | null | null | true | null |
10716
| null |
Default
| null | null |
null |
{
"abstract": " In an imaginary conversation with Guido Altarelli, I express my views on the\nstatus of particle physics beyond the Standard Model and its future prospects.\n",
"title": "The Dawn of the Post-Naturalness Era"
}
| null | null | null | null | true | null |
10717
| null |
Default
| null | null |
null |
{
"abstract": " Approximate Bayesian computing is a powerful likelihood-free method that has\ngrown increasingly popular since early applications in population genetics.\nHowever, complications arise in the theoretical justification for Bayesian\ninference conducted from this method with a non-sufficient summary statistic.\nIn this paper, we seek to re-frame approximate Bayesian computing within a\nfrequentist context and justify its performance by standards set on the\nfrequency coverage rate. In doing so, we develop a new computational technique\ncalled approximate confidence distribution computing, yielding theoretical\nsupport for the use of non-sufficient summary statistics in likelihood-free\nmethods. Furthermore, we demonstrate that approximate confidence distribution\ncomputing extends the scope of approximate Bayesian computing to include\ndata-dependent priors without damaging the inferential integrity. This\ndata-dependent prior can be viewed as an initial `distribution estimate' of the\ntarget parameter which is updated with the results of the approximate\nconfidence distribution computing method. A general strategy for constructing\nan appropriate data-dependent prior is also discussed and is shown to often\nincrease the computing speed while maintaining statistical inferential\nguarantees. We supplement the theory with simulation studies illustrating the\nbenefits of the proposed method, namely the potential for broader applications\nand the increased computing speed compared to the standard approximate Bayesian\ncomputing methods.\n",
"title": "An effective likelihood-free approximate computing method with statistical inferential guarantees"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
10718
| null |
Validated
| null | null |
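As a reference point for the abstract above, the plain rejection sampler at the core of approximate Bayesian computing looks as follows; the proposed approximate confidence distribution computing replaces the prior with a data-dependent initial distribution estimate, which this sketch does not implement.

```python
import numpy as np

def rejection_abc(observed_stat, prior_sampler, simulator, summary, eps, n_draws=10000):
    """Plain rejection ABC: keep parameter draws whose simulated summary
    statistic lands within eps of the observed one. Illustrative baseline."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()           # draw from the (initial) distribution
        s = summary(simulator(theta))     # simulate data, summarize
        if np.linalg.norm(s - observed_stat) <= eps:
            accepted.append(theta)
    return np.array(accepted)
```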
null |
{
"abstract": " We study the word and conjugacy problems in lacunary hyperbolic groups\n(briefly, LHG). In particular, we describe a necessary and sufficient condition\nfor decidability of the word problem in LHG. Then, based on the graded\nsmall-cancellation theory of Olshanskii, we develop a general framework which\nallows us to construct lacunary hyperbolic groups with word and conjugacy\nproblems highly controllable and flexible both in terms of computability and\ncomputational complexity.\nAs an application, we show that for any recursively enumerable subset\n$\\mathcal{L} \\subseteq \\mathcal{A}^*$, where $\\mathcal{A}^*$ is the set of\nwords over arbitrarily chosen non-empty finite alphabet $\\mathcal{A}$, there\nexists a lacunary hyperbolic group $G_{\\mathcal{L}}$ such that the membership\nproblem for $ \\mathcal{L}$ is `almost' linear time equivalent to the conjugacy\nproblem in $G_{\\mathcal{L}}$. Moreover, for the mentioned group the word and\nindividual conjugacy problems are decidable in `almost' linear time.\nAnother application is the construction of a lacunary hyperbolic group with\n`almost' linear time word problem and with all the individual conjugacy\nproblems being undecidable except the word problem.\nAs yet another application of the developed framework, we construct infinite\nverbally complete groups and torsion free Tarski monsters, i.e. infinite\ntorsion-free groups all of whose proper subgroups are cyclic, with `almost'\nlinear time word and polynomial time conjugacy problems. These groups are\nconstructed as quotients of arbitrarily given non-elementary torsion-free\nhyperbolic groups and are lacunary hyperbolic.\nFinally, as a consequence of the main results, we answer a few open\nquestions.\n",
"title": "The word and conjugacy problems in lacunary hyperbolic groups"
}
| null | null | null | null | true | null |
10719
| null |
Default
| null | null |
null |
{
"abstract": " Higgs resonance modes in condensed matter systems are generally broad;\nmeaning large decay widths or short relaxation times. This common feature has\nobscured and limited their observation to a select few systems. Contrary to\nthis, the present work predicts that Higgs resonances in magnetic field\ninduced, three-dimensional magnon Bose-condensates have vanishingly small decay\nwidths. Specifically for parameters relating to TlCuCl$_3$, we find an energy\n($\\Delta_H$) to width ($\\Gamma_H$) ratio $\\Delta_H/\\Gamma_H\\sim500$, making\nthis the narrowest predicted Higgs mode in a condensed matter system, some two\norders of magnitude `narrower' than the sharpest condensed matter Higgs\nobserved so far.\n",
"title": "Prediction of ultra-narrow Higgs resonance in magnon Bose-condensates"
}
| null | null | null | null | true | null |
10720
| null |
Default
| null | null |
null |
{
"abstract": " Generic text embeddings are successfully used in a variety of tasks. However,\nthey are often learnt by capturing the co-occurrence structure from pure text\ncorpora, resulting in limitations of their ability to generalize. In this\npaper, we explore models that incorporate visual information into the text\nrepresentation. Based on comprehensive ablation studies, we propose a\nconceptually simple, yet well performing architecture. It outperforms previous\nmultimodal approaches on a set of well established benchmarks. We also improve\nthe state-of-the-art results for image-related text datasets, using orders of\nmagnitude less data.\n",
"title": "Better Text Understanding Through Image-To-Text Transfer"
}
| null | null | null | null | true | null |
10721
| null |
Default
| null | null |
null |
{
"abstract": " Instrumental variable analysis is a widely used method to estimate causal\neffects in the presence of unmeasured confounding. When the instruments,\nexposure and outcome are not measured in the same sample, Angrist and Krueger\n(1992) suggested to use two-sample instrumental variable (TSIV) estimators that\nuse sample moments from an instrument-exposure sample and an instrument-outcome\nsample. However, this method is biased if the two samples are from\nheterogeneous populations so that the distributions of the instruments are\ndifferent. In linear structural equation models, we derive a new class of TSIV\nestimators that are robust to heterogeneous samples under the key assumption\nthat the structural relations in the two samples are the same. The widely used\ntwo-sample two-stage least squares estimator belongs to this class. It is\ngenerally not asymptotically efficient, although we find that it performs\nsimilarly to the optimal TSIV estimator in most practical situations. We then\nattempt to relax the linearity assumption. We find that, unlike one-sample\nanalyses, the TSIV estimator is not robust to misspecified exposure model.\nAdditionally, to nonparametrically identify the magnitude of the causal effect,\nthe noise in the exposure must have the same distributions in the two samples.\nHowever, this assumption is in general untestable because the exposure is not\nobserved in one sample. Nonetheless, we may still identify the sign of the\ncausal effect in the absence of homogeneity of the noise.\n",
"title": "Two-sample instrumental variable analyses using heterogeneous samples"
}
| null | null | null | null | true | null |
10722
| null |
Default
| null | null |
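The two-sample two-stage least squares estimator mentioned in the abstract above can be written explicitly (notation ours): with an instrument-exposure sample $(Z_2, X_2)$ and an instrument-outcome sample $(Z_1, Y_1)$,

$$ \widehat{\gamma} \;=\; \bigl(Z_2^{\top}Z_2\bigr)^{-1}Z_2^{\top}X_2, \qquad \widehat{X}_1 \;=\; Z_1\widehat{\gamma}, \qquad \widehat{\beta}_{\mathrm{TS2SLS}} \;=\; \bigl(\widehat{X}_1^{\top}\widehat{X}_1\bigr)^{-1}\widehat{X}_1^{\top}Y_1, $$

i.e., the first stage is fitted in one sample and the fitted exposures are formed in the other; heterogeneity between the two samples is exactly what biases this plug-in when the instrument distributions differ.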
null |
{
"abstract": " Schubert polynomials are a basis for the polynomial ring that represent\nSchubert classes for the flag manifold. In this paper, we introduce and develop\nseveral new combinatorial models for Schubert polynomials that relate them to\nother known bases including key polynomials and fundamental slide polynomials.\nWe unify these and existing models by giving simple bijections between the\ncombinatorial objects indexing each. In particular, we give a simple bijective\nproof that the balanced tableaux of Edelman and Greene enumerate reduced\nexpressions and a direct combinatorial proof of Kohnert's algorithm for\ncomputing Schubert polynomials. Further, we generalize the insertion algorithm\nof Edelman and Greene to give a bijection between reduced expressions and pairs\nof tableaux of the same key diagram shape and use this to give a simple\nformula, directly in terms of reduced expressions, for the key polynomial\nexpansion of a Schubert polynomial.\n",
"title": "Combinatorial models for Schubert polynomials"
}
| null | null | null | null | true | null |
10723
| null |
Default
| null | null |
null |
{
"abstract": " Since the the first studies of thermodynamics, heat transport has been a\ncrucial element for the understanding of any thermal system. Quantum mechanics\nhas introduced new appealing ingredients for the manipulation of heat currents,\nsuch as the long-range coherence of the superconducting condensate. The latter\nhas been exploited by phase-coherent caloritronics, a young field of\nnanoscience, to realize Josephson heat interferometers, which can control\nelectronic thermal currents as a function of the external magnetic flux. So\nfar, only one output temperature has been modulated, while multi-terminal\ndevices that allow to distribute the heat flux among different reservoirs are\nstill missing. Here, we report the experimental realization of a phase-tunable\nthermal router able to control the heat transferred between two terminals\nresiding at different temperatures. Thanks to the Josephson effect, our\nstructure allows to regulate the thermal gradient between the output electrodes\nuntil reaching its inversion. Together with interferometers, heat diodes and\nthermal memories, the thermal router represents a fundamental step towards the\nthermal conversion of non-linear electronic devices, and the realization of\ncaloritronic logic components.\n",
"title": "Phase-tunable Josephson thermal router"
}
| null | null | null | null | true | null |
10724
| null |
Default
| null | null |
null |
{
"abstract": " In complex, high dimensional and unstructured data it is often difficult to\nextract meaningful patterns. This is especially the case when dealing with\ntextual data. Recent studies in machine learning, information theory and\nnetwork science have developed several novel instruments to extract the\nsemantics of unstructured data, and harness it to build a network of relations.\nSuch approaches serve as an efficient tool for dimensionality reduction and\npattern detection. This paper applies semantic network science to extract\nideological proximity in the international arena, by focusing on the data from\nGeneral Debates in the UN General Assembly on the topics of high salience to\ninternational community. UN General Debate corpus (UNGDC) covers all high-level\ndebates in the UN General Assembly from 1970 to 2014, covering all UN member\nstates. The research proceeds in three main steps. First, Latent Dirichlet\nAllocation (LDA) is used to extract the topics of the UN speeches, and\ntherefore semantic information. Each country is then assigned a vector\nspecifying the exposure to each of the topics identified. This intermediate\noutput is then used in to construct a network of countries based on information\ntheoretical metrics where the links capture similar vectorial patterns in the\ntopic distributions. Topology of the networks is then analyzed through network\nproperties like density, path length and clustering. Finally, we identify\nspecific topological features of our networks using the map equation framework\nto detect communities in our networks of countries.\n",
"title": "Topology Analysis of International Networks Based on Debates in the United Nations"
}
| null | null | null | null | true | null |
10725
| null |
Default
| null | null |
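Step one of the pipeline described above (topic extraction and per-country topic vectors) can be sketched with standard tooling; the vocabulary filters and topic count below are assumptions, not the paper's settings.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def speech_topic_vectors(speeches, n_topics=20):
    """Fit LDA on a list of speech texts and return the document-topic matrix,
    whose rows can be aggregated per country into topic-exposure vectors."""
    vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=5)
    counts = vectorizer.fit_transform(speeches)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    return lda.fit_transform(counts)   # shape: (n_speeches, n_topics)
```

The resulting vectors would then be compared with an information-theoretic similarity to build the country network analyzed in the later steps.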
null |
{
"abstract": " We consider a numerical approach for the incompressible surface Navier-Stokes\nequation on surfaces with arbitrary genus $g(\\mathcal{S})$. The approach is\nbased on a reformulation of the equation in Cartesian coordinates of the\nembedding $\\mathbb{R}^3$, penalization of the normal component, a Chorin\nprojection method and discretization in space by surface finite elements for\neach component. The approach thus requires only standard ingredients which most\nfinite element implementations can offer. We compare computational results with\ndiscrete exterior calculus (DEC) simulations on a torus and demonstrate the\ninterplay of the flow field with the topology by showing realizations of the\nPoincaré-Hopf theorem on $n$-tori.\n",
"title": "Solving the incompressible surface Navier-Stokes equation by surface finite elements"
}
| null | null | null | null | true | null |
10726
| null |
Default
| null | null |
null |
{
"abstract": " We propose a machine-learning method for evaluating the potential barrier\ngoverning atomic transport based on the preferential selection of dominant\npoints for the atomic transport. The proposed method generates numerous random\nsamples of the entire potential energy surface (PES) from a probabilistic\nGaussian process model of the PES, which enables defining the likelihood of the\ndominant points. The robustness and efficiency of the method are demonstrated\non a dozen model cases for proton diffusion in oxides, in comparison with a\nconventional nudge elastic band method.\n",
"title": "Exploring a potential energy surface by machine learning for characterizing atomic transport"
}
| null | null | null | null | true | null |
10727
| null |
Default
| null | null |
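The abstract above describes sampling a Gaussian-process model of the PES to score candidate points; a generic active-learning sketch of that idea (the kernel, grid and scoring rule are assumptions, not the authors' implementation) is:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_point_to_evaluate(X_train, y_train, X_grid, n_samples=200):
    """Score grid points along a candidate diffusion path by how often they
    host the path maximum (the barrier top) in posterior PES samples."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
    gp.fit(X_train, y_train)                           # evaluated configurations
    draws = gp.sample_y(X_grid, n_samples=n_samples)   # (n_grid, n_samples)
    counts = np.bincount(draws.argmax(axis=0), minlength=len(X_grid))
    return X_grid[counts.argmax()]                     # most likely barrier location
```

Evaluating the returned point with the expensive first-principles code and refitting the GP repeats until the barrier estimate stabilizes.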
null |
{
"abstract": " In this article, we consider products of random walks on finite groups with\nmoderate growth and discuss their cutoffs in the total variation. Based on\nseveral comparison techniques, we are able to identify the total variation\ncutoff of discrete time lazy random walks with the Hellinger distance cutoff of\ncontinuous time random walks. Along with the cutoff criterion for Laplace\ntransforms, we derive a series of equivalent conditions on the existence of\ncutoffs, including the existence of pre-cutoffs, Peres' product condition and a\nformula generated by the graph diameters. For illustration, we consider\nproducts of Heisenberg groups and randomized products of finite cycles.\n",
"title": "Products of random walks on finite groups with moderate growth"
}
| null | null | null | null | true | null |
10728
| null |
Default
| null | null |
null |
{
"abstract": " Dantzig selector (DS) and LASSO problems have attracted plenty of attention\nin statistical learning, sparse data recovery and mathematical optimization. In\nthis paper, we provide a theoretical analysis of the sparse recovery stability\nof these optimization problems in more general settings and from a new\nperspective. We establish recovery error bounds for these optimization problems\nunder a mild assumption called weak range space property of a transposed design\nmatrix. This assumption is less restrictive than the well known sparse recovery\nconditions such as restricted isometry property (RIP), null space property\n(NSP) or mutual coherence. In fact, our analysis indicates that this assumption\nis tight and cannot be relaxed for the standard DS problems in order to\nmaintain their sparse recovery stability. As a result, a series of new\nstability results for DS and LASSO have been established under various matrix\nproperties, including the RIP with constant $\\delta_{2k}< 1/\\sqrt{2}$ and the\n(constant-free) standard NSP of order $k.$ We prove that these matrix\nproperties can yield an identical recovery error bound for DS and LASSO with\nstability coefficients being measured by the so-called Robinson's constant,\ninstead of the conventional RIP or NSP constant. To our knowledge, this is the\nfirst time that the stability results with such a unified feature are\nestablished for DS and LASSO problems. Different from the standard analysis in\nthis area of research, our analysis is carried out deterministically, and the\nkey analytic tools used in our analysis include the error bound of linear\nsystems due to Hoffman and Robinson and polytope approximation of symmetric\nconvex bodies due to Barvinok.\n",
"title": "A Theoretical Analysis of Sparse Recovery Stability of Dantzig Selector and LASSO"
}
| null | null | null | null | true | null |
10729
| null |
Default
| null | null |
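For reference, the two optimization problems analyzed in the abstract above are, in standard form,

$$ \text{(DS)} \quad \min_{x}\ \|x\|_{1} \ \ \text{s.t.} \ \ \bigl\|A^{\top}(Ax-y)\bigr\|_{\infty} \le \lambda, \qquad \text{(LASSO)} \quad \min_{x}\ \tfrac{1}{2}\|Ax-y\|_{2}^{2} + \lambda\|x\|_{1}. $$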
null |
{
"abstract": " Design of adaptive algorithms for simultaneous regulation and estimation of\nMIMO linear dynamical systems is a canonical reinforcement learning problem.\nEfficient policies whose regret (i.e. increase in the cost due to uncertainty)\nscales at a square-root rate of time have been studied extensively in the\nrecent literature. Nevertheless, existing strategies are computationally\nintractable and require a priori knowledge of key system parameters. The only\nexception is a randomized Greedy regulator, for which asymptotic regret bounds\nhave been recently established. However, randomized Greedy leads to probable\nfluctuations in the trajectory of the system, which renders its finite time\nregret suboptimal.\nThis work addresses the above issues by designing policies that utilize input\nsignals perturbations. We show that perturbed Greedy guarantees non-asymptotic\nregret bounds of (nearly) square-root magnitude w.r.t. time. More generally, we\nestablish high probability bounds on both the regret and the learning accuracy\nunder arbitrary input perturbations. The settings where Greedy attains the\ninformation theoretic lower bound of logarithmic regret are also discussed. To\nobtain the results, state-of-the-art tools from martingale theory together with\nthe recently introduced method of policy decomposition are leveraged. Beside\nadaptive regulators, analysis of input perturbations captures key applications\nincluding remote sensing and distributed control.\n",
"title": "Input Perturbations for Adaptive Regulation and Learning"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10730
| null |
Validated
| null | null |
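A heavily simplified sketch of the input-perturbation idea in the abstract above: act greedily on the current parameter estimate and add exploratory noise to the input. The feedback structure and decay schedule below are illustrative assumptions, not the paper's policy.

```python
import numpy as np

def perturbed_greedy_input(K_hat, x, t, sigma0=1.0, decay=0.25):
    """Certainty-equivalent feedback plus decaying Gaussian input perturbation.

    K_hat: feedback gain computed from the current system estimate.
    x: current state; t: time step (the perturbation shrinks as t grows).
    """
    noise = sigma0 * (t + 1) ** (-decay) * np.random.randn(K_hat.shape[0])
    return K_hat @ x + noise
```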
null |
{
"abstract": " The Birkhoff conjecture says that the boundary of a strictly convex\nintegrable billiard table is necessarily an ellipse. In this article, we\nconsider a stronger notion of integrability, namely integrability close to the\nboundary, and prove a local version of this conjecture: a small perturbation of\nan ellipse of small eccentricity which preserves integrability near the\nboundary, is itself an ellipse. This extends the result in [1], where\nintegrability was assumed on a larger set. In particular, it shows that (local)\nintegrability near the boundary implies global integrability. One of the\ncrucial ideas in the proof consists in analyzing Taylor expansion of the\ncorresponding action-angle coordinates with respect to the eccentricity\nparameter, deriving and studying higher order conditions for the preservation\nof integrable rational caustics.\n",
"title": "Nearly circular domains which are integrable close to the boundary are ellipses"
}
| null | null | null | null | true | null |
10731
| null |
Default
| null | null |
null |
{
"abstract": " Pedestrian crowds often include social groups, i.e. pedestrians that walk\ntogether because of social relationships. They show characteristic\nconfigurations and influence the dynamics of the entire crowd. In order to\ninvestigate the impact of social groups on evacuations we performed an\nempirical study with pupils. Several evacuation runs with groups of different\nsizes and different interactions were performed. New group parameters are\nintroduced which allow to describe the dynamics of the groups and the\nconfiguration of the group members quantitatively. The analysis shows a\npossible decrease of evacuation times for large groups due to self-ordering\neffects. Social groups can be approximated as ellipses that orientate along\ntheir direction of motion. Furthermore, explicitly cooperative behaviour among\ngroup members leads to a stronger aggregation of group members and an\nintermittent way of evacuation.\n",
"title": "Empirical study on social groups in pedestrian evacuation dynamics"
}
| null | null | null | null | true | null |
10732
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we derive upper and lower bounds as well as a simple\nclosed-form approximation for the capacity of the continuous-time, bandlimited,\nadditive white Gaussian noise channel in a three-dimensional free-space\nelectromagnetic propagation environment subject to constraints on the total\neffective antenna aperture area of the link and a total transmitter power\nconstraint. We assume that the communication range is much larger than the\nradius of the sphere containing the antennas at both ends of the link, and we\nshow that, in general, the capacity can only be achieved by transmitting\nmultiple spatially-multiplexed data streams simultaneously over the channel.\nFurthermore, the lower bound on capacity can be approached asymptotically by\ntransmitting the data streams between a pair of physically-realizable\ndistributed antenna arrays at either end of the link. A consequence of this\nresult is that, in general, communication at close to the maximum achievable\ndata rate on a deep-space communication link can be achieved in practice if and\nonly if the communication system utilizes spatial multiplexing over a\ndistributed MIMO antenna array. Such an approach to deep-space communication\ndoes not appear to be envisioned currently by any of the international space\nagencies or any commercial space companies. A second consequence is that the\ncapacity of a long-range free-space communication link, if properly utilized,\ngrows asymptotically as a function of the square root of the received SNR\nrather than only logarithmically in the received SNR.\n",
"title": "Capacity of the Aperture-Constrained AWGN Free-Space Communication Channel"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10733
| null |
Validated
| null | null |
null |
{
"abstract": " We discuss the quasiparticle entropy and heat capacity of a dirty\nsuperconductor-normal metal-superconductor junction. In the case of short\njunctions, the inverse proximity effect extending in the superconducting banks\nplays a crucial role in determining the thermodynamic quantities. In this case,\ncommonly used approximations can violate thermodynamic relations between\nsupercurrent and quasiparticle entropy. We provide analytical and numerical\nresults as a function of different geometrical parameters. Quantitative\nestimates for the heat capacity can be relevant for the design of caloritronic\ndevices or radiation sensor applications.\n",
"title": "Quasiparticle entropy in superconductor/normal metal/superconductor proximity junctions in the diffusive limit"
}
| null | null | null | null | true | null |
10734
| null |
Default
| null | null |
null |
{
"abstract": " Wide-field high precision photometric surveys such as Kepler have produced\nreams of data suitable for investigating stellar magnetic activity of cooler\nstars. Starspot activity produces quasi-sinusoidal light curves whose phase and\namplitude vary as active regions grow and decay over time. Here we investigate,\nfirstly, whether there is a correlation between the size of starspots - assumed\nto be related to the amplitude of the sinusoid - and their decay timescale and,\nsecondly, whether any such correlation depends on the stellar effective\ntemperature. To determine this, we computed the autocorrelation functions of\nthe light curves of samples of stars from Kepler and fitted them with apodised\nperiodic functions. The light curve amplitudes, representing spot size were\nmeasured from the root-mean-squared scatter of the normalised light curves. We\nused a Monte Carlo Markov Chain to measure the periods and decay timescales of\nthe light curves. The results show a correlation between the decay time of\nstarspots and their inferred size. The decay time also depends strongly on the\ntemperature of the star. Cooler stars have spots that last much longer, in\nparticular for stars with longer rotational periods. This is consistent with\ncurrent theories of diffusive mechanisms causing starspot decay. We also find\nthat the Sun is not unusually quiet for its spectral type - stars with\nsolar-type rotation periods and temperatures tend to have (comparatively)\nsmaller starspots than stars with mid-G or later spectral types.\n",
"title": "A Kepler Study of Starspot Lifetimes with Respect to Light Curve Amplitude and Spectral Type"
}
| null | null | null | null | true | null |
10735
| null |
Default
| null | null |
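The "apodised periodic functions" fitted to the light-curve autocorrelation in the abstract above are commonly parametrised as an exponentially decaying cosine; the sketch below uses a simple least-squares fit where the paper uses MCMC, and the functional form is a standard choice rather than a quote from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def apodised_cosine(lag, amp, period, tau_decay, offset):
    """Decaying cosine model of a spotted star's light-curve ACF."""
    return amp * np.exp(-lag / tau_decay) * np.cos(2 * np.pi * lag / period) + offset

def fit_acf(lags, acf, p0):
    """Return amplitude, rotation period and spot decay timescale from an ACF fit."""
    params, _ = curve_fit(apodised_cosine, lags, acf, p0=p0)
    return dict(zip(["amp", "period", "tau_decay", "offset"], params))
```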
null |
{
"abstract": " A plasmon-assisted channeling acceleration can be realized with a large\nchannel, possibly at the nanometer scale. Carbon nanotubes (CNTs) are the most\ntypical example of nano-channels that can confine a large number of channeled\nparticles in a photon-plasmon coupling condition. This paper presents a\ntheoretical and numerical study on the concept of high-field charge\nacceleration driven by photo-excited Luttinger-liquid plasmons (LLP) in a\nnanotube. An analytic description of the plasmon-assisted laser acceleration is\ndetailed with practical acceleration parameters, in particular with\nspecifications of a typical tabletop femtosecond laser system. The maximally\nachievable acceleration gradients and energy gains within dephasing lengths and\nCNT lengths are discussed with respect to laser-incident angles and CNT-filling\nratios.\n",
"title": "Plasmon-Driven Acceleration in a Photo-Excited Nanotube"
}
| null | null | null | null | true | null |
10736
| null |
Default
| null | null |
null |
{
"abstract": " Formal verification techniques are widely used for detecting design flaws in\nsoftware systems. Formal verification can be done by transforming an already\nimplemented source code to a formal model and attempting to prove certain\nproperties of the model (e.g. that no erroneous state can occur during\nexecution). Unfortunately, transformations from source code to a formal model\noften yield large and complex models, making the verification process\ninefficient and costly. In order to reduce the size of the resulting model,\noptimization transformations can be used. Such optimizations include common\nalgorithms known from compiler design and different program slicing techniques.\nOur paper describes a framework for transforming C programs to a formal model,\nenhanced by various optimizations for size reduction. We evaluate and compare\nseveral optimization algorithms regarding their effect on the size of the model\nand the efficiency of the verification. Results show that different\noptimizations are more suitable for certain models, justifying the need for a\nframework that includes several algorithms.\n",
"title": "Towards Evaluating Size Reduction Techniques for Software Model Checking"
}
| null | null | null | null | true | null |
10737
| null |
Default
| null | null |
null |
{
"abstract": " The underpotential deposition of transition metal ions is a critical step in\nmany electrosynthetic approaches. While underpotential deposition has been\nintensively studied at the atomic level, first-principles calculations in\nvacuum can strongly underestimate the stability of underpotentially deposited\nmetals. It has been shown recently that the consideration of co-adsorbed anions\ncan deliver more reliable descriptions of underpotential deposition reactions;\nhowever, the influence of additional key environmental factors such as the\nelectrification of the interface under applied voltage and the activities of\nthe ions in solution have yet to be investigated. In this work, copper\nunderpotential deposition on gold is studied under realistic electrochemical\nconditions using a quantum-continuum model of the electrochemical interface. We\nreport here on the influence of surface electrification, concentration effects,\nand anion co-adsorption on the stability of the copper underpotential\ndeposition layer on the gold (100) surface.\n",
"title": "Quantum-continuum simulation of underpotential deposition at electrified metal-solution interfaces"
}
| null | null | null | null | true | null |
10738
| null |
Default
| null | null |
null |
{
"abstract": " Many real world tasks such as reasoning and physical interaction require\nidentification and manipulation of conceptual entities. A first step towards\nsolving these tasks is the automated discovery of distributed symbol-like\nrepresentations. In this paper, we explicitly formalize this problem as\ninference in a spatial mixture model where each component is parametrized by a\nneural network. Based on the Expectation Maximization framework we then derive\na differentiable clustering method that simultaneously learns how to group and\nrepresent individual entities. We evaluate our method on the (sequential)\nperceptual grouping task and find that it is able to accurately recover the\nconstituent objects. We demonstrate that the learned representations are useful\nfor next-step prediction.\n",
"title": "Neural Expectation Maximization"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
10739
| null |
Validated
| null | null |
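A toy numerical version of the E/M alternation described in the abstract above, with the neural decoder replaced by its per-component pixel predictions (unit-variance Gaussian likelihoods and the step size are assumptions; the real method backpropagates through a network):

```python
import numpy as np

def neural_em_step(x, mu):
    """One conceptual N-EM iteration on a flattened image x.

    mu: (K, D) per-component reconstructions (stand-ins for decoder outputs).
    Returns updated reconstructions and soft pixel-to-component assignments.
    """
    log_lik = -0.5 * (x[None, :] - mu) ** 2          # (K, D) Gaussian log-likelihoods
    gamma = np.exp(log_lik - log_lik.max(axis=0))    # E-step: responsibilities
    gamma /= gamma.sum(axis=0, keepdims=True)
    mu = mu + 0.5 * gamma * (x[None, :] - mu)        # M-step, gradient flavor
    return mu, gamma
```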
null |
{
"abstract": " Usage of online textual media is steadily increasing. Daily, more and more\nnews stories, blog posts and scientific articles are added to the online\nvolumes. These are all freely accessible and have been employed extensively in\nmultiple research areas, e.g. automatic text summarization, information\nretrieval, information extraction, etc. Meanwhile, online debate forums have\nrecently become popular, but have remained largely unexplored. For this reason,\nthere are no sufficient resources of annotated debate data available for\nconducting research in this genre. In this paper, we collected and annotated\ndebate data for an automatic summarization task. Similar to extractive gold\nstandard summary generation our data contains sentences worthy to include into\na summary. Five human annotators performed this task. Inter-annotator\nagreement, based on semantic similarity, is 36% for Cohen's kappa and 48% for\nKrippendorff's alpha. Moreover, we also implement an extractive summarization\nsystem for online debates and discuss prominent features for the task of\nsummarizing online debate data automatically.\n",
"title": "Gold Standard Online Debates Summaries and First Experiments Towards Automatic Summarization of Online Debate Data"
}
| null | null | null | null | true | null |
10740
| null |
Default
| null | null |
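The agreement figures quoted above use Cohen's kappa, which corrects raw agreement for chance:

$$ \kappa \;=\; \frac{p_o - p_e}{1 - p_e}, $$

where $p_o$ is the observed agreement between two annotators and $p_e$ the agreement expected by chance; Krippendorff's alpha generalizes the same idea to multiple annotators.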
null |
{
"abstract": " We show that Entropy-SGD (Chaudhari et al., 2017), when viewed as a learning\nalgorithm, optimizes a PAC-Bayes bound on the risk of a Gibbs (posterior)\nclassifier, i.e., a randomized classifier obtained by a risk-sensitive\nperturbation of the weights of a learned classifier. Entropy-SGD works by\noptimizing the bound's prior, violating the hypothesis of the PAC-Bayes theorem\nthat the prior is chosen independently of the data. Indeed, available\nimplementations of Entropy-SGD rapidly obtain zero training error on random\nlabels and the same holds of the Gibbs posterior. In order to obtain a valid\ngeneralization bound, we rely on a result showing that data-dependent priors\nobtained by stochastic gradient Langevin dynamics (SGLD) yield valid PAC-Bayes\nbounds provided the target distribution of SGLD is $\\epsilon$-differentially\nprivate. We observe that test error on MNIST and CIFAR10 falls within the\n(empirically nonvacuous) risk bounds computed under the assumption that SGLD\nreaches stationarity. In particular, Entropy-SGLD can be configured to yield\nrelatively tight generalization bounds and still fit real labels, although\nthese same settings do not obtain state-of-the-art performance.\n",
"title": "Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors"
}
| null | null | null | null | true | null |
10741
| null |
Default
| null | null |
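For context, a standard PAC-Bayes bound of the kind discussed above states that, with probability at least $1-\delta$ over $n$ training samples,

$$ \mathrm{kl}\!\left(\widehat{L}(Q)\,\middle\|\,L(Q)\right) \;\le\; \frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{n}, $$

where $\widehat{L}(Q)$ and $L(Q)$ are the empirical and true risks of the Gibbs classifier $Q$ and $P$ is a prior fixed before seeing the data. Entropy-SGD effectively tunes $P$ on the data, which is exactly the violation the paper repairs via differentially private, SGLD-based priors. (This is the Maurer/Seeger form of the bound, quoted for orientation.)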
null |
{
"abstract": " Learning sophisticated feature interactions behind user behaviors is critical\nin maximizing CTR for recommender systems. Despite great progress, existing\nmethods seem to have a strong bias towards low- or high-order interactions, or\nrequire expertise feature engineering. In this paper, we show that it is\npossible to derive an end-to-end learning model that emphasizes both low- and\nhigh-order feature interactions. The proposed model, DeepFM, combines the power\nof factorization machines for recommendation and deep learning for feature\nlearning in a new neural network architecture. Compared to the latest Wide \\&\nDeep model from Google, DeepFM has a shared input to its \"wide\" and \"deep\"\nparts, with no need of feature engineering besides raw features. Comprehensive\nexperiments are conducted to demonstrate the effectiveness and efficiency of\nDeepFM over the existing models for CTR prediction, on both benchmark data and\ncommercial data.\n",
"title": "DeepFM: A Factorization-Machine based Neural Network for CTR Prediction"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10742
| null |
Validated
| null | null |
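A minimal PyTorch-style sketch of the architecture summarized above: an FM part and an MLP part reading the same embedding table, with their logits summed. Field handling, layer sizes and the use of globally offset feature ids are implementation assumptions.

```python
import torch
import torch.nn as nn

class DeepFMSketch(nn.Module):
    """Shared-embedding FM + deep network for CTR prediction (illustrative)."""
    def __init__(self, total_features, n_fields, k=8, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(total_features, k)    # shared by FM and deep parts
        self.linear = nn.Embedding(total_features, 1)   # first-order weights
        self.mlp = nn.Sequential(
            nn.Linear(n_fields * k, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):            # x: (batch, n_fields) globally offset feature ids
        e = self.embed(x)            # (batch, n_fields, k)
        first = self.linear(x).sum(dim=(1, 2))
        # Second-order FM interactions: 0.5 * ((sum of embeddings)^2 - sum of squares).
        fm = 0.5 * ((e.sum(1) ** 2) - (e ** 2).sum(1)).sum(1)
        deep = self.mlp(e.flatten(1)).squeeze(1)
        return torch.sigmoid(first + fm + deep)
```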
null |
{
"abstract": " We investigate the stability of the many-body localized (MBL) phase for a\nsystem in contact with a single ergodic grain, modelling a Griffiths region\nwith low disorder. Our numerical analysis provides evidence that even a small\nergodic grain consisting of only 3 qubits can delocalize a localized chain, as\nsoon as the localization length exceeds a critical value separating localized\nand extended regimes of the whole system. We present a simple theory,\nconsistent with the arguments in [Phys. Rev. B 95, 155129 (2017)], that assumes\na system to be locally ergodic unless the local relaxation time, determined by\nFermi's Golden Rule, is larger than the inverse level spacing. This theory\npredicts a critical value for the localization length that is perfectly\nconsistent with our numerical calculations. We analyze in detail the behavior\nof local operators inside and outside the ergodic grain, and find excellent\nagreement of numerics and theory.\n",
"title": "How a small quantum bath can thermalize long localized chains"
}
| null | null |
[
"Physics"
] | null | true | null |
10743
| null |
Validated
| null | null |
null |
{
"abstract": " Almost all EEG-based brain-computer interfaces (BCIs) need some labeled\nsubject-specific data to calibrate a new subject, as neural responses are\ndifferent across subjects to even the same stimulus. So, a major challenge in\ndeveloping high-performance and user-friendly BCIs is to cope with such\nindividual differences so that the calibration can be reduced or even\ncompletely eliminated. This paper focuses on the latter. More specifically, we\nconsider an offline application scenario, in which we have unlabeled EEG trials\nfrom a new subject, and would like to accurately label them by leveraging\nauxiliary labeled EEG trials from other subjects in the same task. To\naccommodate the individual differences, we propose a novel unsupervised\napproach to align the EEG trials from different subjects in the Euclidean space\nto make them more consistent. It has three desirable properties: 1) the aligned\ntrial lie in the Euclidean space, which can be used by any Euclidean space\nsignal processing and machine learning approach; 2) it can be computed very\nefficiently; and, 3) it does not need any labeled trials from the new subject.\nExperiments on motor imagery and event-related potentials demonstrated the\neffectiveness and efficiency of our approach.\n",
"title": "Transfer Learning for Brain-Computer Interfaces: An Euclidean Space Data Alignment Approach"
}
| null | null | null | null | true | null |
10744
| null |
Default
| null | null |
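Consistent with the description above, the alignment whitens each subject's trials with the inverse square root of that subject's mean spatial covariance, so the average covariance becomes the identity for every subject:

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials):
    """Align the EEG trials (each channels x samples) of one subject.

    Whitens with the arithmetic mean of the trials' spatial covariances;
    needs no labels, matching the unsupervised setting in the abstract.
    """
    R = np.mean([X @ X.T / X.shape[1] for X in trials], axis=0)
    R_inv_sqrt = fractional_matrix_power(R, -0.5)
    return [R_inv_sqrt @ X for X in trials]
```

After alignment, trials from different subjects can be pooled and fed to any Euclidean-space classifier.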
null |
{
"abstract": " Conditional on Fourier restriction estimates for elliptic hypersurfaces, we\nprove optimal restriction estimates for polynomial hypersurfaces of revolution\nfor which the defining polynomial has non-negative coefficients. In particular,\nwe obtain uniform--depending only on the dimension and polynomial\ndegree--estimates for restriction with affine surface measure, slightly beyond\nthe bilinear range. The main step in the proof of our linear result is an\n(unconditional) bilinear adjoint restriction estimate for pieces at different\nscales.\n",
"title": "Linear and bilinear restriction to certain rotationally symmetric hypersurfaces"
}
| null | null | null | null | true | null |
10745
| null |
Default
| null | null |
null |
{
"abstract": " When a measurement falls outside the quantization or measurable range, it\nbecomes saturated and cannot be used in classical reconstruction methods. For\nexample, in C-arm angiography systems, which provide projection radiography,\nfluoroscopy, digital subtraction angiography, and are widely used for medical\ndiagnoses and interventions, the limited dynamic range of C-arm flat detectors\nleads to overexposure in some projections during an acquisition, such as\nimaging relatively thin body parts (e.g., the knee). Aiming at overexposure\ncorrection for computed tomography (CT) reconstruction, we in this paper\npropose a mixed one-bit compressive sensing (M1bit-CS) to acquire information\nfrom both regular and saturated measurements. This method is inspired by the\nrecent progress on one-bit compressive sensing, which deals with only sign\nobservations. Its successful applications imply that information carried by\nsaturated measurements is useful to improve recovery quality. For the proposed\nM1bit-CS model, alternating direction methods of multipliers is developed and\nan iterative saturation detection scheme is established. Then we evaluate\nM1bit-CS on one-dimensional signal recovery tasks. In some experiments, the\nperformance of the proposed algorithms on mixed measurements is almost the same\nas recovery on unsaturated ones with the same amount of measurements. Finally,\nwe apply the proposed method to overexposure correction for CT reconstruction\non a phantom and a simulated clinical image. The results are promising, as the\ntypical streaking artifacts and capping artifacts introduced by saturated\nprojection data are effectively reduced, yielding significant error reduction\ncompared with existing algorithms based on extrapolation.\n",
"title": "Mixed one-bit compressive sensing with applications to overexposure correction for CT reconstruction"
}
| null | null | null | null | true | null |
10746
| null |
Default
| null | null |
null |
{
"abstract": " There is an increased interest in building data analytics frameworks with\nadvanced algebraic capabilities both in industry and academia. Many of these\nframeworks, e.g., TensorFlow and BIDMach, implement their compute-intensive\nprimitives in two flavors---as multi-thread routines for multi-core CPUs and as\nhighly-parallel kernels executed on GPU. Stochastic gradient descent (SGD) is\nthe most popular optimization method for model training implemented extensively\non modern data analytics platforms. While the data-intensive properties of SGD\nare well-known, there is an intense debate on which of the many SGD variants is\nbetter in practice. In this paper, we perform a comprehensive study of parallel\nSGD for training generalized linear models. We consider the impact of three\nfactors -- computing architecture (multi-core CPU or GPU), synchronous or\nasynchronous model updates, and data sparsity -- on three measures---hardware\nefficiency, statistical efficiency, and time to convergence. In the process, we\ndesign an optimized asynchronous SGD algorithm for GPU that leverages warp\nshuffling and cache coalescing for data and model access. We draw several\ninteresting findings from our extensive experiments with logistic regression\n(LR) and support vector machines (SVM) on five real datasets. For synchronous\nSGD, GPU always outperforms parallel CPU---they both outperform a sequential\nCPU solution by more than 400X. For asynchronous SGD, parallel CPU is the\nsafest choice while GPU with data replication is better in certain situations.\nThe choice between synchronous GPU and asynchronous CPU depends on the task and\nthe characteristics of the data. As a reference, our best implementation\noutperforms TensorFlow and BIDMach consistently. We hope that our insights\nprovide a useful guide for applying parallel SGD to generalized linear models.\n",
"title": "Stochastic Gradient Descent on Highly-Parallel Architectures"
}
| null | null | null | null | true | null |
10747
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks have enabled progress in a wide variety of applications.\nGrowing the size of the neural network typically results in improved accuracy.\nAs model sizes grow, the memory and compute requirements for training these\nmodels also increases. We introduce a technique to train deep neural networks\nusing half precision floating point numbers. In our technique, weights,\nactivations and gradients are stored in IEEE half-precision format.\nHalf-precision floating numbers have limited numerical range compared to\nsingle-precision numbers. We propose two techniques to handle this loss of\ninformation. Firstly, we recommend maintaining a single-precision copy of the\nweights that accumulates the gradients after each optimizer step. This\nsingle-precision copy is rounded to half-precision format during training.\nSecondly, we propose scaling the loss appropriately to handle the loss of\ninformation with half-precision gradients. We demonstrate that this approach\nworks for a wide variety of models including convolution neural networks,\nrecurrent neural networks and generative adversarial networks. This technique\nworks for large scale models with more than 100 million parameters trained on\nlarge datasets. Using this approach, we can reduce the memory consumption of\ndeep learning models by nearly 2x. In future processors, we can also expect a\nsignificant computation speedup using half-precision hardware units.\n",
"title": "Mixed Precision Training"
}
| null | null | null | null | true | null |
10748
| null |
Default
| null | null |
null |
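The two techniques in the abstract above (id 10748), an FP32 master copy of the weights and loss scaling, fit in a few lines. A hedged sketch of one SGD step; `grad_fn` is a hypothetical stand-in for the model's half-precision forward/backward pass:

```python
import numpy as np

def sgd_step_mixed_precision(w_fp32, grad_fn, x, y, lr=0.01, loss_scale=1024.0):
    """One mixed-precision SGD step: FP16 compute, FP32 master weights.

    grad_fn(w16, x16, y, scale) is assumed to run forward/backward in
    float16 on a loss multiplied by `scale` and return float16 gradients.
    """
    w_fp16 = w_fp32.astype(np.float16)                # half-precision weight copy
    g_fp16 = grad_fn(w_fp16, x.astype(np.float16), y, loss_scale)
    g_fp32 = g_fp16.astype(np.float32) / loss_scale   # unscale in full precision
    return w_fp32 - lr * g_fp32                       # update FP32 master weights
```

Scaling the loss shifts small gradient magnitudes into float16's representable range; the division back happens only after the cast to float32, so the update itself loses no precision.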
{
"abstract": " We investigate the differential equation for the Jacobi-type polynomials\nwhich are orthogonal on the interval $[-1,1]$ with respect to the classical\nJacobi measure and an additional point mass at one endpoint. This scale of\nhigher-order equations was introduced by J. and R. Koekoek in 1999 essentially\nby using special function methods. In this paper, a completely elementary\nrepresentation of the Jacobi-type differential operator of any even order is\ngiven. This enables us to trace the orthogonality relation of the Jacobi-type\npolynomials back to their differential equation. Moreover, we establish a new\nfactorization of the Jacobi-type operator which gives rise to a recurrence\nrelation with respect to the order of the equation.\n",
"title": "An elementary representation of the higher-order Jacobi-type differential equation"
}
| null | null | null | null | true | null |
10749
| null |
Default
| null | null |
null |
{
"abstract": " X-ray absorption spectroscopy measured at the $L$-edge of transition metals\n(TMs) is a powerful element-selective tool providing direct information about\nthe correlation effects in the $3d$ states. The theoretical modeling of the\n$2p\\rightarrow3d$ excitation processes remains to be challenging for\ncontemporary \\textit{ab initio} electronic structure techniques, due to strong\ncore-hole and multiplet effects influencing the spectra. In this work we\npresent a realization of the method combining the density-functional theory\nwith multiplet ligand field theory, proposed in Haverkort et al.\n(this https URL), Phys. Rev. B 85, 165113\n(2012). In this approach a single-impurity Anderson model (SIAM) is\nconstructed, with almost all parameters obtained from first principles, and\nthen solved to obtain the spectra. In our implementation we adopt the language\nof the dynamical mean-field theory and utilize the local density of states and\nthe hybridization function, projected onto TM $3d$ states, in order to\nconstruct the SIAM. The developed computational scheme is applied to calculate\nthe $L$-edge spectra for several TM monoxides. A very good agreement between\nthe theory and experiment is found for all studied systems. The effect of\ncore-hole relaxation, hybridization discretization, possible extensions of the\nmethod as well as its limitations are discussed.\n",
"title": "Theory of $L$-edge spectroscopy of strongly correlated systems"
}
| null | null | null | null | true | null |
10750
| null |
Default
| null | null |
null |
{
"abstract": " Topological states of matter are at the root of some of the most fascinating\nphenomena in condensed matter physics. Here we argue that skyrmions in the\npseudo-spin space related to an emerging SU(2) symmetry enlighten many\nmysterious properties of the pseudogap phase in under-doped cuprates. We detail\nthe role of the SU(2) symmetry in controlling the phase diagram of the\ncuprates, in particular how a cascade of phase transitions explains the arising\nof the pseudogap, superconducting and charge modulation phases seen at low\ntemperature. We specify the structure of the charge modulations inside the\nvortex core below $T_{c}$, as well as in a wide temperature region above\n$T_{c}$, which is a signature of the skyrmion topological structure. We argue\nthat the underlying SU(2) symmetry is the main structure controlling the\nemergent complexity of excitations at the pseudogap scale $T^{*}$. The theory\nyields a gapping of a large part of the anti-nodal region of the Brillouin\nzone, along with $q=0$ phase transitions, of both nematic and loop currents\ncharacters.\n",
"title": "Pseudo-spin Skyrmions in the Phase Diagram of Cuprate Superconductors"
}
| null | null | null | null | true | null |
10751
| null |
Default
| null | null |
null |
{
"abstract": " We consider large scale empirical risk minimization (ERM) problems, where\nboth the problem dimension and variable size is large. In these cases, most\nsecond order methods are infeasible due to the high cost in both computing the\nHessian over all samples and computing its inverse in high dimensions. In this\npaper, we propose a novel adaptive sample size second-order method, which\nreduces the cost of computing the Hessian by solving a sequence of ERM problems\ncorresponding to a subset of samples and lowers the cost of computing the\nHessian inverse using a truncated eigenvalue decomposition. We show that while\nwe geometrically increase the size of the training set at each stage, a single\niteration of the truncated Newton method is sufficient to solve the new ERM\nwithin its statistical accuracy. Moreover, for a large number of samples we are\nallowed to double the size of the training set at each stage, and the proposed\nmethod subsequently reaches the statistical accuracy of the full training set\napproximately after two effective passes. In addition to this theoretical\nresult, we show empirically on a number of well known data sets that the\nproposed truncated adaptive sample size algorithm outperforms stochastic\nalternatives for solving ERM problems.\n",
"title": "Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
10752
| null |
Validated
| null | null |
null |
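One concrete reading of the adaptive-sample-size loop in the record above (id 10752), specialized to L2-regularized logistic regression: geometrically grow the subset and take a single Newton step whose Hessian inverse is truncated to the top-k eigenpairs. A sketch under our own simplifications, not the authors' exact algorithm; hyperparameters are illustrative:

```python
import numpy as np

def truncated_newton_adaptive(X, y, lam=1e-3, n0=128, k=20):
    """Adaptive sample size Newton sketch for logistic regression, y in {0,1}."""
    n, d = X.shape
    w = np.zeros(d)
    m = min(n0, n)
    while True:
        A, b = X[:m], y[:m]
        p = 1.0 / (1.0 + np.exp(-A @ w))
        grad = A.T @ (p - b) / m + lam * w
        H = (A.T * (p * (1 - p))) @ A / m + lam * np.eye(d)
        vals, vecs = np.linalg.eigh(H)
        top = np.argsort(vals)[-k:]                    # truncated eigendecomposition
        H_inv = (vecs[:, top] / vals[top]) @ vecs[:, top].T
        w = w - H_inv @ grad                           # one Newton step per stage
        if m == n:
            return w
        m = min(2 * m, n)                              # double the training set
```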
{
"abstract": " Let $\\Gamma$ be a convex co-compact discrete group of isometries of the\nhyperbolic plane $\\mathbb{H}^2$, and $X=\\Gamma\\backslash \\mathbb{H}^2$ the\nassociated surface. In this paper we investigate the behaviour of resonances of\nthe Laplacian for large degree covers of $X$ given by a finite index normal\nsubgroup of $\\Gamma$. Using various techniques of thermodynamical formalism and\nrepresentation theory, we prove two new existence results of \"sharp non-trivial\nresonances\" close to $\\Re(s)=\\delta_\\Gamma$, both in the large degree limit,\nfor abelian covers and also infinite index congruence subgroups of\n$SL2(\\mathbb{Z})$.\n",
"title": "Large covers and sharp resonances of hyperbolic surfaces"
}
| null | null |
[
"Mathematics"
] | null | true | null |
10753
| null |
Validated
| null | null |
null |
{
"abstract": " This paper investigates the role of tutor feedback in language learning using\ncomputational models. We compare two dominant paradigms in language learning:\ninteractive learning and cross-situational learning - which differ primarily in\nthe role of social feedback such as gaze or pointing. We analyze the\nrelationship between these two paradigms and propose a new mixed paradigm that\ncombines the two paradigms and allows to test algorithms in experiments that\ncombine no feedback and social feedback. To deal with mixed feedback\nexperiments, we develop new algorithms and show how they perform with respect\nto traditional knn and prototype approaches.\n",
"title": "Computational Models of Tutor Feedback in Language Acquisition"
}
| null | null | null | null | true | null |
10754
| null |
Default
| null | null |
null |
{
"abstract": " This paper analyzes the coexistence performance of Wi-Fi and cellular\nnetworks conditioned on non-saturated traffic in the unlicensed spectrum. Under\nthe condition, the time-domain behavior of a cellular small-cell base station\n(SCBS) with a listen-before-talk (LBT) procedure is modeled as a Markov chain,\nand it is combined with a Markov chain which describes the time-domain behavior\nof a Wi-Fi access point. Using the proposed model, this study finds the optimal\ncontention window size of cellular SCBSs in which total throughput of both\nnetworks is maximized while satisfying the required throughput of each network,\nunder the given traffic densities of both networks. This will serve as a\nguideline for cellular operators with respect to performing LBT at cellular\nSCBSs according to the changes of traffic volumes of both networks over time.\n",
"title": "Non-Saturated Throughput Analysis of Coexistence of Wi-Fi and Cellular With Listen-Before-Talk in Unlicensed Spectrum"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10755
| null |
Validated
| null | null |
null |
{
"abstract": " Real-time traffic flow prediction can not only provide travelers with\nreliable traffic information so that it can save people's time, but also assist\nthe traffic management agency to manage traffic system. It can greatly improve\nthe efficiency of the transportation system. Traditional traffic flow\nprediction approaches usually need a large amount of data but still give poor\nperformances. With the development of deep learning, researchers begin to pay\nattention to artificial neural networks (ANNs) such as RNN and LSTM. However,\nthese ANNs are very time-consuming. In our research, we improve the Deep\nResidual Network and build a dynamic model which previous researchers hardly\nuse. We firstly integrate the input and output of the $i^{th}$ layer to the\ninput of the $i+1^{th}$ layer and prove that each layer will fit a simpler\nfunction so that the error rate will be much smaller. Then, we use the concept\nof online learning in our model to update pre-trained model during prediction.\nOur result shows that our model has higher accuracy than some state-of-the-art\nmodels. In addition, our dynamic model can perform better in practical\napplications.\n",
"title": "A Dynamic Model for Traffic Flow Prediction Using Improved DRN"
}
| null | null | null | null | true | null |
10756
| null |
Default
| null | null |
null |
{
"abstract": " Since its introduction in 2000, the locally linear embedding (LLE) has been\nwidely applied in data science. We provide an asymptotical analysis of the LLE\nunder the manifold setup. We show that for the general manifold, asymptotically\nwe may not obtain the Laplace-Beltrami operator, and the result may depend on\nthe non-uniform sampling, unless a correct regularization is chosen. We also\nderive the corresponding kernel function, which indicates that the LLE is not a\nMarkov process. A comparison with the other commonly applied nonlinear\nalgorithms, particularly the diffusion map, is provided, and its relationship\nwith the locally linear regression is also discussed.\n",
"title": "Think globally, fit locally under the Manifold Setup: Asymptotic Analysis of Locally Linear Embedding"
}
| null | null | null | null | true | null |
10757
| null |
Default
| null | null |
null |
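The dependence on regularization highlighted in the abstract above (id 10757) can be probed directly with scikit-learn's LLE implementation; the `reg` values below are illustrative only:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

X, _ = make_swiss_roll(n_samples=2000, noise=0.05, random_state=0)

# The asymptotic behaviour of LLE (and whether a Laplace-Beltrami-like
# operator emerges) depends on how this regularization is chosen.
for reg in (1e-3, 1e-7):
    lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, reg=reg)
    Y = lle.fit_transform(X)
    print(f"reg={reg:g}  reconstruction error={lle.reconstruction_error_:.3e}")
```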
{
"abstract": " We studied acetylhistidine (AcH), bare or microsolvated with a zinc cation by\nsimulations in isolation. First, a global search for minima of the potential\nenergy surface combining both, empirical and first-principles methods, is\nperformed individually for either one of five possible protonation states.\nComparing the most stable structures between tautomeric forms of negatively\ncharged AcH shows a clear preference for conformers with the neutral imidazole\nring protonated at the N-epsilon-2 atom. When adding a zinc cation to the\nsystem, the situation is reversed and N-delta-1-protonated structures are\nenergetically more favorable. Obtained minima structures then served as basis\nfor a benchmark study to examine the goodness of commonly applied levels of\ntheory, i.e. force fields, semi-empirical methods, density-functional\napproximations (DFA), and wavefunction-based methods with respect to high-level\ncoupled-cluster calculations, i.e. the DLPNO-CCSD(T) method. All tested force\nfields and semi-empirical methods show a poor performance in reproducing the\nenergy hierarchies of conformers, in particular of systems involving the zinc\ncation. Meta-GGA, hybrid, double hybrid DFAs, and the MP2 method are able to\ndescribe the energetics of the reference method within chemical accuracy, i.e.\nwith a mean absolute error of less than 1kcal/mol. Best performance is found\nfor the double hybrid DFA B3LYP+XYG3 with a mean absolute error of 0.7 kcal/mol\nand a maximum error of 1.8 kcal/mol. While MP2 performs similarly as\nB3LYP+XYG3, computational costs, i.e. timings, are increased by a factor of 4\nin comparison due to the large basis sets required for accurate results.\n",
"title": "Relative energetics of acetyl-histidine protomers with and without Zn2+ and a benchmark of energy methods"
}
| null | null | null | null | true | null |
10758
| null |
Default
| null | null |
null |
{
"abstract": " Quantum mechanics is not about 'quantum states': it is about values of\nphysical variables. I give a short fresh presentation and update on the\n$relational$ perspective on the theory, and a comment on its philosophical\nimplications.\n",
"title": "\"Space is blue and birds fly through it\""
}
| null | null | null | null | true | null |
10759
| null |
Default
| null | null |
null |
{
"abstract": " It is known that the set of all correlated equilibria of an n-player\nnon-cooperative game is a convex polytope and includes all the Nash equilibria.\nFurther, the Nash equilibria all lie on the boundary of this polytope. We study\nthe geometry of both these equilibrium notions when the players have cumulative\nprospect theoretic (CPT) preferences. The set of CPT correlated equilibria\nincludes all the CPT Nash equilibria but it need not be a convex polytope. We\nshow that it can, in fact, be disconnected. However, all the CPT Nash\nequilibria continue to lie on its boundary. We also characterize the sets of\nCPT correlated equilibria and CPT Nash equilibria for all 2x2 games.\n",
"title": "On the Geometry of Nash and Correlated Equilibria with Cumulative Prospect Theoretic Preferences"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10760
| null |
Validated
| null | null |
null |
{
"abstract": " It is known that unconfined dust explosions consist of a relatively weak\nprimary (turbulent) deflagrations followed by a devastating secondary\nexplosion. The secondary explosion may propagate with a speed of up to 1000 m/s\nproducing overpressures of over 8-10 atm. Since detonation is the only\nestablished theory that allows a rapid burning producing a high pressure that\ncan be sustained in open areas, the generally accepted view was that the\nmechanism explaining the high rate of combustion in dust explosions is\ndeflagration to detonation transition. In the present work we propose a\ntheoretical substantiation of the alternative propagation mechanism explaining\norigin of the secondary explosion producing the high speeds of combustion and\nhigh overpressures in unconfined dust explosions. We show that clustering of\ndust particles in a turbulent flow gives rise to a significant increase of the\nthermal radiation absorption length ahead of the advancing flame front. This\neffect ensures that clusters of dust particles are exposed to and heated by the\nradiation from hot combustion products of large gaseous explosions sufficiently\nlong time to become multi-point ignition kernels in a large volume ahead of the\nadvancing flame front. The ignition times of fuel-air mixture by the\nradiatively heated clusters of particles is considerably reduced compared to\nthe ignition time by the isolated particle. The radiation-induced multi-point\nignitions of a large volume of fuel-air ahead of the primary flame efficiently\nincrease the total flame area, giving rise to the secondary explosion, which\nresults in high rates of combustion and overpressures required to account for\nthe observed level of overpressures and damages in unconfined dust explosions,\nsuch as e.g. the 2005 Buncefield explosion and several vapor cloud explosions\nof severity similar to that of the Buncefield incident.\n",
"title": "Multipoint Radiation Induced Ignition of Dust Explosions: Turbulent Clustering of Particles and Increased Transparency"
}
| null | null | null | null | true | null |
10761
| null |
Default
| null | null |
null |
{
"abstract": " While both the data volume and heterogeneity of the digital music content is\nhuge, it has become increasingly important and convenient to build a\nrecommendation or search system to facilitate surfacing these content to the\nuser or consumer community. Most of the recommendation models fall into two\nprimary species, collaborative filtering based and content based approaches.\nVariants of instantiations of collaborative filtering approach suffer from the\ncommon issues of so called \"cold start\" and \"long tail\" problems where there is\nnot much user interaction data to reveal user opinions or affinities on the\ncontent and also the distortion towards the popular content. Content-based\napproaches are sometimes limited by the richness of the available content data\nresulting in a heavily biased and coarse recommendation result. In recent\nyears, the deep neural network has enjoyed a great success in large-scale image\nand video recognitions. In this paper, we propose and experiment using deep\nconvolutional neural network to imitate how human brain processes hierarchical\nstructures in the auditory signals, such as music, speech, etc., at various\ntimescales. This approach can be used to discover the latent factor models of\nthe music based upon acoustic hyper-images that are extracted from the raw\naudio waves of music. These latent embeddings can be used either as features to\nfeed to subsequent models, such as collaborative filtering, or to build\nsimilarity metrics between songs, or to classify music based on the labels for\ntraining such as genre, mood, sentiment, etc.\n",
"title": "Modeling of the Latent Embedding of Music using Deep Neural Network"
}
| null | null | null | null | true | null |
10762
| null |
Default
| null | null |
null |
{
"abstract": " We theoretically investigate the generation of intense keV attosecond pulses\nin an orthogonally polarized multicycle midinfrared two-color laser field. It\nis demonstrated that multiple continuum-like humps, which have a spectral width\nof about twenty orders of harmonics and an intensity of about one order higher\nthan adjacent normal harmonic peaks, are generated under proper two-color\ndelays, owing to the reduction of the number of electron-ion recollisions and\nsuppression of inter-half-cycle interference effect of multiple electron\ntrajectories when the long wavelength midinfrared driving field is used. Using\nthe semiclassical trajectory model, we have revealed the two-dimensional\nmanipulation of the electron-ion recollision process, which agrees well with\nthe time frequency analysis. By filtering these humps, intense isolated\nattosecond pulses are directly generated without any phase compensation. Our\nproposal provides a simple technique to generate intense isolated attosecond\npulses with various central photon energies covering the multi-keV spectral\nregime by using multicycle driving pulses with high pump energy in experiment.\n",
"title": "Intense keV isolated attosecond pulse generation by orthogonally polarized multicycle midinfrared two-color laser field"
}
| null | null | null | null | true | null |
10763
| null |
Default
| null | null |
null |
{
"abstract": " We demonstrate the usefulness of adding delay to infinite games with\nquantitative winning conditions. In a delay game, one of the players may delay\nher moves to obtain a lookahead on her opponent's moves. We show that\ndetermining the winner of delay games with winning conditions given by parity\nautomata with costs is EXPTIME-complete and that exponential bounded lookahead\nis both sufficient and in general necessary. Thus, although the parity\ncondition with costs is a quantitative extension of the parity condition, our\nresults show that adding costs does not increase the complexity of delay games\nwith parity conditions.\nFurthermore, we study a new phenomenon that appears in quantitative delay\ngames: lookahead can be traded for the quality of winning strategies and vice\nversa. We determine the extent of this tradeoff. In particular, even the\nsmallest lookahead allows to improve the quality of an optimal strategy from\nthe worst possible value to almost the smallest possible one. Thus, the benefit\nof introducing lookahead is twofold: not only does it allow the delaying player\nto win games she would lose without, but lookahead also allows her to improve\nthe quality of her winning strategies in games she wins even without lookahead.\n",
"title": "Games with Costs and Delays"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10764
| null |
Validated
| null | null |
null |
{
"abstract": " Reachability analysis for hybrid systems is an active area of development and\nhas resulted in many promising prototype tools. Most of these tools allow users\nto express hybrid system as automata with a set of ordinary differential\nequations (ODEs) associated with each state, as well as rules for transitions\nbetween states. Significant effort goes into developing and verifying and\ncorrectly implementing those tools. As such, it is desirable to expand the\nscope of applicability tools of such as far as possible. With this goal, we\nshow how compile-time transformations can be used to extend the basic hybrid\nODE formalism traditionally supported in hybrid reachability tools such as\nSpaceEx or Flow*. The extension supports certain types of partial derivatives\nand equational constraints. These extensions allow users to express, among\nother things, the Euler-Lagrangian equation, and to capture practically\nrelevant constraints that arise naturally in mechanical systems. Achieving this\nlevel of expressiveness requires using a binding time-analysis (BTA), program\ndifferentiation, symbolic Gaussian elimination, and abstract interpretation\nusing interval analysis. Except for BTA, the other components are either\nreadily available or can be easily added to most reachability tools. The paper\ntherefore focuses on presenting both the declarative and algorithmic\nspecifications for the BTA phase, and establishes the soundness of the\nalgorithmic specifications with respect to the declarative one.\n",
"title": "Compile-Time Extensions to Hybrid ODEs"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10765
| null |
Validated
| null | null |
null |
{
"abstract": " The Lasso is biased. Concave penalized least squares estimation (PLSE) takes\nadvantage of signal strength to reduce this bias, leading to sharper error\nbounds in prediction, coefficient estimation and variable selection. For\nprediction and estimation, the bias of the Lasso can be also reduced by taking\na smaller penalty level than what selection consistency requires, but such\nsmaller penalty level depends on the sparsity of the true coefficient vector.\nThe sorted L1 penalized estimation (Slope) was proposed for adaptation to such\nsmaller penalty levels. However, the advantages of concave PLSE and Slope do\nnot subsume each other. We propose sorted concave penalized estimation to\ncombine the advantages of concave and sorted penalizations. We prove that\nsorted concave penalties adaptively choose the smaller penalty level and at the\nsame time benefits from signal strength, especially when a significant\nproportion of signals are stronger than the corresponding adaptively selected\npenalty levels. A local convex approximation, which extends the local linear\nand quadratic approximations to sorted concave penalties, is developed to\nfacilitate the computation of sorted concave PLSE and proven to possess desired\nprediction and estimation error bounds. We carry out a unified treatment of\npenalty functions in a general optimization setting, including the penalty\nlevels and concavity of the above mentioned sorted penalties and mixed\npenalties motivated by Bayesian considerations. Our analysis of prediction and\nestimation errors requires the restricted eigenvalue condition on the design,\nnot beyond, and provides selection consistency under a required minimum signal\nstrength condition in addition. Thus, our results also sharpens existing\nresults on concave PLSE by removing the upper sparse eigenvalue component of\nthe sparse Riesz condition.\n",
"title": "Sorted Concave Penalized Regression"
}
| null | null | null | null | true | null |
10766
| null |
Default
| null | null |
null |
{
"abstract": " The interval subset sum problem (ISSP) is a generalization of the well-known\nsubset sum problem. Given a set of intervals\n$\\left\\{[a_{i,1},a_{i,2}]\\right\\}_{i=1}^n$ and a target integer $T,$ the ISSP\nis to find a set of integers, at most one from each interval, such that their\nsum best approximates the target $T$ but cannot exceed it. In this paper, we\nfirst study the computational complexity of the ISSP. We show that the ISSP is\nrelatively easy to solve compared to the 0-1 Knapsack problem (KP). We also\nidentify several subclasses of the ISSP which are polynomial time solvable\n(with high probability), albeit the problem is generally NP-hard. Then, we\npropose a new fully polynomial time approximation scheme (FPTAS) for solving\nthe general ISSP problem. The time and space complexities of the proposed\nscheme are ${\\cal O}\\left(n \\max\\left\\{1 / \\epsilon,\\log n\\right\\}\\right)$ and\n${\\cal O}\\left(n+1/\\epsilon\\right),$ respectively, where $\\epsilon$ is the\nrelative approximation error. To the best of our knowledge, the proposed scheme\nhas almost the same time complexity but a significantly lower space complexity\ncompared to the best known scheme. Both the correctness and efficiency of the\nproposed scheme are validated by numerical simulations. In particular, the\nproposed scheme successfully solves ISSP instances with $n=100,000$ and\n$\\epsilon=0.1\\%$ within one second.\n",
"title": "A New Fully Polynomial Time Approximation Scheme for the Interval Subset Sum Problem"
}
| null | null | null | null | true | null |
10767
| null |
Default
| null | null |
null |
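For intuition about the problem in the record above (id 10767), a brute-force exact solver is easy to state. It enumerates all reachable sums, so it is exponential in the worst case; a toy reference point only, not the paper's FPTAS:

```python
def issp_exact(intervals, T):
    """Pick at most one integer from each interval so the sum best
    approximates the target T without exceeding it (toy sizes only)."""
    sums = {0}
    for a, b in intervals:
        new = set(sums)                      # skipping the interval is allowed
        for s in sums:
            for v in range(a, min(b, T - s) + 1):
                new.add(s + v)
        sums = new
    return max(sums)

print(issp_exact([(2, 5), (10, 14), (7, 9)], T=20))  # -> 20 (e.g. 3 + 10 + 7)
```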
{
"abstract": " Kernel methods are powerful and flexible approach to solve many problems in\nmachine learning. Due to the pairwise evaluations in kernel methods, the\ncomplexity of kernel computation grows as the data size increases; thus the\napplicability of kernel methods is limited for large scale datasets. Random\nFourier Features (RFF) has been proposed to scale the kernel method for solving\nlarge scale datasets by approximating kernel function using randomized Fourier\nfeatures. While this method proved very popular, still it exists shortcomings\nto be effectively used. As RFF samples the randomized features from a\ndistribution independent of training data, it requires sufficient large number\nof feature expansions to have similar performances to kernelized classifiers,\nand this is proportional to the number samples in the dataset. Thus, reducing\nthe number of feature dimensions is necessary to effectively scale to large\ndatasets. In this paper, we propose a kernel approximation method in a data\ndependent way, coined as Pseudo Random Fourier Features (PRFF) for reducing the\nnumber of feature dimensions and also to improve the prediction performance.\nThe proposed approach is evaluated on classification and regression problems\nand compared with the RFF, orthogonal random features and Nystr{ö}m approach\n",
"title": "Data Dependent Kernel Approximation using Pseudo Random Fourier Features"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
10768
| null |
Validated
| null | null |
null |
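The data-independence that the abstract above (id 10768) criticizes is visible in the classic RFF construction: frequencies are drawn from a fixed Gaussian regardless of the training set. A standard Rahimi-Recht style sketch with illustrative parameters:

```python
import numpy as np

def rff_features(X, n_features=500, gamma=1.0, seed=0):
    """Random Fourier features approximating the RBF kernel
    k(x, y) = exp(-gamma * ||x - y||^2); z(x) @ z(y) =~ k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))  # data-independent
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

X = np.random.default_rng(1).normal(size=(5, 3))
Z = rff_features(X, n_features=20000, gamma=0.5)
print(Z @ Z.T)  # compare with the exact RBF Gram matrix exp(-0.5 * ||xi - xj||^2)
```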
{
"abstract": " This paper focuses on the problem of estimating historical traffic volumes\nbetween sparsely-located traffic sensors, which transportation agencies need to\naccurately compute statewide performance measures. To this end, the paper\nexamines applications of vehicle probe data, automatic traffic recorder counts,\nand neural network models to estimate hourly volumes in the Maryland highway\nnetwork, and proposes a novel approach that combines neural networks with an\nexisting profiling method. On average, the proposed approach yields 24% more\naccurate estimates than volume profiles, which are currently used by\ntransportation agencies across the US to compute statewide performance\nmeasures. The paper also quantifies the value of using vehicle probe data in\nestimating hourly traffic volumes, which provides important managerial insights\nto transportation agencies interested in acquiring this type of data. For\nexample, results show that volumes can be estimated with a mean absolute\npercent error of about 21% at locations where average number of observed probes\nis between 30 and 47 vehicles/hr, which provides a useful guideline for\nassessing the value of probe vehicle data from different vendors.\n",
"title": "Estimating Historical Hourly Traffic Volumes via Machine Learning and Vehicle Probe Data: A Maryland Case Study"
}
| null | null | null | null | true | null |
10769
| null |
Default
| null | null |
null |
{
"abstract": " The present paper proposes a novel method of quantification of the variation\nin biofilm architecture, in correlation with the alteration of growth\nconditions that include, variations of substrate and conditioning layer. The\npolymeric biomaterial serving as substrates are widely used in implants and\nindwelling medical devices, while the plasma proteins serve as the conditioning\nlayer. The present method uses descriptive statistics of FESEM images of\nbiofilms obtained during a variety of growth conditions. We aim to explore here\nthe texture and fractal analysis techniques, to identify the most\ndiscriminatory features which are capable of predicting the difference in\nbiofilm growth conditions. We initially extract some statistical features of\nbiofilm images on bare polymer surfaces, followed by those on the same\nsubstrates adsorbed with two different types of plasma proteins, viz. Bovine\nserum albumin (BSA) and Fibronectin (FN), for two different adsorption times.\nThe present analysis has the potential to act as a futuristic technology for\ndeveloping a computerized monitoring system in hospitals with automated image\nanalysis and feature extraction, which may be used to predict the growth\nprofile of an emerging biofilm on surgical implants or similar medical\napplications.\n",
"title": "Monitoring of Wild Pseudomonas Biofilm Strain Conditions Using Statistical Characterisation of Scanning Electron Microscopy Images"
}
| null | null | null | null | true | null |
10770
| null |
Default
| null | null |
null |
{
"abstract": " We are interested in dynamics of quantum many-body systems under continuous\nobservation, and its physical realizations involving cold atoms in lattices. In\nthe present work we focus on continuous measurement of atomic currents in\nlattice models, including the Hubbard model. We describe a Cavity QED setup,\nwhere measurement of a homodyne current provides a faithful representation of\nthe atomic current as a function of time. We employ the quantum optical\ndescription in terms of a diffusive stochastic Schrödinger equation to follow\nthe time evolution of the atomic system conditional to observing a given\nhomodyne current trajectory, thus accounting for the competition between the\nHamiltonian evolution and measurement back-action. As an illustration, we\ndiscuss minimal models of atomic dynamics and continuous current measurement on\nrings with synthetic gauge fields, involving both real space and synthetic\ndimension lattices (represented by internal atomic states). Finally, by `not\nreading' the current measurements the time evolution of the atomic system is\ngoverned by a master equation, where - depending on the microscopic details of\nour CQED setups - we effectively engineer a current coupling of our system to a\nquantum reservoir. This provides novel scenarios of dissipative dynamics\ngenerating `dark' pure quantum many-body states.\n",
"title": "Continuous Measurement of an Atomic Current"
}
| null | null | null | null | true | null |
10771
| null |
Default
| null | null |
null |
{
"abstract": " In finance, durations between successive transactions are usually modeled by\nthe autoregressive conditional duration model based on a continuous\ndistribution omitting frequent zero values. Zero durations can be caused by\neither split transactions or independent transactions. We propose a discrete\nmodel allowing for excessive zero values based on the zero-inflated negative\nbinomial distribution with score dynamics. We establish the invertibility of\nthe score filter. Additionally, we derive sufficient conditions for the\nconsistency and asymptotic normality of the maximum likelihood of the model\nparameters. In an empirical study of DJIA stocks, we find that split\ntransactions cause on average 63% of zero values. Furthermore, the loss of\ndecimal places in the proposed model is less severe than incorrect treatment of\nzero values in continuous models.\n",
"title": "Zero-Inflated Autoregressive Conditional Duration Model for Discrete Trade Durations with Excessive Zeros"
}
| null | null | null | null | true | null |
10772
| null |
Default
| null | null |
null |
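The mixture in the record above (id 10772) has a simple pmf: a point mass at zero (split transactions) blended with an ordinary negative binomial component. A sketch using scipy's parameterization; the score-driven dynamics of the parameters are beyond this snippet:

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(k, pi, n, p):
    """Zero-inflated negative binomial: structural zero with probability pi,
    otherwise NB(n, p) in scipy.stats.nbinom's parameterization."""
    k = np.asarray(k)
    base = nbinom.pmf(k, n, p)
    return np.where(k == 0, pi + (1 - pi) * base, (1 - pi) * base)

print(zinb_pmf([0, 1, 2, 3], pi=0.4, n=2.0, p=0.5))  # sums to 1 over all k
```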
{
"abstract": " The tensile strength of small dusty bodies in the solar system is determined\nby the interaction between the composing grains. In the transition regime\nbetween small and sticky dust ($\\rm \\mu m$) and non cohesive large grains (mm),\nparticles still stick to each other but are easily separated. In laboratory\nexperiments we find that thermal creep gas flow at low ambient pressure\ngenerates an overpressure sufficient to overcome the tensile strength. For the\nfirst time it allows a direct measurement of the tensile strength of\nindividual, very small (sub)-mm aggregates which consist of only tens of grains\nin the (sub)-mm size range. We traced the disintegration of aggregates by\noptical imaging in ground based as well as microgravity experiments and present\nfirst results for basalt, palagonite and vitreous carbon samples with up to a\nfew hundred Pa. These measurements show that low tensile strength can be the\nresult of building loose aggregates with compact (sub)-mm units. This is in\nfavour of a combined cometary formation scenario by aggregation to compact\naggreates and gravitational instability of these units.\n",
"title": "Analog Experiments on Tensile Strength of Dusty and Cometary Matter"
}
| null | null | null | null | true | null |
10773
| null |
Default
| null | null |
null |
{
"abstract": " State-of-the-art knowledge compilers generate deterministic subsets of DNNF,\nwhich have been recently shown to be exponentially less succinct than DNNF. In\nthis paper, we propose a new method to compile DNNFs without enforcing\ndeterminism necessarily. Our approach is based on compiling deterministic DNNFs\nwith the addition of auxiliary variables to the input formula. These variables\nare then existentially quantified from the deterministic structure in linear\ntime, which would lead to a DNNF that is equivalent to the input formula and\nnot necessarily deterministic. On the theoretical side, we show that the new\nmethod could generate exponentially smaller DNNFs than deterministic ones, even\nby adding a single auxiliary variable. Further, we show that various existing\ntechniques that introduce auxiliary variables to the input formulas can be\nemployed in our framework. On the practical side, we empirically demonstrate\nthat our new method can significantly advance DNNF compilation on certain\nbenchmarks.\n",
"title": "On Compiling DNNFs without Determinism"
}
| null | null | null | null | true | null |
10774
| null |
Default
| null | null |
null |
{
"abstract": " Working over the prime field of characteristic two, consequences of the\nKoszul duality between the Steenrod algebra and the big Dyer-Lashof algebra are\nstudied, with an emphasis on the interplay between instability for the Steenrod\nalgebra action and that for the Dyer-Lashof operations. The central algebraic\nframework is the category of length-graded modules over the Steenrod algebra\nequipped with an unstable action of the Dyer-Lashof algebra, with compatibility\nvia the Nishida relations.\nA first ingredient is a functor defined on modules over the Steenrod algebra\nthat arose in the work of Kuhn and McCarty on the homology of infinite loop\nspaces. This functor is given in terms of derived functors of destabilization\nfrom the category of modules over the Steenrod algebra to unstable modules,\nenriched by taking into account the action of Dyer-Lashof operations.\nA second ingredient is the derived functors of the Dyer-Lashof\nindecomposables functor to length-graded modules over the Steenrod algebra.\nThese are related to functors used by Miller in his study of a spectral\nsequence to calculate the homology of an infinite delooping. An important fact\nis that these functors can be calculated as the homology of an explicit Koszul\ncomplex with terms expressed as certain Steinberg functors. The latter are\nquadratic dual to the more familiar Singer functors.\nBy exploiting the explicit complex built from the Singer functors which\ncalculates the derived functors of destabilization, Koszul duality leads to an\nalgebraic infinite delooping spectral sequence. This is conceptually similar to\nMiller's spectral sequence, but there seems to be no direct relationship.\nThe spectral sequence sheds light on the relationship between unstable\nmodules over the Steenrod algebra and all modules.\n",
"title": "Algebraic infinite delooping and derived destabilization"
}
| null | null | null | null | true | null |
10775
| null |
Default
| null | null |
null |
{
"abstract": " To train an inference network jointly with a deep generative topic model,\nmaking it both scalable to big corpora and fast in out-of-sample prediction, we\ndevelop Weibull hybrid autoencoding inference (WHAI) for deep latent Dirichlet\nallocation, which infers posterior samples via a hybrid of stochastic-gradient\nMCMC and autoencoding variational Bayes. The generative network of WHAI has a\nhierarchy of gamma distributions, while the inference network of WHAI is a\nWeibull upward-downward variational autoencoder, which integrates a\ndeterministic-upward deep neural network, and a stochastic-downward deep\ngenerative model based on a hierarchy of Weibull distributions. The Weibull\ndistribution can be used to well approximate a gamma distribution with an\nanalytic Kullback-Leibler divergence, and has a simple reparameterization via\nthe uniform noise, which help efficiently compute the gradients of the evidence\nlower bound with respect to the parameters of the inference network. The\neffectiveness and efficiency of WHAI are illustrated with experiments on big\ncorpora.\n",
"title": "WHAI: Weibull Hybrid Autoencoding Inference for Deep Topic Modeling"
}
| null | null | null | null | true | null |
10776
| null |
Default
| null | null |
null |
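The computational point in the abstract above (id 10776) is the Weibull reparameterization: once the uniform noise is fixed, a draw is a deterministic, differentiable function of the parameters, so ELBO gradients can flow through the inference network. A minimal sketch; names are our own:

```python
import numpy as np

def sample_weibull(k, lam, rng=None):
    """Reparameterized Weibull draw: x = lam * (-log(1 - u))**(1/k), u ~ U(0,1)."""
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=np.shape(lam))
    return lam * (-np.log1p(-u)) ** (1.0 / k)

# With k near 1 the Weibull resembles a gamma distribution; its KL
# divergence to a gamma is available in closed form (see the paper).
print(sample_weibull(k=1.5, lam=2.0, rng=0))
```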
{
"abstract": " We introduce a novel generative formulation of deep probabilistic models\nimplementing \"soft\" constraints on their function dynamics. In particular, we\ndevelop a flexible methodological framework where the modeled functions and\nderivatives of a given order are subject to inequality or equality constraints.\nWe then characterize the posterior distribution over model and constraint\nparameters through stochastic variational inference. As a result, the proposed\napproach allows for accurate and scalable uncertainty quantification on the\npredictions and on all parameters. We demonstrate the application of equality\nconstraints in the challenging problem of parameter inference in ordinary\ndifferential equation models, while we showcase the application of inequality\nconstraints on the problem of monotonic regression of count data. The proposed\napproach is extensively tested in several experimental settings, leading to\nhighly competitive results in challenging modeling applications, while offering\nhigh expressiveness, flexibility and scalability.\n",
"title": "Constraining the Dynamics of Deep Probabilistic Models"
}
| null | null | null | null | true | null |
10777
| null |
Default
| null | null |
null |
{
"abstract": " Information and communications technology can continue to change our world.\nThese advances will partially depend upon designs that synergistically combine\nsoftware with specialized hardware. Today open-source software incubates rapid\nsoftware-only innovation. The government can unleash software-hardware\ninnovation with programs to develop open hardware components, tools, and design\nflows that simplify and reduce the cost of hardware design. Such programs will\nspeed development for startup companies, established industry leaders,\neducation, scientific research, and for government intelligence and defense\nplatforms.\n",
"title": "Democratizing Design for Future Computing Platforms"
}
| null | null | null | null | true | null |
10778
| null |
Default
| null | null |
null |
{
"abstract": " The second-order dependence structure of purely nondeterministic stationary\nprocess is described by the coefficients of the famous Wold representation.\nThese coefficients can be obtained by factorizing the spectral density of the\nprocess. This relation together with some spectral density estimator is used in\norder to obtain consistent estimators of these coefficients. A spectral\ndensity-driven bootstrap for time series is then developed which uses the\nentire sequence of estimated MA coefficients together with appropriately\ngenerated pseudo innovations in order to obtain a bootstrap pseudo time series.\nIt is shown that if the underlying process is linear and if the pseudo\ninnovations are generated by means of an i.i.d. wild bootstrap which mimics, to\nthe necessary extent, the moment structure of the true innovations, this\nbootstrap proposal asymptotically works for a wide range of statistics. The\nrelations of the proposed bootstrap procedure to some other bootstrap\nprocedures, including the autoregressive-sieve bootstrap, are discussed. It is\nshown that the latter is a special case of the spectral density-driven\nbootstrap, if a parametric autoregressive spectral density estimator is used.\nSimulations investigate the performance of the new bootstrap procedure in\nfinite sample situations. Furthermore, a real-life data example is presented.\n",
"title": "EstimatedWold Representation and Spectral Density-Driven Bootstrap for Time Series"
}
| null | null | null | null | true | null |
10779
| null |
Default
| null | null |
null |
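A rough sketch of the pipeline in the record above (id 10779): smooth the periodogram, factorize the estimated spectral density via the cepstrum to recover the MA coefficients of the Wold representation, then filter i.i.d. pseudo innovations. The kernel, truncation and tuning constants here are crude stand-ins, not the paper's choices:

```python
import numpy as np

def spectral_density_bootstrap(x, m_lags=50, h=5, seed=None):
    """Generate one bootstrap pseudo time series from a smoothed periodogram."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    I = np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)           # periodogram
    kernel = np.ones(2 * h + 1) / (2 * h + 1)
    f = np.convolve(np.r_[I[-h:], I, I[:h]], kernel, "valid")  # smoothed density
    c = np.fft.ifft(np.log(2 * np.pi * np.maximum(f, 1e-12))).real  # cepstrum
    sigma = np.exp(c[0] / 2)                      # innovation std (Kolmogorov formula)
    causal = np.zeros(n)
    causal[1:n // 2] = c[1:n // 2]                # causal cepstrum -> minimum phase
    psi = np.fft.ifft(np.exp(np.fft.fft(causal))).real[:m_lags]  # MA coefficients
    eps = sigma * rng.standard_normal(n + m_lags) # i.i.d. pseudo innovations
    return np.convolve(eps, psi, "full")[m_lags:m_lags + n]
```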
{
"abstract": " In the present paper, we study the match test for extended regular\nexpressions. We approach this NP-complete problem by introducing a novel\nvariant of two-way multihead automata, which reveals that the complexity of the\nmatch test is determined by a hidden combinatorial property of extended regular\nexpressions, and it shows that a restriction of the corresponding parameter\nleads to rich classes with a polynomial time match test. For presentational\nreasons, we use the concept of pattern languages in order to specify extended\nregular expressions. While this decision, formally, slightly narrows the scope\nof our results, an extension of our concepts and results to more general\nnotions of extended regular expressions is straightforward.\n",
"title": "A Polynomial Time Match Test for Large Classes of Extended Regular Expressions"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10780
| null |
Validated
| null | null |
null |
{
"abstract": " Spin-spin correlation function response in the low electronic density regime\nand externally applied electric field is evaluated for 2D metallic crystals\nunder Rashba-type coupling, fixed number of particles and two-fold energy band\nstructure. Intrinsic Zeeman-like effect on electron spin polarization, density\nof states, Fermi surface topology and transverse magnetic susceptibility are\nanalyzed in the zero temperature limit. A possible magnetic state for Dirac\nelectrons depending on the zero field band gap magnitude under this conditions\nis found.\n",
"title": "Anomalous Magnetism for Dirac Electrons in Two Dimensional Rashba Systems"
}
| null | null | null | null | true | null |
10781
| null |
Default
| null | null |
null |
{
"abstract": " We consider estimating average treatment effects (ATE) of a binary treatment\nin observational data when data-driven variable selection is needed to select\nrelevant covariates from a moderately large number of available covariates\n$\\mathbf{X}$. To leverage covariates among $\\mathbf{X}$ predictive of the\noutcome for efficiency gain while using regularization to fit a parameteric\npropensity score (PS) model, we consider a dimension reduction of $\\mathbf{X}$\nbased on fitting both working PS and outcome models using adaptive LASSO. A\nnovel PS estimator, the Double-index Propensity Score (DiPS), is proposed, in\nwhich the treatment status is smoothed over the linear predictors for\n$\\mathbf{X}$ from both the initial working models. The ATE is estimated by\nusing the DiPS in a normalized inverse probability weighting (IPW) estimator,\nwhich is found to maintain double-robustness and also local semiparametric\nefficiency with a fixed number of covariates $p$. Under misspecification of\nworking models, the smoothing step leads to gains in efficiency and robustness\nover traditional doubly-robust estimators. These results are extended to the\ncase where $p$ diverges with sample size and working models are sparse.\nSimulations show the benefits of the approach in finite samples. We illustrate\nthe method by estimating the ATE of statins on colorectal cancer risk in an\nelectronic medical record (EMR) study and the effect of smoking on C-reactive\nprotein (CRP) in the Framingham Offspring Study.\n",
"title": "Estimating Average Treatment Effects with a Double-Index Propensity Score"
}
| null | null | null | null | true | null |
10782
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we address the problem of detection, classification and\nquantification of emotions of text in any form. We consider English text\ncollected from social media like Twitter, which can provide information having\nutility in a variety of ways, especially opinion mining. Social media like\nTwitter and Facebook is full of emotions, feelings and opinions of people all\nover the world. However, analyzing and classifying text on the basis of\nemotions is a big challenge and can be considered as an advanced form of\nSentiment Analysis. This paper proposes a method to classify text into six\ndifferent Emotion-Categories: Happiness, Sadness, Fear, Anger, Surprise and\nDisgust. In our model, we use two different approaches and combine them to\neffectively extract these emotions from text. The first approach is based on\nNatural Language Processing, and uses several textual features like emoticons,\ndegree words and negations, Parts Of Speech and other grammatical analysis. The\nsecond approach is based on Machine Learning classification algorithms. We have\nalso successfully devised a method to automate the creation of the training-set\nitself, so as to eliminate the need of manual annotation of large datasets.\nMoreover, we have managed to create a large bag of emotional words, along with\ntheir emotion-intensities. On testing, it is shown that our model provides\nsignificant accuracy in classifying tweets taken from Twitter.\n",
"title": "Emotion Detection and Analysis on Social Media"
}
| null | null | null | null | true | null |
10783
| null |
Default
| null | null |
null |
{
"abstract": " We propose an efficient method to generate white-box adversarial examples to\ntrick a character-level neural classifier. We find that only a few\nmanipulations are needed to greatly decrease the accuracy. Our method relies on\nan atomic flip operation, which swaps one token for another, based on the\ngradients of the one-hot input vectors. Due to efficiency of our method, we can\nperform adversarial training which makes the model more robust to attacks at\ntest time. With the use of a few semantics-preserving constraints, we\ndemonstrate that HotFlip can be adapted to attack a word-level classifier as\nwell.\n",
"title": "HotFlip: White-Box Adversarial Examples for Text Classification"
}
| null | null | null | null | true | null |
10784
| null |
Default
| null | null |
null |
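The atomic flip in the record above (id 10784) is chosen by a first-order Taylor estimate from the gradient of the loss with respect to the one-hot inputs. A toy sketch with a random stand-in gradient; a real attack would take `grad` from the victim model's backward pass:

```python
import numpy as np

def best_flip(grad, onehot):
    """Estimate the loss increase of swapping position i's token a for token b
    as grad[i, b] - grad[i, a]; return the single best (i, b) flip."""
    current = grad[np.arange(len(onehot)), onehot.argmax(axis=1)]
    scores = grad - current[:, None]        # estimated loss change per candidate flip
    scores[onehot.astype(bool)] = -np.inf   # forbid "flipping" a token to itself
    i, b = np.unravel_index(np.argmax(scores), scores.shape)
    return int(i), int(b), scores[i, b]

rng = np.random.default_rng(0)
onehot = np.eye(6)[[0, 3, 2, 5]]            # 4-token sequence, vocab of 6 symbols
grad = rng.normal(size=(4, 6))              # stand-in for d(loss)/d(one-hot input)
print(best_flip(grad, onehot))
```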
{
"abstract": " We present and analyze two pathways to produce commercial optical-fiber patch\ncords with stable long-term transmission in the ultraviolet (UV) at powers up\nto $\\sim$ 200 mW, and typical bulk transmission between 66-75\\%. Commercial\nfiber patch cords in the UV are of great interest across a wide variety of\nscientific applications ranging from biology to metrology, and the lack of\navailability has yet to be suitably addressed. We provide a guide to producing\nsuch solarization-resistant, hydrogen-passivated, polarization-maintaining,\nconnectorized and jacketed optical fibers compatible with demanding scientific\nand industrial applications. Our presentation describes the fabrication and\nhydrogen loading procedure in detail and presents a high-pressure vessel\ndesign, calculations of required \\Ht\\ loading times, and information on patch\ncord handling and the mitigation of bending sensitivities. Transmission at 313\nnm is measured over many months for cumulative energy on the fiber output of >\n10 kJ with no demonstrable degradation due to UV solarization, in contrast to\nstandard uncured fibers. Polarization sensitivity and stability are\ncharacterized yielding polarization extinction ratios between 15 dB and 25 dB\nat 313 nm, where we find patch cords become linearly polarizing. We observe\nthat particle deposition at the fiber facet induced by high-intensity UV\nexposure can (reversibly) deteriorate patch cord performance and describe a\ntechnique for nitrogen purging of fiber collimators which mitigates this\nphenomenon.\n",
"title": "Towards fully commercial, UV-compatible fiber patch cords"
}
| null | null | null | null | true | null |
10785
| null |
Default
| null | null |
null |
{
"abstract": " Model instability and poor prediction of long-term behavior are common\nproblems when modeling dynamical systems using nonlinear \"black-box\"\ntechniques. Direct optimization of the long-term predictions, often called\nsimulation error minimization, leads to optimization problems that are\ngenerally non-convex in the model parameters and suffer from multiple local\nminima. In this work we present methods which address these problems through\nconvex optimization, based on Lagrangian relaxation, dissipation inequalities,\ncontraction theory, and semidefinite programming. We demonstrate the proposed\nmethods with a model order reduction task for electronic circuit design and the\nidentification of a pneumatic actuator from experiment.\n",
"title": "Convex Parameterizations and Fidelity Bounds for Nonlinear Identification and Reduced-Order Modelling"
}
| null | null | null | null | true | null |
10786
| null |
Default
| null | null |
null |
{
"abstract": " A spectroscopic study of Rydberg states of helium ($n$ = 30 and 45) in\nmagnetic, electric and combined magnetic and electric fields with arbitrary\nrelative orientations of the field vectors is presented. The emphasis is on two\nspecial cases where (i) the diamagnetic term is negligible and both\nparamagnetic Zeeman and Stark effects are linear ($n$ = 30, $B \\leq$ 120 mT and\n$F$ = 0 - 78 V/cm ), and (ii) the diamagnetic term is dominant and the Stark\neffect is linear ($n$ = 45, $B$ = 277 mT and $F$ = 0 - 8 V/cm). Both cases\ncorrespond to regimes where the interactions induced by the electric and\nmagnetic fields are much weaker than the Coulomb interaction, but much stronger\nthan the spin-orbit interaction. The experimental spectra are compared to\nspectra calculated by determining the eigenvalues of the Hamiltonian matrix\ndescribing helium Rydberg states in the external fields. The spectra and the\ncalculated energy-level diagrams in external fields reveal avoided crossings\nbetween levels of different $m_l$ values and pronounced $m_l$-mixing effects at\nall angles between the electric and magnetic field vectors other than 0. These\nobservations are discussed in the context of the development of a method to\ngenerate dense samples of cold atoms and molecules in a magnetic trap following\nRydberg-Stark deceleration.\n",
"title": "Rydberg states of helium in electric and magnetic fields of arbitrary relative orientation"
}
| null | null | null | null | true | null |
10787
| null |
Default
| null | null |
null |
{
"abstract": " This work is concerned with tests on structural breaks in the spot volatility\nprocess of a general Itô semimartingale based on discrete observations\ncontaminated with i.i.d. microstructure noise. We construct a consistent test\nbuilding up on infill asymptotic results for certain functionals of spectral\nspot volatility estimates. A weak limit theorem is established under the null\nhypothesis relying on extreme value theory. We prove consistency of the test\nand of an associated estimator for the change point. A simulation study\nillustrates the finite-sample performance of the method and efficiency gains\ncompared to a skip-sampling approach.\n",
"title": "Change-point inference on volatility in noisy Itô semimartingales"
}
| null | null | null | null | true | null |
10788
| null |
Default
| null | null |
null |
{
"abstract": " We present Direct Numerical Simulations of the transport of heat and heavy\nelements across a double-diffusive interface or a double-diffusive staircase,\nin conditions that are close to those one may expect to find near the boundary\nbetween the heavy-element rich core and the hydrogen-helium envelope of giant\nplanets such as Jupiter. We find that the non-dimensional ratio of the buoyancy\nflux associated with heavy element transport to the buoyancy flux associated\nwith heat transport lies roughly between 0.5 and 1, which is much larger than\nprevious estimates derived by analogy with geophysical double-diffusive\nconvection. Using these results in combination with a core-erosion model\nproposed by Guillot et al. (2004), we find that the entire core of Jupiter\nwould be eroded within less than 1Myr assuming that the core-envelope boundary\nis composed of a single interface. We also propose an alternative model that is\nmore appropriate in the presence of a well-established double-diffusive\nstaircase, and find that in this limit a large fraction of the core could be\npreserved. These findings are interesting in the context of Juno's recent\nresults, but call for further modeling efforts to better understand the process\nof core erosion from first principles.\n",
"title": "Double-diffusive erosion of the core of Jupiter"
}
| null | null | null | null | true | null |
10789
| null |
Default
| null | null |
null |
{
"abstract": " Recently, Odrzywolek and Rafelski (arXiv:1612.03556) have found three\ndistinct categories of exoplanets, when they are classified based on density.\nWe first carry out a similar classification of exoplanets according to their\ndensity using the Gaussian Mixture Model, followed by information theoretic\ncriterion (AIC and BIC) to determine the optimum number of components. Such a\none-dimensional classification favors two components using AIC and three using\nBIC, but the statistical significance from both the tests is not significant\nenough to decisively pick the best model between two and three components. We\nthen extend this GMM-based classification to two dimensions by using both the\ndensity and the Earth similarity index (arXiv:1702.03678), which is a measure\nof how similar each planet is compared to the Earth. For this two-dimensional\nclassification, both AIC and BIC provide decisive evidence in favor of three\ncomponents.\n",
"title": "Classifying Exoplanets with Gaussian Mixture Model"
}
| null | null |
[
"Physics"
] | null | true | null |
10790
| null |
Validated
| null | null |
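The model-selection step described in record 10790 can be sketched in a few lines with scikit-learn: fit GMMs with increasing numbers of components and compare AIC/BIC. The data array here is a random stand-in for the real (density, Earth-similarity-index) catalogue.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Sketch of GMM model selection via AIC/BIC, as in record 10790.
# X is placeholder data; the paper uses (density, ESI) per exoplanet.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))   # stand-in for the exoplanet catalogue

for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    # Lower AIC/BIC indicates a better trade-off of fit and complexity.
    print(f"k={k}: AIC={gmm.aic(X):.1f}  BIC={gmm.bic(X):.1f}")
```

BIC penalizes extra components more heavily than AIC, which is why the two criteria can disagree in the one-dimensional case while agreeing decisively in two dimensions.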
null |
{
"abstract": " We study the effect of a uniform external magnetization on p-wave\nsuperconductivity on the (001) surface of the crystalline topological\ninsulator(TCI) Pb$_{1-x}$Sn$_{x}$Te. It was shown by us in an earlier work that\na chiral p-wave finite momentum pairing (FFLO) state can be stabilized in this\nsystem in the presence of weak repulsive interparticle interactions. In\nparticular, the superconducting instability is very sensitive to the Hund's\ninteraction in the multiorbital TCI, and no instabilities are found to be\npossible for the \"wrong\" sign of the Hund's splitting. Here we show that for a\nfinite Hund's splitting of interactions, a significant value of the external\nmagnetization is needed to degrade the surface superconductivity, while in the\nabsence of the Hund's interaction, an arbitrarily small external magnetization\ncan destroy the superconductivity. This implies that multiorbital effects in\nthis system play an important role in stabilizing electronic order on the\nsurface.\n",
"title": "Competing effects of Hund's splitting and symmetry-breaking perturbations on electronic order in Pb$_{1-x}$Sn$_{x}$Te"
}
| null | null |
[
"Physics"
] | null | true | null |
10791
| null |
Validated
| null | null |
null |
{
"abstract": " We calculate $q$-dimension of $k$-th Cartan power of fundamental\nrepresentation $\\Lambda_0$, corresponding to affine root of affine simply laced\nKac-Moody algebras, and show that in the limit $q\\rightarrow 1 $, and with\nnatural renormalization, it is equal to universal partition function of\nChern-Simons theory on three-dimensional sphere.\n",
"title": "Partition function of Chern-Simons theory as renormalized q-dimension"
}
| null | null | null | null | true | null |
10792
| null |
Default
| null | null |
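For orientation on record 10792: in the finite-dimensional case the $q$-dimension is given by the $q$-analogue of the Weyl dimension formula, reproduced below as background; the affine, renormalized version treated in the paper is more involved.

```latex
% q-analogue of the Weyl dimension formula, with
% [n]_q = (q^n - q^{-n}) / (q - q^{-1});
% the affine case requires renormalization as q -> 1.
\dim_q V_\lambda \;=\; \prod_{\alpha > 0}
  \frac{[\,(\lambda+\rho,\alpha)\,]_q}{[\,(\rho,\alpha)\,]_q}
```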
null |
{
"abstract": " Workers participating in a crowdsourcing platform can have a wide range of\nabilities and interests. An important problem in crowdsourcing is the task\nrecommendation problem, in which tasks that best match a particular worker's\npreferences and reliabilities are recommended to that worker. A task\nrecommendation scheme that assigns tasks more likely to be accepted by a worker\nwho is more likely to complete it reliably results in better performance for\nthe task requester. Without prior information about a worker, his preferences\nand reliabilities need to be learned over time. In this paper, we propose a\nmulti-armed bandit (MAB) framework to learn a worker's preferences and his\nreliabilities for different categories of tasks. However, unlike the classical\nMAB problem, the reward from the worker's completion of a task is unobservable.\nWe therefore include the use of gold tasks (i.e., tasks whose solutions are\nknown \\emph{a priori} and which do not produce any rewards) in our task\nrecommendation procedure. Our model could be viewed as a new variant of MAB, in\nwhich the random rewards can only be observed at those time steps where gold\ntasks are used, and the accuracy of estimating the expected reward of\nrecommending a task to a worker depends on the number of gold tasks used. We\nshow that the optimal regret is $O(\\sqrt{n})$, where $n$ is the number of tasks\nrecommended to the worker. We develop three task recommendation strategies to\ndetermine the number of gold tasks for different task categories, and show that\nthey are order optimal. Simulations verify the efficiency of our approaches.\n",
"title": "Task Recommendation in Crowdsourcing Based on Learning Preferences and Reliabilities"
}
| null | null | null | null | true | null |
10793
| null |
Default
| null | null |
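The gold-task mechanism of record 10793 can be illustrated with a toy simulation: rewards are observable only on gold tasks, whose cumulative number grows like sqrt(n) in line with the stated O(sqrt(n)) regret. The reliabilities and the sqrt schedule below are invented for illustration and simplify the paper's actual strategies.

```python
import numpy as np

# Toy sketch of bandit learning with unobservable rewards: only "gold"
# tasks (known answers) reveal whether the worker was reliable, so we
# interleave them on a sqrt(t) schedule to estimate per-category
# reliability. True reliabilities are made up for illustration.
rng = np.random.default_rng(1)
true_reliability = np.array([0.9, 0.6, 0.75])   # hidden per-category accuracy
K = len(true_reliability)
est, n_gold = np.zeros(K), np.zeros(K)

for t in range(1, 5001):
    if n_gold.sum() < np.sqrt(t) * K:            # spend a gold task
        k = int(np.argmin(n_gold))               # probe least-sampled category
        reward = rng.random() < true_reliability[k]  # observable on gold only
        n_gold[k] += 1
        est[k] += (reward - est[k]) / n_gold[k]  # running-mean update
    else:                                        # exploit: recommend best
        k = int(np.argmax(est))                  # reward here is unobserved

print("estimated reliabilities:", est.round(2))
```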
null |
{
"abstract": " By using representation theory of the elliptic quantum group U_{q,p}(sl_N^),\nwe present a systematic method of deriving the weight functions. The resultant\nsl_N type elliptic weight functions are new and give elliptic and dynamical\nanalogues of those obtained in the trigonometric case. We then discuss some\nbasic properties of the elliptic weight functions. We also present an explicit\nformula for formal elliptic hypergeometric integral solution to the face type,\ni.e. dynamical, elliptic q-KZ equation.\n",
"title": "Elliptic Weight Functions and Elliptic q-KZ Equation"
}
| null | null | null | null | true | null |
10794
| null |
Default
| null | null |
null |
{
"abstract": " The notion of formal duality in finite Abelian groups appeared recently in\nrelation to spherical designs, tight sphere packings, and energy minimizing\nconfigurations in Euclidean spaces. For finite cyclic groups it is conjectured\nthat there are no primitive formally dual pairs besides the trivial one and the\nTITO configuration. This conjecture has been verified for cyclic groups of\nprime power order, as well as of square-free order. In this paper, we will\nconfirm the conjecture for other classes of cyclic groups, namely almost all\ncyclic groups of order a product of two prime powers, with finitely many\nexceptions for each pair of primes, or whose order $N$ satisfies $p\\mid\\!\\mid\nN$, where $p$ a prime satisfying the so-called self-conjugacy property with\nrespect to $N$. For the above proofs, various tools were needed: the field\ndescent method, used chiefly for the circulant Hadamard conjecture, the\ntechniques of Coven & Meyerowitz for sets that tile $\\mathbb{Z}$ or\n$\\mathbb{Z}_N$ by translations, dubbed herein as the polynomial method, as well\nas basic number theory of cyclotomic fields, especially the splitting of primes\nin a given cyclotomic extension.\n",
"title": "Formal duality in finite cyclic groups"
}
| null | null |
[
"Mathematics"
] | null | true | null |
10795
| null |
Validated
| null | null |
null |
{
"abstract": " When our eyes are presented with the same image, the brain processes it to\nview it as a single coherent one. The lateral shift in the position of our\neyes, causes the two images to possess certain differences, which our brain\nexploits for the purpose of depth perception and to gauge the size of objects\nat different distances, a process commonly known as stereopsis. However, when\npresented with two different visual stimuli, the visual awareness alternates.\nThis phenomenon of binocular rivalry is a result of competition between the\ncorresponding neuronal populations of the two eyes. The article presents a\ncomparative study of various dynamical models proposed to capture this process.\nIt goes on to study the effect of a certain parameter on the rate of perceptual\nalternations and proceeds to disprove the initial propositions laid down to\ncharacterise this phenomenon. It concludes with a discussion on the possible\nfuture work that can be conducted to obtain a better picture of the neuronal\nfunctioning behind this rivalry.\n",
"title": "Nonlinear Dynamics of Binocular Rivalry: A Comparative Study"
}
| null | null | null | null | true | null |
10796
| null |
Default
| null | null |
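A common template for the dynamical models compared in record 10796 is a pair of mutually inhibiting neural populations with slow adaptation, whose interplay produces perceptual switches. The sketch below integrates one such toy model; all parameter values are illustrative, not taken from the article.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mutual-inhibition-with-adaptation rivalry model: each population u_i
# is driven by input I, suppressed by the other population, and slowly
# fatigued by its own adaptation variable a_i, producing alternations.
def f(x):                                  # sigmoidal firing-rate function
    return 1.0 / (1.0 + np.exp(-10.0 * (x - 0.2)))

def rhs(t, y, I=0.8, beta=1.2, g=1.0, tau_a=25.0):
    u1, u2, a1, a2 = y
    du1 = -u1 + f(I - beta * u2 - g * a1)
    du2 = -u2 + f(I - beta * u1 - g * a2)
    da1 = (-a1 + u1) / tau_a               # slow adaptation drives switches
    da2 = (-a2 + u2) / tau_a
    return [du1, du2, da1, da2]

sol = solve_ivp(rhs, (0, 400), [0.4, 0.1, 0.0, 0.0], max_step=0.5)
dominant = (sol.y[0] > sol.y[1]).astype(int)    # which percept "wins"
print("number of perceptual switches:", int(np.abs(np.diff(dominant)).sum()))
```

Varying a parameter such as the input strength I changes the alternation rate, which is the kind of dependence the article investigates across models.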
null |
{
"abstract": " Almost two decades ago, Wattenberg published a paper with the title\n'Nonstandard Analysis and Constructivism?' in which he speculates on a possible\nconnection between Nonstandard Analysis and constructive mathematics. We study\nWattenberg's work in light of recent research on the aforementioned connection.\nOn one hand, with only slight modification, some of Wattenberg's theorems in\nNonstandard Analysis are seen to yield effective and constructive theorems (not\ninvolving Nonstandard Analysis). On the other hand, we establish the\nincorrectness of some of Wattenberg's (explicit and implicit) claims regarding\nthe constructive status of the axioms Transfer and Standard Part of Nonstandard\nAnalysis.\n",
"title": "Nonstandard Analysis and Constructivism!"
}
| null | null | null | null | true | null |
10797
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, a novel method using 3D Convolutional Neural Network (3D-CNN)\narchitecture has been proposed for speaker verification in the text-independent\nsetting. One of the main challenges is the creation of the speaker models. Most\nof the previously-reported approaches create speaker models based on averaging\nthe extracted features from utterances of the speaker, which is known as the\nd-vector system. In our paper, we propose an adaptive feature learning by\nutilizing the 3D-CNNs for direct speaker model creation in which, for both\ndevelopment and enrollment phases, an identical number of spoken utterances per\nspeaker is fed to the network for representing the speakers' utterances and\ncreation of the speaker model. This leads to simultaneously capturing the\nspeaker-related information and building a more robust system to cope with\nwithin-speaker variation. We demonstrate that the proposed method significantly\noutperforms the traditional d-vector verification system. Moreover, the\nproposed system can also be an alternative to the traditional d-vector system\nwhich is a one-shot speaker modeling system by utilizing 3D-CNNs.\n",
"title": "Text-Independent Speaker Verification Using 3D Convolutional Neural Networks"
}
| null | null |
[
"Computer Science"
] | null | true | null |
10798
| null |
Validated
| null | null |
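The model in record 10798 stacks a fixed number of utterances per speaker along the depth axis of a 3D feature volume. A skeletal PyTorch version of such a network is sketched below; the layer sizes, input shape, and head are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Skeleton of a 3D-CNN speaker model: the input is a
# (batch, 1, utterances, freq, time) volume, so convolutions see
# several utterances of a speaker at once rather than averaging them.
class Speaker3DCNN(nn.Module):
    def __init__(self, n_speakers, emb_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 5, 5), stride=(1, 2, 2)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), stride=(1, 2, 2)), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),            # pool to one vector
        )
        self.embed = nn.Linear(32, emb_dim)     # utterance-set embedding
        self.classify = nn.Linear(emb_dim, n_speakers)  # development head

    def forward(self, x):
        z = self.features(x).flatten(1)
        e = self.embed(z)        # speaker model used at enrollment/test
        return self.classify(e), e

model = Speaker3DCNN(n_speakers=100)
logits, emb = model(torch.randn(2, 1, 20, 40, 80))  # 20 utterances/speaker
```

At verification time only the embedding branch is needed: enrollment and test embeddings are compared with, e.g., cosine similarity.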
null |
{
"abstract": " With nonignorable missing data, likelihood-based inference should be based on\nthe joint distribution of the study variables and their missingness indicators.\nThese joint models cannot be estimated from the data alone, thus requiring the\nanalyst to impose restrictions that make the models uniquely obtainable from\nthe distribution of the observed data. We present an approach for constructing\nclasses of identifiable nonignorable missing data models. The main idea is to\nuse a sequence of carefully set up identifying assumptions, whereby we specify\npotentially different missingness mechanisms for different blocks of variables.\nWe show that the procedure results in models with the desirable property of\nbeing non-parametric saturated.\n",
"title": "Sequential identification of nonignorable missing data mechanisms"
}
| null | null | null | null | true | null |
10799
| null |
Default
| null | null |
null |
{
"abstract": " Low-textured image stitching remains a challenging problem. It is difficult\nto achieve good alignment and it is easy to break image structures due to\ninsufficient and unreliable point correspondences. Moreover, because of the\nviewpoint variations between multiple images, the stitched images suffer from\nprojective distortions. To solve these problems, this paper presents a\nline-guided local warping method with a global similarity constraint for image\nstitching. Line features which serve well for geometric descriptions and scene\nconstraints, are employed to guide image stitching accurately. On one hand, the\nline features are integrated into a local warping model through a designed\nweight function. On the other hand, line features are adopted to impose strong\ngeometric constraints, including line correspondence and line colinearity, to\nimprove the stitching performance through mesh optimization. To mitigate\nprojective distortions, we adopt a global similarity constraint, which is\nintegrated with the projective warps via a designed weight strategy. This\nconstraint causes the final warp to slowly change from a projective to a\nsimilarity transformation across the image. Finally, the images undergo a\ntwo-stage alignment scheme that provides accurate alignment and reduces\nprojective distortion. We evaluate our method on a series of images and compare\nit with several other methods. The experimental results demonstrate that the\nproposed method provides a convincing stitching performance and that it\noutperforms other state-of-the-art methods.\n",
"title": "Image Stitching by Line-guided Local Warping with Global Similarity Constraint"
}
| null | null | null | null | true | null |
10800
| null |
Default
| null | null |
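The projective-to-similarity transition described in record 10800 is typically realized by blending the two warps with a spatially varying weight. One plausible form is sketched below; this is a generic formulation, not necessarily the paper's exact parameterization.

```latex
% Spatially varying blend of the projective warp W_p and similarity
% warp W_s; mu(x) decays from 1 in the overlap region to 0 at the far
% edge, so the final warp changes smoothly across the image.
W(\mathbf{x}) \;=\; \mu(\mathbf{x})\, W_p(\mathbf{x})
  \;+\; \bigl(1-\mu(\mathbf{x})\bigr)\, W_s(\mathbf{x}),
\qquad 0 \le \mu(\mathbf{x}) \le 1
```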