text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool: 1 class) | explanation (null) | id (stringlengths 1-5) | metadata (null) | status (stringclasses: 2 values) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " We define an infinite measure-preserving transformation to have infinite\nsymmetric ergodic index if all finite Cartesian products of the transformation\nand its inverse are ergodic, and show that infinite symmetric ergodic index\ndoes not imply that all products of powers are conservative, so does not imply\npower weak mixing. We provide a sufficient condition for $k$-fold and infinite\nsymmetric ergodic index and use it to answer a question on the relationship\nbetween product conservativity and product ergodicity. We also show that a\nclass of rank-one transformations that have infinite symmetric ergodic index\nare not power weakly mixing, and precisely characterize a class of power weak\ntransformations that generalizes existing examples.\n",
"title": "Infinite symmetric ergodic index and related examples in infinite measure"
} | null | null | [
"Mathematics"
]
| null | true | null | 20101 | null | Validated | null | null |
null | {
"abstract": " An exciting branch of machine learning research focuses on methods for\nlearning, optimizing, and integrating unknown functions that are difficult or\ncostly to evaluate. A popular Bayesian approach to this problem uses a Gaussian\nprocess (GP) to construct a posterior distribution over the function of\ninterest given a set of observed measurements, and selects new points to\nevaluate using the statistics of this posterior. Here we extend these methods\nto exploit derivative information from the unknown function. We describe\nmethods for Bayesian optimization (BO) and Bayesian quadrature (BQ) in settings\nwhere first and second derivatives may be evaluated along with the function\nitself. We perform sampling-based inference in order to incorporate uncertainty\nover hyperparameters, and show that both hyperparameter and function\nuncertainty decrease much more rapidly when using derivative information.\nMoreover, we introduce techniques for overcoming ill-conditioning issues that\nhave plagued earlier methods for gradient-enhanced Gaussian processes and\nkriging. We illustrate the efficacy of these methods using applications to real\nand simulated Bayesian optimization and quadrature problems, and show that\nexploting derivatives can provide substantial gains over standard methods.\n",
"title": "Exploiting gradients and Hessians in Bayesian optimization and Bayesian quadrature"
} | null | null | null | null | true | null | 20102 | null | Default | null | null |
null | {
"abstract": " We consider a system of nonlinear partial differential equations that\ndescribes an age-structured population inhabiting several temporally varying\npatches. We prove existence and uniqueness of solution and analyze its\nlarge-time behavior in cases when the environment is constant and when it\nchanges periodically. A pivotal assumption is that individuals can disperse and\nthat each patch can be reached from every other patch, directly or through\nseveral intermediary patches. We introduce the net reproductive operator and\ncharacteristic equations for time-independent and periodical models and prove\nthat permanency is defined by the net reproductive rate for the whole system.\nIf the net reproductive rate is less or equal to one, extinction on all patches\nis imminent. Otherwise, permanency on all patches is guaranteed. The proof is\nbased on a new approach to analysis of large-time stability.\n",
"title": "Permanency of the age-structured population model on several temporally variable patches"
} | null | null | null | null | true | null | 20103 | null | Default | null | null |
null | {
"abstract": " We report the development of indium oxide (In2O3) transistors via a single\nstep laser-induced photochemical conversion process of a sol-gel metal oxide\nprecursor. Through careful optimization of the laser annealing conditions we\ndemonstrated successful conversion of the precursor to In2O3 and its subsequent\nimplementation in n-channel transistors with electron mobility up to 13 cm2/Vs.\nImportantly, the process does not require thermal annealing making it\ncompatible with temperature sensitive materials such as plastic. On the other\nhand, the spatial conversion/densification of the sol-gel layer eliminates\nadditional process steps associated with semiconductor patterning and hence\nsignificantly reduces fabrication complexity and cost. Our work demonstrates\nunambiguously that laser-induced photochemical conversion of sol-gel metal\noxide precursors can be rapid and compatible with large-area electronics\nmanufacturing.\n",
"title": "Rapid laser-induced photochemical conversion of sol-gel precursors to In2O3 layers and their application in thin-film transistors"
} | null | null | null | null | true | null | 20104 | null | Default | null | null |
null | {
"abstract": " In this paper, change-point problems for long memory stochastic volatility\nmodels are considered. A general testing problem which includes various\nalternative hypotheses is discussed. Under the hypothesis of stationarity the\nlimiting behavior of CUSUM- and Wilcoxon-type test statistics is derived. In\nthis context, a limit theorem for the two-parameter empirical process of long\nmemory stochastic volatility time series is proved. In particular, it is shown\nthat the asymptotic distribution of CUSUM test statistics may not be affected\nby long memory, unlike Wilcoxon test statistics which are typically influenced\nby long range dependence. To avoid the estimation of nuisance parameters in\napplications, the usage of self-normalized test statistics is proposed. The\ntheoretical results are accompanied by simulation studies which characterize\nthe finite sample behavior of the considered testing procedures when testing\nfor changes in mean, in variance, and in the tail index.\n",
"title": "Testing for Change in Stochastic Volatility with Long Range Dependence"
} | null | null | null | null | true | null | 20105 | null | Default | null | null |
null | {
"abstract": " Gravity assist manoeuvres are one of the most succesful techniques in\nastrodynamics. In these trajectories the spacecraft comes very close to the\nsurface of the Earth, or other Solar system planets or moons, and, as a\nconsequence, it experiences the effect of atmospheric friction by the outer\nlayers of the Earth's atmosphere or ionosphere.\nIn this paper we analyze a standard atmospheric model to estimate the density\nprofile during the two Galileo flybys, the NEAR and the Juno flyby. We show\nthat, even allowing for a margin of uncertainty in the spacecraft cross-section\nand the drag coefficient, the observed -8 mm/sec anomalous velocity decrease\nduring the second Galileo flyby of December, 8th, 1992 cannot be attributed\nonly to atmospheric friction. On the other hand, for perigees on the border\nbetween the termosphere and the exosphere the friction only accounts for a\nfraction of a millimeter per second in the final asymptotic velocity.\n",
"title": "Kinematics effects of atmospheric friction in spacecraft flybys"
} | null | null | null | null | true | null | 20106 | null | Default | null | null |
null | {
"abstract": " In the last years, model checking with interval temporal logics is emerging\nas a viable alternative to model checking with standard point-based temporal\nlogics, such as LTL, CTL, CTL*, and the like. The behavior of the system is\nmodeled by means of (finite) Kripke structures, as usual. However, while\ntemporal logics which are interpreted \"point-wise\" describe how the system\nevolves state-by-state, and predicate properties of system states, those which\nare interpreted \"interval-wise\" express properties of computation stretches,\nspanning a sequence of states. A proposition letter is assumed to hold over a\ncomputation stretch (interval) if and only if it holds over each component\nstate (homogeneity assumption). A natural question arises: is there any\nadvantage in replacing points by intervals as the primary temporal entities, or\nis it just a matter of taste?\nIn this paper, we study the expressiveness of Halpern and Shoham's interval\ntemporal logic (HS) in model checking, in comparison with those of LTL, CTL,\nand CTL*. To this end, we consider three semantic variants of HS: the\nstate-based one, introduced by Montanari et al., that allows time to branch\nboth in the past and in the future, the computation-tree-based one, that allows\ntime to branch in the future only, and the trace-based variant, that disallows\ntime to branch. These variants are compared among themselves and to the\naforementioned standard logics, getting a complete picture. In particular, we\nshow that HS with trace-based semantics is equivalent to LTL (but at least\nexponentially more succinct), HS with computation-tree-based semantics is\nequivalent to finitary CTL*, and HS with state-based semantics is incomparable\nwith all of them (LTL, CTL, and CTL*).\n",
"title": "Interval vs. Point Temporal Logic Model Checking: an Expressiveness Comparison"
} | null | null | null | null | true | null | 20107 | null | Default | null | null |
null | {
"abstract": " Next investigations in our program of transition from the He atom to the\ncomplex atoms description have been presented. The method of interacting\nconfigurations in the complex number representation is under consideration. The\nspectroscopic characteristics of the Mg and Ca atoms in the problem of the\nelectron-impact ionization of these atoms are investigated. The energies and\nthe widths of the lowest autoionizing states of Mg and Ca atoms are calculated.\nFew results in the photoionization problem on the autoionizing states above the\nn=2 threshold of helium-like Be ion are presented.\n",
"title": "Calculations for electron-impact ionization of magnesium and calcium atoms in the method of interacting configurations in the complex number representation"
} | null | null | null | null | true | null | 20108 | null | Default | null | null |
null | {
"abstract": " An important and emerging component of planetary exploration is sample\nretrieval and return to Earth. Obtaining and analyzing rock samples can provide\nunprecedented insight into the geology, geo-history and prospects for finding\npast life and water. Current methods of exploration rely on mission scientists\nto identify objects of interests and this presents major operational\nchallenges. Finding objects of interests will require systematic and efficient\nmethods to quickly and correctly evaluate the importance of hundreds if not\nthousands of samples so that the most interesting are saved for further\nanalysis by the mission scientists. In this paper, we propose an automated\ninformation theoretic approach to identify shapes of interests using a library\nof predefined interesting shapes. These predefined shapes maybe human input or\nsamples that are then extrapolated by the shape matching system using the\nSuperformula to judge the importance of newly obtained objects. Shape samples\nare matched to a library of shapes using the eigenfaces approach enabling\ncategorization and prioritization of the sample. The approach shows robustness\nto simulated sensor noise of up to 20%. The effect of shape parameters and\nrotational angle on shape matching accuracy has been analyzed. The approach\nshows significant promise and efforts are underway in testing the algorithm\nwith real rock samples.\n",
"title": "An Information Theoretic Approach to Sample Acquisition and Perception in Planetary Robotics"
} | null | null | null | null | true | null | 20109 | null | Default | null | null |
null | {
"abstract": " We consider the motion of incompressible viscous fluids bounded above by a\nfree surface and below by a solid surface in the $N$-dimensional Euclidean\nspace for $N\\geq 2$ when the gravity is not taken into account. The aim of this\npaper is to show the global solvability of the Naiver-Stokes equations with a\nfree surface, describing the above-mentioned motion, in the maximal\n$L_p\\text{-}L_q$ regularity class. Our approach is based on the maximal\n$L_p\\text{-}L_q$ regularity with exponential stability for the linearized\nequations, and solutions to the original nonlinear problem are also\nexponentially stable.\n",
"title": "Global solvability of the Navier-Stokes equations with a free surface in the maximal $L_p\\text{-}L_q$ regularity class"
} | null | null | null | null | true | null | 20110 | null | Default | null | null |
null | {
"abstract": " We study the fundamental group of the complement of the singular locus of\nLauricella's hypergeometric function $F_C$ of $n$ variables. The singular locus\nconsists of $n$ hyperplanes and a hypersurface of degree $2^{n-1}$ in the\ncomplex $n$-space. We derive some relations that holds for general $n\\geq 3$.\nWe give an explicit presentation of the fundamental groupin the\nthree-dimensional case. We also consider a presentation of the fundamental\ngroup of $2^3$-covering of this space.\nIn the version 2, we omit some of the calculations. For all the calculations,\nrefer to the version 1 (arXiv:1710.09594v1) of this article.\n",
"title": "The fundamental group of the complement of the singular locus of Lauricella's $F_C$"
} | null | null | null | null | true | null | 20111 | null | Default | null | null |
null | {
"abstract": " Spectral Clustering (SC) is a widely used data clustering method which first\nlearns a low-dimensional embedding $U$ of data by computing the eigenvectors of\nthe normalized Laplacian matrix, and then performs k-means on $U^\\top$ to get\nthe final clustering result. The Sparse Spectral Clustering (SSC) method\nextends SC with a sparse regularization on $UU^\\top$ by using the block\ndiagonal structure prior of $UU^\\top$ in the ideal case. However, encouraging\n$UU^\\top$ to be sparse leads to a heavily nonconvex problem which is\nchallenging to solve and the work (Lu, Yan, and Lin 2016) proposes a convex\nrelaxation in the pursuit of this aim indirectly. However, the convex\nrelaxation generally leads to a loose approximation and the quality of the\nsolution is not clear. This work instead considers to solve the nonconvex\nformulation of SSC which directly encourages $UU^\\top$ to be sparse. We propose\nan efficient Alternating Direction Method of Multipliers (ADMM) to solve the\nnonconvex SSC and provide the convergence guarantee. In particular, we prove\nthat the sequences generated by ADMM always exist a limit point and any limit\npoint is a stationary point. Our analysis does not impose any assumptions on\nthe iterates and thus is practical. Our proposed ADMM for nonconvex problems\nallows the stepsize to be increasing but upper bounded, and this makes it very\nefficient in practice. Experimental analysis on several real data sets verifies\nthe effectiveness of our method.\n",
"title": "Nonconvex Sparse Spectral Clustering by Alternating Direction Method of Multipliers and Its Convergence Analysis"
} | null | null | null | null | true | null | 20112 | null | Default | null | null |
null | {
"abstract": " A hyperbolic space has been shown to be more capable of modeling complex\nnetworks than a Euclidean space. This paper proposes an explicit update rule\nalong geodesics in a hyperbolic space. The convergence of our algorithm is\ntheoretically guaranteed, and the convergence rate is better than the\nconventional Euclidean gradient descent algorithm. Moreover, our algorithm\navoids the \"bias\" problem of existing methods using the Riemannian gradient.\nExperimental results demonstrate the good performance of our algorithm in the\n\\Poincare embeddings of knowledge base data.\n",
"title": "Stable Geodesic Update on Hyperbolic Space and its Application to Poincare Embeddings"
} | null | null | [
"Statistics"
]
| null | true | null | 20113 | null | Validated | null | null |
null | {
"abstract": " We present a parameterized approach to produce personalized variable length\nsummaries of soccer matches. Our approach is based on temporally segmenting the\nsoccer video into 'plays', associating a user-specifiable 'utility' for each\ntype of play and using 'bin-packing' to select a subset of the plays that add\nup to the desired length while maximizing the overall utility (volume in\nbin-packing terms). Our approach systematically allows a user to override the\ndefault weights assigned to each type of play with individual preferences and\nthus see a highly personalized variable length summarization of soccer matches.\nWe demonstrate our approach based on the output of an end-to-end pipeline that\nwe are building to produce such summaries. Though aspects of the overall\nend-to-end pipeline are human assisted at present, the results clearly show\nthat the proposed approach is capable of producing semantically meaningful and\ncompelling summaries. Besides the obvious use of producing summaries of\nsuperior league matches for news broadcasts, we anticipate our work to promote\ngreater awareness of the local matches and junior leagues by producing\nconsumable summaries of them.\n",
"title": "A Parameterized Approach to Personalized Variable Length Summarization of Soccer Matches"
} | null | null | [
"Computer Science"
]
| null | true | null | 20114 | null | Validated | null | null |
null | {
"abstract": " Recent studies have revealed the vulnerability of deep neural networks: A\nsmall adversarial perturbation that is imperceptible to human can easily make a\nwell-trained deep neural network misclassify. This makes it unsafe to apply\nneural networks in security-critical applications. In this paper, we propose a\nnew defense algorithm called Random Self-Ensemble (RSE) by combining two\nimportant concepts: {\\bf randomness} and {\\bf ensemble}. To protect a targeted\nmodel, RSE adds random noise layers to the neural network to prevent the strong\ngradient-based attacks, and ensembles the prediction over random noises to\nstabilize the performance. We show that our algorithm is equivalent to ensemble\nan infinite number of noisy models $f_\\epsilon$ without any additional memory\noverhead, and the proposed training procedure based on noisy stochastic\ngradient descent can ensure the ensemble model has a good predictive\ncapability. Our algorithm significantly outperforms previous defense techniques\non real data sets. For instance, on CIFAR-10 with VGG network (which has 92\\%\naccuracy without any attack), under the strong C\\&W attack within a certain\ndistortion tolerance, the accuracy of unprotected model drops to less than\n10\\%, the best previous defense technique has $48\\%$ accuracy, while our method\nstill has $86\\%$ prediction accuracy under the same level of attack. Finally,\nour method is simple and easy to integrate into any neural network.\n",
"title": "Towards Robust Neural Networks via Random Self-ensemble"
} | null | null | null | null | true | null | 20115 | null | Default | null | null |
null | {
"abstract": " The detection of intermediate mass black holes (IMBHs) in Galactic globular\nclusters (GCs) has so far been controversial. In order to characterize the\neffectiveness of integrated-light spectroscopy through integral field units, we\nanalyze realistic mock data generated from state-of-the-art Monte Carlo\nsimulations of GCs with a central IMBH, considering different setups and\nconditions varying IMBH mass, cluster distance, and accuracy in determination\nof the center. The mock observations are modeled with isotropic Jeans models to\nassess the success rate in identifying the IMBH presence, which we find to be\nprimarily dependent on IMBH mass. However, even for a IMBH of considerable mass\n(3% of the total GC mass), the analysis does not yield conclusive results in 1\nout of 5 cases, because of shot noise due to bright stars close to the IMBH\nline-of-sight. This stochastic variability in the modeling outcome grows with\ndecreasing BH mass, with approximately 3 failures out of 4 for IMBHs with 0.1%\nof total GC mass. Finally, we find that our analysis is generally unable to\nexclude at 68% confidence an IMBH with mass of $10^3~M_\\odot$ in snapshots\nwithout a central BH. Interestingly, our results are not sensitive to GC\ndistance within 5-20 kpc, nor to mis-identification of the GC center by less\nthan 2'' (<20% of the core radius). These findings highlight the value of\nground-based integral field spectroscopy for large GC surveys, where systematic\nfailures can be accounted for, but stress the importance of discrete kinematic\nmeasurements that are less affected by stochasticity induced by bright stars.\n",
"title": "Prospects for detection of intermediate-mass black holes in globular clusters using integrated-light spectroscopy"
} | null | null | [
"Physics"
]
| null | true | null | 20116 | null | Validated | null | null |
null | {
"abstract": " We examine the possibility of a dark matter (DM) contribution to the recently\nobserved gamma-ray spectrum seen in the M31 galaxy. In particular, we apply\nlimits on Weakly Interacting Massive Particle DM annihilation cross-sections\nderived from the Coma galaxy cluster and the Reticulum II dwarf galaxy to\ndetermine the maximal flux contribution by DM annihilation to both the M31\ngamma-ray spectrum and that of the Milky-Way galactic centre. We limit the\nenergy range between 1 and 12 GeV in M31 and galactic centre spectra due to the\nlimited range of former's data, as well as to encompass the high-energy\ngamma-ray excess observed in the latter target. In so doing, we will make use\nof Fermi-LAT data for all mentioned targets, as well as diffuse radio data for\nthe Coma cluster. The multi-target strategy using both Coma and Reticulum II to\nderive cross-section limits, as well as multi-frequency data, ensures that our\nresults are robust against the various uncertainties inherent in modelling of\nindirect DM emissions.\nOur results indicate that, when a Navarro-Frenk-White (or shallower) radial\ndensity profile is assumed, severe constraints can be imposed upon the fraction\nof the M31 and galactic centre spectra that can be accounted for by DM, with\nthe best limits arising from cross-section constraints from Coma radio data and\nReticulum II gamma-ray limits. These particular limits force all the studied\nannihilation channels to contribute 1% or less to the total integrated\ngamma-ray flux within both M31 and galactic centre targets. In contrast,\nconsiderably more, 10-100%, of the flux can be attributed to DM when a\ncontracted Navarro-Frenk-White profile is assumed. This demonstrates how\nsensitive DM contributions to gamma-ray emissions are to the possibility of\ncored profiles in galaxies.\n",
"title": "A Multi-frequency analysis of possible Dark Matter Contributions to M31 Gamma-Ray Emissions"
} | null | null | null | null | true | null | 20117 | null | Default | null | null |
null | {
"abstract": " Modeling inverse dynamics is crucial for accurate feedforward robot control.\nThe model computes the necessary joint torques, to perform a desired movement.\nThe highly non-linear inverse function of the dynamical system can be\napproximated using regression techniques. We propose as regression method a\ntensor decomposition model that exploits the inherent three-way interaction of\npositions x velocities x accelerations. Most work in tensor factorization has\naddressed the decomposition of dense tensors. In this paper, we build upon the\ndecomposition of sparse tensors, with only small amounts of nonzero entries.\nThe decomposition of sparse tensors has successfully been used in relational\nlearning, e.g., the modeling of large knowledge graphs. Recently, the approach\nhas been extended to multi-class classification with discrete input variables.\nRepresenting the data in high dimensional sparse tensors enables the\napproximation of complex highly non-linear functions. In this paper we show how\nthe decomposition of sparse tensors can be applied to regression problems.\nFurthermore, we extend the method to continuous inputs, by learning a mapping\nfrom the continuous inputs to the latent representations of the tensor\ndecomposition, using basis functions. We evaluate our proposed model on a\ndataset with trajectories from a seven degrees of freedom SARCOS robot arm. Our\nexperimental results show superior performance of the proposed functional\ntensor model, compared to challenging state-of-the art methods.\n",
"title": "Tensor Decompositions for Modeling Inverse Dynamics"
} | null | null | null | null | true | null | 20118 | null | Default | null | null |
null | {
"abstract": " New ternary Mg-Ni-Mn intermetallics have been successfully synthesized by\nHigh Energy Ball Milling (HEBM) and have been studied as possible materials for\nefficient hydrogen storage applications. The microstructures of the as-cast and\nmilled alloys were characterized by means of X-ray Powder Diffraction (XRD) and\nScanning Electron Microscopy (SEM) both prior and after the hydrogenation\nprocess, while the hydrogen storage characteristics (P-c-T) and the kinetics\nwere measured by using a commercial and automatically controlled Sievert-type\napparatus. The hydrogenation and dehydrogenation measurements were performed at\nfour different temperatures 150-200-250-300oC and the results showed that the\nkinetics for both the hydrogenation and dehydrogenation process are very fast\nfor operation temperatures 250 and 300oC, but for temperatures below 200oC the\nhydrogenation process becomes very slow and the dehydrogenation process cannot\nbe achieved.\n",
"title": "Synthesis and Hydrogen Sorption Characteristics of Mechanically Alloyed Mg(NixMn1-x)2 Intermetallics"
} | null | null | null | null | true | null | 20119 | null | Default | null | null |
null | {
"abstract": " Recent advances in the field of network embedding have shown the\nlow-dimensional network representation is playing a critical role in network\nanalysis. However, most of the existing principles of network embedding do not\nincorporate auxiliary information such as content and labels of nodes flexibly.\nIn this paper, we take a matrix factorization perspective of network embedding,\nand incorporate structure, content and label information of the network\nsimultaneously. For structure, we validate that the matrix we construct\npreserves high-order proximities of the network. Label information can be\nfurther integrated into the matrix via the process of random walk sampling to\nenhance the quality of embedding in an unsupervised manner, i.e., without\nleveraging downstream classifiers. In addition, we generalize the Skip-Gram\nNegative Sampling model to integrate the content of the network in a matrix\nfactorization framework. As a consequence, network embedding can be learned in\na unified framework integrating network structure and node content as well as\nlabel information simultaneously. We demonstrate the efficacy of the proposed\nmodel with the tasks of semi-supervised node classification and link prediction\non a variety of real-world benchmark network datasets.\n",
"title": "Enhancing Network Embedding with Auxiliary Information: An Explicit Matrix Factorization Perspective"
} | null | null | null | null | true | null | 20120 | null | Default | null | null |
null | {
"abstract": " This paper describes some applications of an incremental implementation of\nthe principal component analysis (PCA). The algorithm updates the\ntransformation coefficients matrix on-line for each new sample, without the\nneed to keep all the samples in memory. The algorithm is formally equivalent to\nthe usual batch version, in the sense that given a sample set the\ntransformation coefficients at the end of the process are the same. The\nimplications of applying the PCA in real time are discussed with the help of\ndata analysis examples. In particular we focus on the problem of the continuity\nof the PCs during an on-line analysis.\n",
"title": "Incremental Principal Component Analysis Exact implementation and continuity corrections"
} | null | null | null | null | true | null | 20121 | null | Default | null | null |
null | {
"abstract": " The question in this paper is whether R&D efforts affect education\nperformance in small classes. Merging two datasets collected from the PISA\nstudies and the World Development Indicators and using Learning Bayesian\nNetworks, we prove the existence of a statistical causal relationship between\ninvestment in R&D of a country and its education performance (PISA scores). We\nalso prove that the effect of R\\&D on Education is long term as a country has\nto invest at least 10 years before beginning to improve the level of young\npupils.\n",
"title": "More investment in Research and Development for better Education in the future?"
} | null | null | null | null | true | null | 20122 | null | Default | null | null |
null | {
"abstract": " The global dynamics of event cascades are often governed by the local\ndynamics of peer influence. However, detecting social influence from\nobservational data is challenging, due to confounds like homophily and\npractical issues like missing data. In this work, we propose a novel\ndiscriminative method to detect influence from observational data. The core of\nthe approach is to train a ranking algorithm to predict the source of the next\nevent in a cascade, and compare its out-of-sample accuracy against a\ncompetitive baseline which lacks access to features corresponding to social\ninfluence. Using synthetically generated data, we provide empirical evidence\nthat this method correctly identifies influence in the presence of confounds,\nand is robust to both missing data and misspecification --- unlike popular\nalternatives. We also apply the method to two real-world datasets: (1) cascades\nof co-sponsorship of legislation in the U.S. House of Representatives, on a\nsocial network of shared campaign donors; (2) rumors about the Higgs boson\ndiscovery, on a follower network of $10^5$ Twitter accounts. Our model\nidentifies the role of peer influence in these scenarios, and uses it to make\nmore accurate predictions about the future trajectory of cascades.\n",
"title": "Discriminative Modeling of Social Influence for Prediction and Explanation in Event Cascades"
} | null | null | null | null | true | null | 20123 | null | Default | null | null |
null | {
"abstract": " We revisit a classical scenario in communication theory: a source is\ngenerating a waveform which we sample at regular intervals; we wish to\ntransform the signal in such a way as to minimize distortion in its\nreconstruction, despite noise. The transformation must be online (also called\ncausal), in order to enable real-time signaling. The noise model we consider is\nadversarial $\\ell_1$-bounded; this is the \"atomic norm\" convex relaxation of\nthe standard adversary model in discrete-alphabet communications, namely\nsparsity (low Hamming weight). We require that our encoding not increase the\npower of the original signal.\nIn the \"block coding\" setting such encoding is possible due to the existence\nof large almost-Euclidean sections in $\\ell_1$ spaces (established in the work\nof Dvoretzky, Milman, Kašin, and Figiel, Lindenstrauss and Milman).\nOur main result is that an analogous result is achievable even online.\nEquivalently, we show a \"lower triangular\" version of $\\ell_1$ Dvoretzky\ntheorems. In terms of communication, the result has the following form: If the\nsignal is a stream of reals $x_1,\\ldots$, one per unit time, which we encode\ncausally into $\\rho$ (a constant) reals per unit time (forming altogether an\noutput stream $\\mathcal{E}(x)$), and if the adversarial noise added to this\nencoded stream up to time $s$ is a vector $\\vec{y}$, then at time $s$ the\ndecoder's reconstruction of the input prefix $x_{[s]}$ is accurate in a\ntime-weighted $\\ell_2$ norm, to within $s^{-1/2+\\delta}$ (any $\\delta>0$) times\nthe adversary's noise as measured in a time-weighted $\\ell_1$ norm. The\ntime-weighted decoding norm forces increasingly accurate reconstruction of the\ndistant past, while the time-weighted noise norm permits only vanishing effect\nfrom noise in the distant past.\nEncoding is linear, and decoding is performed by an LP analogous to those\nused in compressed sensing.\n",
"title": "Online codes for analog signals"
} | null | null | null | null | true | null | 20124 | null | Default | null | null |
null | {
"abstract": " Motivated by contemporary and rich applications of anomalous diffusion\nprocesses we propose a new statistical test for fractional Brownian motion,\nwhich is one of the most popular models for anomalous diffusion systems. The\ntest is based on detrending moving average statistic and its probability\ndistribution. Using the theory of Gaussian quadratic forms we determined it as\na generalized chi-squared distribution. The proposed test could be generalized\nfor statistical testing of any centered non-degenerate Gaussian process.\nFinally, we examine the test via Monte Carlo simulations for two exemplary\nscenarios of subdiffusive and superdiffusive dynamics.\n",
"title": "Statistical test for fractional Brownian motion based on detrending moving average algorithm"
} | null | null | null | null | true | null | 20125 | null | Default | null | null |
null | {
"abstract": " The main limitation that constrains the fast and comprehensive application of\nWireless Local Area Network (WLAN) based indoor localization systems with\nReceived Signal Strength (RSS) positioning algorithms is the building of the\nfingerprinting radio map, which is time-consuming especially when the indoor\nenvironment is large and/or with high frequent changes. Different approaches\nhave been proposed to reduce workload, including fingerprinting deployment and\nupdate efforts, but the performance degrades greatly when the workload is\nreduced below a certain level. In this paper, we propose an indoor localization\nscenario that applies metric learning and manifold alignment to realize direct\nmapping localization (DML) using a low resolution radio map with single sample\nof RSS that reduces the fingerprinting workload by up to 87\\%. Compared to\nprevious work. The proposed two localization approaches, DML and $k$ nearest\nneighbors based on reconstructed radio map (reKNN), were shown to achieve less\nthan 4.3\\ m and 3.7\\ m mean localization error respectively in a typical office\nenvironment with an area of approximately 170\\ m$^2$, while the unsupervised\nlocalization with perturbation algorithm was shown to achieve 4.7\\ m mean\nlocalization error with 8 times more workload than the proposed methods. As for\nthe room level localization application, both DML and reKNN can meet the\nrequirement with at most 9\\ m of localization error which is enough to tell\napart different rooms with over 99\\% accuracy.\n",
"title": "Fast Radio Map Construction and Position Estimation via Direct Mapping for WLAN Indoor Localization System"
} | null | null | null | null | true | null | 20126 | null | Default | null | null |
null | {
"abstract": " We consider the Cauchy problem for the repulsive Vlasov-Poisson system in the\nthree dimensional space, where the initial datum is the sum of a diffuse\ndensity, assumed to be bounded and integrable, and a point charge. Under some\ndecay assumptions for the diffuse density close to the point charge, under\nbounds on the total energy, and assuming that the initial total diffuse charge\nis strictly less than one, we prove existence of global Lagrangian solutions.\nOur result extends the Eulerian theory of [16], proving that solutions are\ntransported by the flow trajectories. The proof is based on the ODE theory\ndeveloped in [8] in the setting of vector fields with anisotropic regularity,\nwhere some components of the gradient of the vector field is a singular\nintegral of a measure.\n",
"title": "Lagrangian solutions to the Vlasov-Poisson system with a point charge"
} | null | null | null | null | true | null | 20127 | null | Default | null | null |
null | {
"abstract": " The growing pressure on cloud application scalability has accentuated storage\nperformance as a critical bottle- neck. Although cache replacement algorithms\nhave been extensively studied, cache prefetching - reducing latency by\nretrieving items before they are actually requested remains an underexplored\narea. Existing approaches to history-based prefetching, in particular, provide\ntoo few benefits for real systems for the resources they cost. We propose\nMITHRIL, a prefetching layer that efficiently exploits historical patterns in\ncache request associations. MITHRIL is inspired by sporadic association rule\nmining and only relies on the timestamps of requests. Through evaluation of 135\nblock-storage traces, we show that MITHRIL is effective, giving an average of a\n55% hit ratio increase over LRU and PROBABILITY GRAPH, a 36% hit ratio gain\nover AMP at reasonable cost. We further show that MITHRIL can supplement any\ncache replacement algorithm and be readily integrated into existing systems.\nFurthermore, we demonstrate the improvement comes from MITHRIL being able to\ncapture mid-frequency blocks.\n",
"title": "MITHRIL: Mining Sporadic Associations for Cache Prefetching"
} | null | null | null | null | true | null | 20128 | null | Default | null | null |
null | {
"abstract": " 2-level polytopes naturally appear in several areas of pure and applied\nmathematics, including combinatorial optimization, polyhedral combinatorics,\ncommunication complexity, and statistics. In this paper, we present a study of\nsome 2-level polytopes arising in combinatorial settings. Our first\ncontribution is proving that v(P)*f(P) is upper bounded by d*2^(d+1), for a\nlarge collection of families of such polytopes P. Here v(P) (resp. f(P)) is the\nnumber of vertices (resp. facets) of P, and d is its dimension. Whether this\nholds for all 2-level polytopes was asked in [Bohn et al., ESA 2015], and\nexperimental results from [Fiorini et al., ISCO 2016] showed it true up to\ndimension 7. The key to most of our proofs is a deeper understanding of the\nrelations among those polytopes and their underlying combinatorial structures.\nThis leads to a number of results that we believe to be of independent\ninterest: a trade-off formula for the number of cliques and stable sets in a\ngraph; a description of stable matching polytopes as affine projections of\ncertain order polytopes; and a linear-size description of the base polytope of\nmatroids that are 2-level in terms of cuts of an associated tree.\n",
"title": "On 2-level polytopes arising in combinatorial settings"
} | null | null | null | null | true | null | 20129 | null | Default | null | null |
null | {
"abstract": " Learning to learn has emerged as an important direction for achieving\nartificial intelligence. Two of the primary barriers to its adoption are an\ninability to scale to larger problems and a limited ability to generalize to\nnew tasks. We introduce a learned gradient descent optimizer that generalizes\nwell to new tasks, and which has significantly reduced memory and computation\noverhead. We achieve this by introducing a novel hierarchical RNN architecture,\nwith minimal per-parameter overhead, augmented with additional architectural\nfeatures that mirror the known structure of optimization tasks. We also develop\na meta-training ensemble of small, diverse optimization tasks capturing common\nproperties of loss landscapes. The optimizer learns to outperform RMSProp/ADAM\non problems in this corpus. More importantly, it performs comparably or better\nwhen applied to small convolutional neural networks, despite seeing no neural\nnetworks in its meta-training set. Finally, it generalizes to train Inception\nV3 and ResNet V2 architectures on the ImageNet dataset for thousands of steps,\noptimization problems that are of a vastly different scale than those it was\ntrained on. We release an open source implementation of the meta-training\nalgorithm.\n",
"title": "Learned Optimizers that Scale and Generalize"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 20130 | null | Validated | null | null |
null | {
"abstract": " This paper aims at one-shot learning of deep neural nets, where a highly\nparallel setting is considered to address the algorithm calibration problem -\nselecting the best neural architecture and learning hyper-parameter values\ndepending on the dataset at hand. The notoriously expensive calibration problem\nis optimally reduced by detecting and early stopping non-optimal runs. The\ntheoretical contribution regards the optimality guarantees within the multiple\nhypothesis testing framework. Experimentations on the Cifar10, PTB and Wiki\nbenchmarks demonstrate the relevance of the approach with a principled and\nconsistent improvement on the state of the art with no extra hyper-parameter.\n",
"title": "Toward Optimal Run Racing: Application to Deep Learning Calibration"
} | null | null | null | null | true | null | 20131 | null | Default | null | null |
null | {
"abstract": " The inverse problem of determining the unknown potential $f>0$ in the partial\ndifferential equation $$\\frac{\\Delta}{2} u - fu =0 \\text{ on } \\mathcal O\n~~\\text{s.t. } u = g \\text { on } \\partial \\mathcal O,$$ where $\\mathcal O$ is\na bounded $C^\\infty$-domain in $\\mathbb R^d$ and $g>0$ is a given function\nprescribing boundary values, is considered. The data consist of the solution\n$u$ corrupted by additive Gaussian noise. A nonparametric Bayesian prior for\nthe function $f$ is devised and a Bernstein - von Mises theorem is proved which\nentails that the posterior distribution given the observations is approximated\nin a suitable function space by an infinite-dimensional Gaussian measure that\nhas a `minimal' covariance structure in an information-theoretic sense. As a\nconsequence the posterior distribution performs valid and optimal frequentist\nstatistical inference on $f$ in the small noise limit.\n",
"title": "Bernstein - von Mises theorems for statistical inverse problems I: Schrödinger equation"
} | null | null | null | null | true | null | 20132 | null | Default | null | null |
null | {
"abstract": " Many engineering processes exist in the industry, text books and\ninternational standards. However, in practice rarely any of the processes are\nfollowed consistently and literally. It is observed across industries the\nprocesses are altered based on the requirements of the projects. Two features\ncommonly lacking from many engineering processes are, 1) the formal capacity to\nrapidly develop prototypes in the rudimentary stage of the project, 2)\ntransitioning of requirements into architectural designs, when and how to\nevaluate designs and how to use the throw away prototypes throughout the system\nlifecycle. Prototypes are useful for eliciting requirements, generating\ncustomer feedback and identifying, examining or mitigating risks in a project\nwhere the product concept is at a cutting edge or not fully perceived. Apart\nfrom the work that the product is intended to do, systemic properties like\navailability, performance and modifiability matter as much as functionality.\nArchitects must even these concerns with the method they select to promote\nthese systemic properties and at the same time equip the stakeholders with the\ndesired functionality. Architectural design and prototyping is one of the key\nways to build the right product embedded with the desired systemic properties.\nOnce the product is built it can be almost impossible to retrofit the system\nwith the desired attributes. This paper customizes the architecture centric\ndevelopment method with rapid prototyping to achieve the above-mentioned goals\nand reducing the number of iterations across the stages of ACDM.\n",
"title": "Tailoring Architecture Centric Design Method with Rapid Prototyping"
} | null | null | [
"Computer Science"
]
| null | true | null | 20133 | null | Validated | null | null |
null | {
"abstract": " We introduce a framework using Generative Adversarial Networks (GANs) for\nlikelihood--free inference (LFI) and Approximate Bayesian Computation (ABC)\nwhere we replace the black-box simulator model with an approximator network and\ngenerate a rich set of summary features in a data driven fashion. On benchmark\ndata sets, our approach improves on others with respect to scalability, ability\nto handle high dimensional data and complex probability distributions.\n",
"title": "Easy High-Dimensional Likelihood-Free Inference"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 20134 | null | Validated | null | null |
null | {
"abstract": " We propose and demonstrate a novel laser cooling mechanism applicable to\nparticles with narrow-linewidth optical transitions. By sweeping the frequency\nof counter-propagating laser beams in a sawtooth manner, we cause adiabatic\ntransfer back and forth between the ground state and a long-lived optically\nexcited state. The time-ordering of these adiabatic transfers is determined by\nDoppler shifts, which ensures that the associated photon recoils are in the\nopposite direction to the particle's motion. This ultimately leads to a robust\ncooling mechanism capable of exerting large forces via a weak transition and\nwith reduced reliance on spontaneous emission. We present a simple intuitive\nmodel for the resulting frictional force, and directly demonstrate its efficacy\nfor increasing the total phase-space density of an atomic ensemble. We rely on\nboth simulation and experimental studies using the 7.5~kHz linewidth $^1$S$_0$\nto $^3$P$_1$ transition in $^{88}$Sr. The reduced reliance on spontaneous\nemission may allow this adiabatic sweep method to be a useful tool for cooling\nparticles that lack closed cycling transitions, such as molecules.\n",
"title": "Narrow-line Laser Cooling by Adiabatic Transfer"
} | null | null | null | null | true | null | 20135 | null | Default | null | null |
null | {
"abstract": " We report on 176 close (<2\") stellar companions detected with high-resolution\nimaging near 170 hosts of Kepler Objects of Interest. These Kepler targets were\nprioritized for imaging follow-up based on the presence of small planets, so\nmost of the KOIs in these systems (176 out of 204) have nominal radii <6 R_E .\nEach KOI in our sample was observed in at least 2 filters with adaptive optics,\nspeckle imaging, lucky imaging, or HST. Multi-filter photometry provides color\ninformation on the companions, allowing us to constrain their stellar\nproperties and assess the probability that the companions are physically bound.\nWe find that 60 -- 80% of companions within 1\" are bound, and the bound\nfraction is >90% for companions within 0.5\"; the bound fraction decreases with\nincreasing angular separation. This picture is consistent with simulations of\nthe binary and background stellar populations in the Kepler field. We also\nreassess the planet radii in these systems, converting the observed\ndifferential magnitudes to a contamination in the Kepler bandpass and\ncalculating the planet radius correction factor, $X_R = R_p (true) / R_p\n(single)$. Under the assumption that planets in bound binaries are equally\nlikely to orbit the primary or secondary, we find a mean radius correction\nfactor for planets in stellar multiples of $X_R = 1.65$. If stellar\nmultiplicity in the Kepler field is similar to the solar neighborhood, then\nnearly half of all Kepler planets may have radii underestimated by an average\nof 65%, unless vetted using high resolution imaging or spectroscopy.\n",
"title": "Assessing the Effect of Stellar Companions from High-Resolution Imaging of Kepler Objects of Interest"
} | null | null | null | null | true | null | 20136 | null | Default | null | null |
null | {
"abstract": " We exploit a recently derived inversion scheme for arbitrary deep neural\nnetworks to develop a new semi-supervised learning framework that applies to a\nwide range of systems and problems. The approach outperforms current\nstate-of-the-art methods on MNIST reaching $99.14\\%$ of test set accuracy while\nusing $5$ labeled examples per class. Experiments with one-dimensional signals\nhighlight the generality of the method. Importantly, our approach is simple,\nefficient, and requires no change in the deep network architecture.\n",
"title": "Semi-Supervised Learning via New Deep Network Inversion"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 20137 | null | Validated | null | null |
null | {
"abstract": " The effectiveness of a statistical machine translation system (SMT) is very\ndependent upon the amount of parallel corpus used in the training phase. For\nlow-resource language pairs there are not enough parallel corpora to build an\naccurate SMT. In this paper, a novel approach is presented to extract bilingual\nPersian-Italian parallel sentences from a non-parallel (comparable) corpus. In\nthis study, English is used as the pivot language to compute the matching\nscores between source and target sentences and candidate selection phase.\nAdditionally, a new monolingual sentence similarity metric, Normalized Google\nDistance (NGD) is proposed to improve the matching process. Moreover, some\nextensions of the baseline system are applied to improve the quality of\nextracted sentences measured with BLEU. Experimental results show that using\nthe new pivot based extraction can increase the quality of bilingual corpus\nsignificantly and consequently improves the performance of the Persian-Italian\nSMT system.\n",
"title": "Using English as Pivot to Extract Persian-Italian Parallel Sentences from Non-Parallel Corpora"
} | null | null | null | null | true | null | 20138 | null | Default | null | null |
null | {
"abstract": " In this short note, we prove that if $F$ is a weak upper semicontinuous\nadmissible Finsler structure on a domain in $\\mathbb{R}^n$, $n\\geq 2$, then the\nintrinsic distance and differential structures coincide.\n",
"title": "Intrinsic geometry and analysis of Finsler structures"
} | null | null | null | null | true | null | 20139 | null | Default | null | null |
null | {
"abstract": " Monte Carlo Tree Search (MCTS) has been extended to many imperfect\ninformation games. However, due to the added complexity that uncertainty\nintroduces, these adaptations have not reached the same level of practical\nsuccess as their perfect information counterparts. In this paper we consider\nthe development of agents that perform well against humans in imperfect\ninformation games with partially observable actions. We introduce the\nSemi-Determinized-MCTS (SDMCTS), a variant of the Information Set MCTS\nalgorithm (ISMCTS). More specifically, SDMCTS generates a predictive model of\nthe unobservable portion of the opponent's actions from historical behavioral\ndata. Next, SDMCTS performs simulations on an instance of the game where the\nunobservable portion of the opponent's actions are determined. Thereby, it\nfacilitates the use of the predictive model in order to decrease uncertainty.\nWe present an implementation of the SDMCTS applied to the Cheat Game, a\nwell-known card game, with partially observable (and often deceptive) actions.\nResults from experiments with 120 subjects playing a head-to-head Cheat Game\nagainst our SDMCTS agents suggest that SDMCTS performs well against humans, and\nits performance improves as the predictive model's accuracy increases.\n",
"title": "Combining Prediction of Human Decisions with ISMCTS in Imperfect Information Games"
} | null | null | [
"Computer Science"
]
| null | true | null | 20140 | null | Validated | null | null |
null | {
"abstract": " We propose a search of galactic axions with mass about 0.2 microeV using a\nlarge volume resonant cavity, about 50 m^3, cooled down to 4 K and immersed in\na moderate axial magnetic field of about 0.6 T generated inside the\nsuperconducting magnet of the KLOE experiment located at the National\nLaboratory of Frascati of INFN. This experiment, called KLASH (KLoe magnet for\nAxion SearcH) in the following, has a potential sensitivity on the\naxion-to-photon coupling, g_agg, of about 6x10^-17 GeV-1, reaching the region\npredicted by KSVZ and DFSZ models of QCD axions.\n",
"title": "The KLASH Proposal"
} | null | null | null | null | true | null | 20141 | null | Default | null | null |
null | {
"abstract": " Datasets with significant proportions of noisy (incorrect) class labels\npresent challenges for training accurate Deep Neural Networks (DNNs). We\npropose a new perspective for understanding DNN generalization for such\ndatasets, by investigating the dimensionality of the deep representation\nsubspace of training samples. We show that from a dimensionality perspective,\nDNNs exhibit quite distinctive learning styles when trained with clean labels\nversus when trained with a proportion of noisy labels. Based on this finding,\nwe develop a new dimensionality-driven learning strategy, which monitors the\ndimensionality of subspaces during training and adapts the loss function\naccordingly. We empirically demonstrate that our approach is highly tolerant to\nsignificant proportions of noisy labels, and can effectively learn\nlow-dimensional local subspaces that capture the data distribution.\n",
"title": "Dimensionality-Driven Learning with Noisy Labels"
} | null | null | [
"Statistics"
]
| null | true | null | 20142 | null | Validated | null | null |
null | {
"abstract": " Adversarial examples have been shown to exist for a variety of deep learning\narchitectures. Deep reinforcement learning has shown promising results on\ntraining agent policies directly on raw inputs such as image pixels. In this\npaper we present a novel study into adversarial attacks on deep reinforcement\nlearning polices. We compare the effectiveness of the attacks using adversarial\nexamples vs. random noise. We present a novel method for reducing the number of\ntimes adversarial examples need to be injected for a successful attack, based\non the value function. We further explore how re-training on random noise and\nFGSM perturbations affects the resilience against adversarial examples.\n",
"title": "Delving into adversarial attacks on deep policies"
} | null | null | null | null | true | null | 20143 | null | Default | null | null |
null | {
"abstract": " We investigate two arithmetic functions naturally occurring in the study of\nthe Euler and Carmichael quotients. The functions are related to the frequency\nof vanishing of the Euler and Carmichael quotients. We obtain several results\nconcerning the relations between these functions as well as their typical and\nextreme values.\n",
"title": "On two functions arising in the study of the Euler and Carmichael quotients"
} | null | null | null | null | true | null | 20144 | null | Default | null | null |
null | {
"abstract": " Lurking variables represent hidden information, and preclude a full\nunderstanding of phenomena of interest. Detection is usually based on\nserendipity -- visual detection of unexplained, systematic variation. However,\nthese approaches are doomed to fail if the lurking variables do not vary. In\nthis article, we address these challenges by introducing formal hypothesis\ntests for the presence of lurking variables, based on Dimensional Analysis.\nThese procedures utilize a modified form of the Buckingham Pi theorem to\nprovide structure for a suitable null hypothesis. We present analytic tools for\nreasoning about lurking variables in physical phenomena, construct procedures\nto handle cases of increasing complexity, and present examples of their\napplication to engineering problems. The results of this work enable\nalgorithm-driven lurking variable detection, complementing a traditionally\ninspection-based approach.\n",
"title": "Lurking Variable Detection via Dimensional Analysis"
} | null | null | null | null | true | null | 20145 | null | Default | null | null |
null | {
"abstract": " Fourier ptychographic microscopy (FPM) is a recently proposed quantitative\nphase imaging technique with high resolution and wide field-of-view (FOV). In\ncurrent FPM imaging platforms, systematic error sources come from the\naberrations, LED intensity fluctuation, parameter imperfections and noise,\nwhich will severely corrupt the reconstruction results with artifacts. Although\nthese problems have been researched and some special methods have been proposed\nrespectively, there is no method to solve all of them. However, the systematic\nerror is a mixture of various sources in the real situation. It is difficult to\ndistinguish a kind of error source from another due to the similar artifacts.\nTo this end, we report a system calibration procedure, termed SC-FPM, based on\nthe simulated annealing (SA) algorithm, LED intensity correction and adaptive\nstep-size strategy, which involves the evaluation of an error matric at each\niteration step, followed by the re-estimation of accurate parameters. The great\nperformance has been achieved both in simulation and experiments. The reported\nsystem calibration scheme improves the robustness of FPM and relaxes the\nexperiment conditions, which makes the FPM more pragmatic.\n",
"title": "System calibration method for Fourier ptychographic microscopy"
} | null | null | null | null | true | null | 20146 | null | Default | null | null |
null | {
"abstract": " The primary goal of this paper is to recast the semantics of modal logic, and\ndynamic epistemic logic (DEL) in particular, in category-theoretic terms. We\nfirst review the category of relations and categories of Kripke frames, with\nparticular emphasis on the duality between relations and adjoint homomorphisms.\nUsing these categories, we then reformulate the semantics of DEL in a more\ncategorical and algebraic form. Several virtues of the new formulation will be\ndemonstrated: The DEL idea of updating a model into another is captured\nnaturally by the categorical perspective -- which emphasizes a family of\nobjects and structural relationships among them, as opposed to a single object\nand structure on it. Also, the categorical semantics of DEL can be merged\nstraightforwardly with a standard categorical semantics for first-order logic,\nproviding a semantics for first-order DEL.\n",
"title": "Categories for Dynamic Epistemic Logic"
} | null | null | null | null | true | null | 20147 | null | Default | null | null |
null | {
"abstract": " Robotic assistants in a home environment are expected to perform various\ncomplex tasks for their users. One particularly challenging task is pouring\ndrinks into cups, which for successful completion, requires the detection and\ntracking of the liquid level during a pour to determine when to stop. In this\npaper, we present a novel approach to autonomous pouring that tracks the liquid\nlevel using an RGB-D camera and adapts the rate of pouring based on the liquid\nlevel feedback. We thoroughly evaluate our system on various types of liquids\nand under different conditions, conducting over 250 pours with a PR2 robot. The\nresults demonstrate that our approach is able to pour liquids to a target\nheight with an accuracy of a few millimeters.\n",
"title": "Accurate Pouring with an Autonomous Robot Using an RGB-D Camera"
} | null | null | null | null | true | null | 20148 | null | Default | null | null |
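
Record 20148 describes adapting the pouring rate from liquid-level feedback; the actual controller is not given in the abstract. The snippet below is only a generic proportional-control sketch under assumed names, units, and gains, illustrating the kind of feedback loop such a system could use.

```python
def pouring_rate(level_mm, target_mm, k_p=0.8, max_rate=30.0):
    """Generic proportional controller: pour faster when far below the target
    level, slow down near it, and stop once the target is reached.
    Gains and units (mm, ml/s) are illustrative assumptions."""
    error = target_mm - level_mm            # remaining height to fill (mm)
    if error <= 0.0:
        return 0.0                          # target reached: stop pouring
    return min(max_rate, k_p * error)       # ml/s, capped at the spout limit

# Simulated fill of a cup with a 50 cm^2 cross-section towards a 60 mm target.
level, dt, area_cm2 = 0.0, 0.1, 50.0
for _ in range(400):
    rate = pouring_rate(level, 60.0)
    level += (rate * dt) / area_cm2 * 10.0  # ml == cm^3; height gain in mm
    if rate == 0.0:
        break
print(round(level, 1))                      # settles close to 60 mm
```
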
null | {
"abstract": " Let $n$ and $k$ be natural numbers such that $2^k < n$. We study the\nrestriction to $\\mathfrak{S}_{n-2^k}$ of odd-degree irreducible characters of\nthe symmetric group $\\mathfrak{S}_n$. This analysis completes the study begun\nin [Ayyer A., Prasad A., Spallone S., Sem. Lothar. Combin. 75 (2015), Art.\nB75g, 13 pages] and recently developed in [Isaacs I.M., Navarro G., Olsson\nJ.B., Tiep P.H., J. Algebra 478 (2017), 271-282].\n",
"title": "Restriction of Odd Degree Characters of $\\mathfrak{S}_n$"
} | null | null | [
"Mathematics"
]
| null | true | null | 20149 | null | Validated | null | null |
null | {
"abstract": " We present a neural architecture that takes as input a 2D or 3D shape and\noutputs a program that generates the shape. The instructions in our program are\nbased on constructive solid geometry principles, i.e., a set of boolean\noperations on shape primitives defined recursively. Bottom-up techniques for\nthis shape parsing task rely on primitive detection and are inherently slow\nsince the search space over possible primitive combinations is large. In\ncontrast, our model uses a recurrent neural network that parses the input shape\nin a top-down manner, which is significantly faster and yields a compact and\neasy-to-interpret sequence of modeling instructions. Our model is also more\neffective as a shape detector compared to existing state-of-the-art detection\ntechniques. We finally demonstrate that our network can be trained on novel\ndatasets without ground-truth program annotations through policy gradient\ntechniques.\n",
"title": "CSGNet: Neural Shape Parser for Constructive Solid Geometry"
} | null | null | null | null | true | null | 20150 | null | Default | null | null |
null | {
"abstract": " We classify the band degeneracies in 3D crystals with screw symmetry $n_m$\nand broken $\\mathcal P*\\mathcal T$ symmetry, where $\\mathcal P$ stands for\nspatial inversion and $\\mathcal T$ for time reversal. The generic degeneracies\nalong symmetry lines are Weyl nodes: Chiral contact points between pairs of\nbands. They can be single nodes with a chiral charge of magnitude $|\\chi|=1$ or\ncomposite nodes with $|\\chi|=2$ or $3$, and the possible $\\chi$ values only\ndepend on the order $n$ of the axis, not on the pitch $m/n$ of the screw.\nDouble Weyl nodes require $n=4$ or 6, and triple nodes require $n=6$. In all\ncases the bands split linearly along the axis, and for composite nodes the\nsplitting is quadratic on the orthogonal plane. This is true for triple as well\nas double nodes, due to the presence in the effective two-band Hamiltonian of a\nnonchiral quadratic term that masks the chiral cubic dispersion. If $\\mathcal\nT$ symmetry is present and $\\mathcal P$ is broken there may exist on some\nsymmetry lines Weyl nodes pinned to $\\mathcal T$-invariant momenta, which in\nsome cases are unavoidable. In the absence of other symmetries their\nclassification depends on $n$, $m$, and the type of $\\mathcal T$ symmetry. With\nspinless $\\mathcal T$ such $\\mathcal T$-invariant Weyl nodes are always double\nnodes, while with spinful $\\mathcal T$ they can be single or triple nodes.\n$\\mathcal T$-invariant triples nodes can occur not only on 6-fold axes but also\non 3-fold ones, and their in-plane band splitting is cubic, not quadratic as in\nthe case of generic triple nodes. These rules are illustrated by means of\nfirst-principles calculations for hcp cobalt, a $\\mathcal T$-broken, $\\mathcal\nP$-invariant crystal with $6_3$ symmetry, and for trigonal tellurium and\nhexagonal NbSi$_2$, which are $\\mathcal T$-invariant, $\\mathcal P$-broken\ncrystals with 3-fold and 6-fold screw symmetry respectively.\n",
"title": "Composite Weyl nodes stabilized by screw symmetry with and without time reversal"
} | null | null | null | null | true | null | 20151 | null | Default | null | null |
null | {
"abstract": " Deep Neural Networks (DNNs) are very popular these days, and are the subject\nof a very intense investigation. A DNN is made by layers of internal units (or\nneurons), each of which computes an affine combination of the output of the\nunits in the previous layer, applies a nonlinear operator, and outputs the\ncorresponding value (also known as activation). A commonly-used nonlinear\noperator is the so-called rectified linear unit (ReLU), whose output is just\nthe maximum between its input value and zero. In this (and other similar cases\nlike max pooling, where the max operation involves more than one input value),\none can model the DNN as a 0-1 Mixed Integer Linear Program (0-1 MILP) where\nthe continuous variables correspond to the output values of each unit, and a\nbinary variable is associated with each ReLU to model its yes/no nature. In\nthis paper we discuss the peculiarity of this kind of 0-1 MILP models, and\ndescribe an effective bound-tightening technique intended to ease its solution.\nWe also present possible applications of the 0-1 MILP model arising in feature\nvisualization and in the construction of adversarial examples. Preliminary\ncomputational results are reported, aimed at investigating (on small DNNs) the\ncomputational performance of a state-of-the-art MILP solver when applied to a\nknown test case, namely, hand-written digit recognition.\n",
"title": "Deep Neural Networks as 0-1 Mixed Integer Linear Programs: A Feasibility Study"
} | null | null | null | null | true | null | 20152 | null | Default | null | null |
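
Record 20152 models each ReLU with a binary variable. For reference, one standard big-M style encoding of a single ReLU unit as mixed-integer linear constraints is sketched below; the exact formulation and the bound-tightening technique used in the paper may differ.

```latex
% One ReLU unit y = max(0, w^T x + b), assuming known finite bounds
% L \le w^\top x + b \le U with L < 0 < U, and a binary indicator z:
\begin{align*}
  y &\ge w^\top x + b, & y &\ge 0,\\
  y &\le w^\top x + b - L\,(1 - z), & y &\le U\,z, \qquad z \in \{0,1\}.
\end{align*}
% z = 1 forces y = w^T x + b (active unit), z = 0 forces y = 0 (inactive unit);
% tighter bounds L and U give a stronger linear relaxation, which is what a
% bound-tightening step aims for.
```
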
null | {
"abstract": " We propose a new approach based on a local Hilbert transform to design\nnon-Hermitian potentials generating arbitrary vector fields of directionality,\np(r), with desired shapes and topologies. We derive a local Hilbert transform\nto systematically build such potentials, by modifying background potentials\n(being either regular or random, extended or localized). In particular, we\nexplore particular directionality fields, for instance in the form of a focus\nto create sinks for probe fields (which could help to increase absorption at\nthe sink), or to generate vortices in the probe fields. Physically, the\nproposed directionality fields provide a flexible new mechanism for dynamically\nshaping and precise control over probe fields leading to novel effects in wave\ndynamics.\n",
"title": "Directionality Fields generated by a Local Hilbert Transform"
} | null | null | null | null | true | null | 20153 | null | Default | null | null |
null | {
"abstract": " This paper is the second chapter of three of the author's undergraduate\nthesis. In this paper, we consider the random matrix ensemble given by $(d_b,\nd_w)$-regular graphs on $M$ black vertices and $N$ white vertices, where $d_b\n\\in [N^{\\gamma}, N^{2/3 - \\gamma}]$ for any $\\gamma > 0$. We simultaneously\nprove that the bulk eigenvalue correlation statistics for both normalized\nadjacency matrices and their corresponding covariance matrices are stable for\nshort times. Combined with an ergodicity analysis of the Dyson Brownian motion\nin another paper, this proves universality of bulk eigenvalue correlation\nstatistics, matching normalized adjacency matrices with the GOE and the\ncorresponding covariance matrices with the Gaussian Wishart Ensemble.\n",
"title": "Bulk Eigenvalue Correlation Statistics of Random Biregular Bipartite Graphs"
} | null | null | null | null | true | null | 20154 | null | Default | null | null |
null | {
"abstract": " The well-known Axler-Zheng theorem characterizes compactness of finite sums\nof finite products of Toeplitz operators on the unit disk in terms of the\nBerezin transform of these operators. Subsequently this theorem was generalized\nto other domains and appeared in different forms, including domains in\n$\\mathbb{C}^n$ on which the $\\overline{\\partial}$-Neumann operator $N$ is\ncompact. In this work we remove the assumption on $N$, and we study weighted\nBergman spaces on smooth bounded pseudoconvex domains. We prove a local version\nof the Axler-Zheng theorem characterizing compactness of Toeplitz operators in\nthe algebra generated by symbols continuous up to the boundary in terms of the\nbehavior of the Berezin transform at strongly pseudoconvex points. We employ a\nForelli-Rudin type inflation method to handle the weights.\n",
"title": "A local weighted Axler-Zheng theorem in $\\mathbb{C}^n$"
} | null | null | null | null | true | null | 20155 | null | Default | null | null |
null | {
"abstract": " This paper presents a simple approach to increase the normal zone propagation\nvelocity in (RE)BaCuO thin films grown on a flexible metallic substrate, also\ncalled superconducting tapes. The key idea behind this approach is to use a\nspecific geometry of the silver thermal stabilizer that surrounds the\nsuperconducting tape. More specifically, a very thin layer of silver stabilizer\nis deposited on top of the superconductor layer, typically less than 100 nm,\nwhile the remaining stabilizer (still silver) is deposited on the substrate\nside. Normal zone propagation velocities up to 170 cm/s have been measured\nexperimentally, corresponding to a stabilizer thickness of 20 nm on top of the\nsuperconductor layer. This is one order of magnitude faster than the speed\nmeasured on actual commercial tapes. Our results clearly demonstrate that a\nvery thin stabilizer on top of the superconductor layer leads to high normal\nzone propagation velocities. The experimental values are in good agreement with\npredictions realized by finite element simulations. Furthermore, the\npropagation of the normal zone during the quench was recorded in situ and in\nreal time using a high-speed camera. Due to high Joule losses generated on both\nedges of the tape sample, a \"U-shaped\" profile could be observed at the\nboundaries between the superconducting and the normal zones, which matches very\nclosely the profile predicted by the simulations.\n",
"title": "Spatial modulation of Joule losses to increase the normal zone propagation velocity in (RE)BaCuO tapes"
} | null | null | [
"Physics"
]
| null | true | null | 20156 | null | Validated | null | null |
null | {
"abstract": " We introduce a new method of statistical analysis to characterise the\ndynamics of turbulent fluids in two dimensions. We establish that, in\nequilibrium, the vortex distributions can be uniquely connected to the\ntemperature of the vortex gas, and apply this vortex thermometry to\ncharacterise simulations of decaying superfluid turbulence. We confirm the\nhypothesis of vortex evaporative heating leading to Onsager vortices proposed\nin Phys. Rev. Lett. 113, 165302 (2014), and find previously unidentified vortex\npower-law distributions that emerge from the dynamics.\n",
"title": "Vortex Thermometry for Turbulent Two-Dimensional Fluids"
} | null | null | null | null | true | null | 20157 | null | Default | null | null |
null | {
"abstract": " We derive a closed formula for the determinant of the Hankel matrix whose\nentries are given by sums of negative powers of the zeros of the regular\nCoulomb wave function. This new identity applied together with results of\nGrommer and Chebotarev allows us to prove a Hurwitz-type theorem about the\nzeros of the regular Coulomb wave function. As a particular case, we obtain a\nnew proof of the classical Hurwitz's theorem from the theory of Bessel\nfunctions that is based on algebraic arguments. In addition, several Hankel\ndeterminants with entries given by the Rayleigh function and Bernoulli numbers\nare also evaluated.\n",
"title": "The Hurwitz-type theorem for the regular Coulomb wave function via Hankel determinants"
} | null | null | null | null | true | null | 20158 | null | Default | null | null |
null | {
"abstract": " In this technical report, we consider an approach that combines the PPO\nobjective and K-FAC natural gradient optimization, for which we call PPOKFAC.\nWe perform a range of empirical analysis on various aspects of the algorithm,\nsuch as sample complexity, training speed, and sensitivity to batch size and\ntraining epochs. We observe that PPOKFAC is able to outperform PPO in terms of\nsample complexity and speed in a range of MuJoCo environments, while being\nscalable in terms of batch size. In spite of this, it seems that adding more\nepochs is not necessarily helpful for sample efficiency, and PPOKFAC seems to\nbe worse than its A2C counterpart, ACKTR.\n",
"title": "An Empirical Analysis of Proximal Policy Optimization with Kronecker-factored Natural Gradients"
} | null | null | null | null | true | null | 20159 | null | Default | null | null |
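
The PPO objective referred to in record 20159 is the standard clipped surrogate; a minimal numpy sketch of that surrogate (independent of the K-FAC optimizer and of the report's own implementation) is given below.

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, clip_eps=0.2):
    """Clipped PPO surrogate: mean over samples of
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r is the probability
    ratio between the new and old policies."""
    ratio = np.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return np.mean(np.minimum(unclipped, clipped))

# Toy check: a sample whose ratio is 3.0 with positive advantage gets clipped.
logp_old = np.log(np.array([0.2, 0.5]))
logp_new = np.log(np.array([0.6, 0.5]))
adv = np.array([1.0, -0.5])
print(ppo_clip_objective(logp_new, logp_old, adv))   # 0.35
```
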
null | {
"abstract": " The maximal density of a measurable subset of R^n avoiding Euclidean\ndistance1 is unknown except in the trivial case of dimension 1. In this paper,\nwe consider thecase of a distance associated to a polytope that tiles space,\nwhere it is likely that the setsavoiding distance 1 are of maximal density\n2^-n, as conjectured by Bachoc and Robins. We prove that this is true for n =\n2, and for the Voronoï regions of the lattices An, n >= 2.\n",
"title": "On the density of sets avoiding parallelohedron distance 1"
} | null | null | null | null | true | null | 20160 | null | Default | null | null |
null | {
"abstract": " Mass segmentation provides effective morphological features which are\nimportant for mass diagnosis. In this work, we propose a novel end-to-end\nnetwork for mammographic mass segmentation which employs a fully convolutional\nnetwork (FCN) to model a potential function, followed by a CRF to perform\nstructured learning. Because the mass distribution varies greatly with pixel\nposition, the FCN is combined with a position priori. Further, we employ\nadversarial training to eliminate over-fitting due to the small sizes of\nmammogram datasets. Multi-scale FCN is employed to improve the segmentation\nperformance. Experimental results on two public datasets, INbreast and\nDDSM-BCRP, demonstrate that our end-to-end network achieves better performance\nthan state-of-the-art approaches.\n\\footnote{this https URL}\n",
"title": "Adversarial Deep Structured Nets for Mass Segmentation from Mammograms"
} | null | null | null | null | true | null | 20161 | null | Default | null | null |
null | {
"abstract": " A transitive model $M$ of ZFC is called a ground if the universe $V$ is a set\nforcing extension of $M$. We show that the grounds of $V$ are downward\nset-directed. Consequently, we establish some fundamental theorems on the\nforcing method and the set-theoretic geology. For instance, (1) the mantle, the\nintersection of all grounds, must be a model of ZFC. (2) $V$ has only set many\ngrounds if and only if the mantle is a ground. We also show that if the\nuniverse has some very large cardinal, then the mantle must be a ground.\n",
"title": "The downward directed grounds hypothesis and very large cardinals"
} | null | null | [
"Mathematics"
]
| null | true | null | 20162 | null | Validated | null | null |
null | {
"abstract": " Wild-land fire fighting is a hazardous job. A key task for firefighters is to\nobserve the \"fire front\" to chart the progress of the fire and areas that will\nlikely spread next. Lack of information of the fire front causes many\naccidents. Using Unmanned Aerial Vehicles (UAVs) to cover wildfire is promising\nbecause it can replace humans in hazardous fire tracking and significantly\nreduce operation costs. In this paper we propose a distributed control\nframework designed for a team of UAVs that can closely monitor a wildfire in\nopen space, and precisely track its development. The UAV team, designed for\nflexible deployment, can effectively avoid in-flight collisions and cooperate\nwell with neighbors. They can maintain a certain height level to the ground for\nsafe flight above fire. Experimental results are conducted to demonstrate the\ncapabilities of the UAV team in covering a spreading wildfire.\n",
"title": "A Distributed Control Framework of Multiple Unmanned Aerial Vehicles for Dynamic Wildfire Tracking"
} | null | null | null | null | true | null | 20163 | null | Default | null | null |
null | {
"abstract": " High signal-to-noise and high-resolution light scattering spectra are\nmeasured for nitrous oxide (N$_2$O) gas at an incident wavelength of 403.00 nm,\nat 90$^\\circ$ scattering, at room temperature and at gas pressures in the range\n$0.5-4$ bar. The resulting Rayleigh-Brillouin light scattering spectra are\ncompared to a number of models describing in an approximate manner the\ncollisional dynamics and energy transfer in this gaseous medium of this\npolyatomic molecular species. The Tenti-S6 model, based on macroscopic gas\ntransport coefficients, reproduces the scattering profiles in the entire\npressure range at less than 2\\% deviation at a similar level as does the\nalternative kinetic Grad's 6-moment model, which is based on the internal\ncollisional relaxation as a decisive parameter. A hydrodynamic model fails to\nreproduce experimental spectra for the low pressures of 0.5-1 bar, but yields\nvery good agreement ($< 1$\\%) in the pressure range $2-4$ bar. While these\nthree models have a different physical basis the internal molecular relaxation\nderived can for all three be described in terms of a bulk viscosity of $\\eta_b\n\\sim (6 \\pm 2) \\times 10^{-5}$ Pa$\\cdot$s. A 'rough-sphere' model, previously\nshown to be effective to describe light scattering in SF$_6$ gas, is not found\nto be suitable, likely in view of the non-sphericity and asymmetry of the N-N-O\nstructured linear polyatomic molecule.\n",
"title": "Rayleigh-Brillouin light scattering spectroscopy of nitrous oxide (N$_2$O)"
} | null | null | null | null | true | null | 20164 | null | Default | null | null |
null | {
"abstract": " The Sachdev-Ye-Kitaev (SYK) model is a concrete solvable model to study\nnon-Fermi liquid properties, holographic duality and maximally chaotic\nbehavior. In this work, we consider a generalization of the SYK model that\ncontains two SYK models with different number of Majorana modes coupled by\nquadratic terms. This model is also solvable, and the solution shows a\nzero-temperature quantum phase transition between two non-Fermi liquid chaotic\nphases. This phase transition is driven by tuning the ratio of two mode\nnumbers, and a Fermi liquid non-chaotic phase sits at the critical point with\nequal mode number. At finite temperature, the Fermi liquid phase expands to a\nfinite regime. More intriguingly, a different non-Fermi liquid phase emerges at\nfinite temperature. We characterize the phase diagram in term of the spectral\nfunction, the Lyapunov exponent and the entropy. Our results illustrate a\nconcrete example of quantum phase transition and critical regime between two\nnon-Fermi liquid phases.\n",
"title": "Competition between Chaotic and Non-Chaotic Phases in a Quadratically Coupled Sachdev-Ye-Kitaev Model"
} | null | null | null | null | true | null | 20165 | null | Default | null | null |
null | {
"abstract": " The Ward identities associated with spontaneously broken symmetries can be\nsaturated by Goldstone bosons. However, when space-time symmetries are broken,\nthe number of Goldstone bosons necessary to non-linearly realize the symmetry\ncan be less than the number of broken generators. The loss of Goldstones may be\ndue to a redundancy or the generation of a gap. This phenomena is called an\nInverse Higgs Mechanism (IHM). However, there are cases when a Goldstone boson\nassociated with a broken generator does not appear in the low energy theory\ndespite the lack of the existence of an associated IHM. In this paper we will\nshow that in such cases the relevant broken symmetry can be realized, without\nthe aid of an associated Goldstone, if there exists a proper set of operator\nconstraints, which we call a Dynamical Inverse Higgs Mechanism (DIHM). We\nconsider the spontaneous breaking of boosts, rotations and conformal\ntransformations in the context of Fermi liquids, finding three possible paths\nto symmetry realization: pure Goldstones, no Goldstones and DIHM, or some\nmixture thereof. We show that in the two dimensional degenerate electron system\nthe DIHM route is the only consistent way to realize spontaneously broken\nboosts and dilatations, while in three dimensions these symmetries could just\nas well be realized via the inclusion of non-derivatively coupled Goldstone\nbosons. We have present the action, including the leading order\nnon-linearities, for the rotational Goldstone (angulon), and discuss the\nconstraint associated with the possible DIHM that would need to be imposed to\nremove it from the spectrum. Finally we discuss the conditions under which\nGoldstone bosons are non-derivatively coupled, a necessary condition for the\nexistence of a Dynamical Inverse Higgs Constraint (DIHC), generalizaing the\nresults for Vishwanath and Wantanabe.\n",
"title": "Symmetry Realization via a Dynamical Inverse Higgs Mechanism"
} | null | null | null | null | true | null | 20166 | null | Default | null | null |
null | {
"abstract": " We report on the growth of epitaxial Sr2RuO4 films using a hybrid molecular\nbeam epitaxy approach in which a volatile precursor containing RuO4 is used to\nsupply ruthenium and oxygen. The use of the precursor overcomes a number of\nissues encountered in traditional MBE that uses elemental metal sources.\nPhase-pure, epitaxial thin films of Sr2RuO4 are obtained. At high substrate\ntemperatures, growth proceeds in a layer-by-layer mode with intensity\noscillations observed in reflection high-energy electron diffraction. Films are\nof high structural quality, as documented by x-ray diffraction, atomic force\nmicroscopy, and transmission electron microscopy. The method should be suitable\nfor the growth of other complex oxides containing ruthenium, opening up\nopportunities to investigate thin films that host rich exotic ground states.\n",
"title": "Growth of strontium ruthenate films by hybrid molecular beam epitaxy"
} | null | null | null | null | true | null | 20167 | null | Default | null | null |
null | {
"abstract": " The antiproton-to-proton ratio in the cosmic-ray spectrum is a sensitive\nprobe of new physics. Using recent measurements of the cosmic-ray antiproton\nand proton fluxes in the energy range of 1-1000 GeV, we study the contribution\nto the $\\bar{p}/p$ ratio from secondary antiprotons that are produced and\nsubsequently accelerated within individual supernova remnants. We consider\nseveral well-motivated models for cosmic-ray propagation in the interstellar\nmedium and marginalize our results over the uncertainties related to the\nantiproton production cross section and the time-, charge-, and\nenergy-dependent effects of solar modulation. We find that the increase in the\n$\\bar{p}/p$ ratio observed at rigidities above $\\sim$ 100 GV cannot be\naccounted for within the context of conventional cosmic-ray propagation models,\nbut is consistent with scenarios in which cosmic-ray antiprotons are produced\nand subsequently accelerated by shocks within a given supernova remnant. In\nlight of this, the acceleration of secondary cosmic rays in supernova remnants\nis predicted to substantially contribute to the cosmic-ray positron spectrum,\naccounting for a significant fraction of the observed positron excess.\n",
"title": "Possible Evidence for the Stochastic Acceleration of Secondary Antiprotons by Supernova Remnants"
} | null | null | null | null | true | null | 20168 | null | Default | null | null |
null | {
"abstract": " The precise localization of the repeating fast radio burst (FRB 121102) has\nprovided the first unambiguous association (chance coincidence probability\n$p\\lesssim3\\times10^{-4}$) of an FRB with an optical and persistent radio\ncounterpart. We report on optical imaging and spectroscopy of the counterpart\nand find that it is an extended ($0.6^{\\prime\\prime}-0.8^{\\prime\\prime}$)\nobject displaying prominent Balmer and [OIII] emission lines. Based on the\nspectrum and emission line ratios, we classify the counterpart as a\nlow-metallicity, star-forming, $m_{r^\\prime} = 25.1$ AB mag dwarf galaxy at a\nredshift of $z=0.19273(8)$, corresponding to a luminosity distance of 972 Mpc.\nFrom the angular size, the redshift, and luminosity, we estimate the host\ngalaxy to have a diameter $\\lesssim4$ kpc and a stellar mass of\n$M_*\\sim4-7\\times 10^{7}\\,M_\\odot$, assuming a mass-to-light ratio between 2 to\n3$\\,M_\\odot\\,L_\\odot^{-1}$. Based on the H$\\alpha$ flux, we estimate the star\nformation rate of the host to be $0.4\\,M_\\odot\\,\\mathrm{yr^{-1}}$ and a\nsubstantial host dispersion measure depth $\\lesssim 324\\,\\mathrm{pc\\,cm^{-3}}$.\nThe net dispersion measure contribution of the host galaxy to FRB 121102 is\nlikely to be lower than this value depending on geometrical factors. We show\nthat the persistent radio source at FRB 121102's location reported by Marcote\net al (2017) is offset from the galaxy's center of light by $\\sim$200 mas and\nthe host galaxy does not show optical signatures for AGN activity. If FRB\n121102 is typical of the wider FRB population and if future interferometric\nlocalizations preferentially find them in dwarf galaxies with low metallicities\nand prominent emission lines, they would share such a preference with long\ngamma ray bursts and superluminous supernovae.\n",
"title": "The Host Galaxy and Redshift of the Repeating Fast Radio Burst FRB 121102"
} | null | null | null | null | true | null | 20169 | null | Default | null | null |
null | {
"abstract": " Thicket density is a new measure of the complexity of a set system, having\nthe same relationship to stable formulas that VC density has to NIP formulas.\nIt satisfies a Sauer-Shelah type dichotomy that has applications in both model\ntheory and the theory of algorithms\n",
"title": "Thicket Density"
} | null | null | null | null | true | null | 20170 | null | Default | null | null |
null | {
"abstract": " A $(\\gamma,n)$-gonal pair is a pair $(S,f)$, where $S$ is a closed Riemann\nsurface and $f:S \\to R$ is a degree $n$ holomorphic map onto a closed Riemann\nsurface $R$ of genus $\\gamma$. If the signature of $(S,f)$ is of hyperbolic\ntype, then there is pair $(\\Gamma,G)$, called an uniformization of $(S,f)$,\nwhere $G$ is a Fuchsian group acting on the unit disc ${\\mathbb D}$ containing\n$\\Gamma$ as an index $n$ subgroup, so that $f$ is induced by the inclusion of\n$\\Gamma <G$. The uniformization is uniquely determined by $(S,f)$, up to\nconjugation by holomorphic automorphisms of ${\\mathbb D}$, and it permits to\nprovide natural complex orbifold structures on the Hurwitz spaces parametrizing\n(twisted) isomorphic classes of pairs topologically equivalent to $(S,f)$. In\norder to produce certain compactifications of these Hurwitz spaces, one needs\nto consider the so called stable $(\\gamma,n)$-gonal pairs, which are natural\ngeometrical deformations of $(\\gamma,n)$-gonal pairs. Due to the above, it\nseems interesting to search for uniformizations of stable $(\\gamma,n)$-gonal\npairs, in terms of certain class of Kleinian groups. In this paper we review\nsuch uniformizations by using noded Fuchsian groups, which are (geometric)\nlimits of quasiconformal deformations of Fuchsian groups, and which provide\nuniformizations of stable Riemann orbifolds. These uniformizations permit to\nobtain a compactification of the Hurwitz spaces with a complex orbifold\nstructure, these being quotients of the augmented Teichmüller space of $G$ by\na suitable finite index subgroup of its modular group.\n",
"title": "Uniformizations of stable $(γ,n)$-gonal Riemann surfaces"
} | null | null | null | null | true | null | 20171 | null | Default | null | null |
null | {
"abstract": " Online advertising is progressively moving towards a programmatic model in\nwhich ads are matched to actual interests of individuals collected as they\nbrowse the web. Letting the huge debate around privacy aside, a very important\nquestion in this area, for which little is known, is: How much do advertisers\npay to reach an individual? In this study, we develop a first of its kind\nmethodology for computing exactly that -- the price paid for a web user by the\nad ecosystem -- and we do that in real time. Our approach is based on tapping\non the Real Time Bidding (RTB) protocol to collect cleartext and encrypted\nprices for winning bids paid by advertisers in order to place targeted ads. Our\nmain technical contribution is a method for tallying winning bids even when\nthey are encrypted. We achieve this by training a model using as ground truth\nprices obtained by running our own \"probe\" ad-campaigns. We design our\nmethodology through a browser extension and a back-end server that provides it\nwith fresh models for encrypted bids. We validate our methodology using a one\nyear long trace of 1600 mobile users and demonstrate that it can estimate a\nuser's advertising worth with more than 82% accuracy.\n",
"title": "If you are not paying for it, you are the product: How much do advertisers pay to reach you?"
} | null | null | [
"Computer Science"
]
| null | true | null | 20172 | null | Validated | null | null |
null | {
"abstract": " The quantum nature of light-matter interactions in a circularly polarized\nvacuum field was probed by spontaneous emission from quantum dots in\nthree-dimensional chiral photonic crystals. Due to the circularly polarized\neigenmodes along the helical axis in the GaAs-based mirror-asymmetric\nstructures we studied, we observed highly circularly polarized emission from\nthe quantum dots. Both spectroscopic and time-resolved measurements confirmed\nthat the obtained circularly polarized light was influenced by a large\ndifference in the photonic density of states between the orthogonal components\nof the circular polarization in the vacuum field.\n",
"title": "Circularly polarized vacuum field in three-dimensional chiral photonic crystals probed by quantum dot emission"
} | null | null | null | null | true | null | 20173 | null | Default | null | null |
null | {
"abstract": " The critical properties of the single-crystalline semiconducting ferromagnet\nCrGeTe$_3$ were investigated by bulk dc magnetization around the paramagnetic\nto ferromagnetic phase transition. Critical exponents $\\beta = 0.200\\pm0.003$\nwith critical temperature $T_c = 62.65\\pm0.07$ K and $\\gamma = 1.28\\pm0.03$\nwith $T_c = 62.75\\pm0.06$ K are obtained by the Kouvel-Fisher method whereas\n$\\delta = 7.96\\pm0.01$ is obtained by the critical isotherm analysis at $T_c =\n62.7$ K. These critical exponents obey the Widom scaling relation $\\delta =\n1+\\gamma/\\beta$, indicating self-consistency of the obtained values. With these\ncritical exponents the isotherm $M(H)$ curves below and above the critical\ntemperatures collapse into two independent universal branches, obeying the\nsingle scaling equation $m = f_\\pm(h)$, where $m$ and $h$ are renormalized\nmagnetization and field, respectively. The determined exponents match well with\nthose calculated from the results of renormalization group approach for a\ntwo-dimensional Ising system coupled with long-range interaction between spins\ndecaying as $J(r)\\approx r^{-(d+\\sigma)}$ with $\\sigma=1.52$.\n",
"title": "Critical behavior of quasi-two-dimensional semiconducting ferromagnet CrGeTe$_3$"
} | null | null | null | null | true | null | 20174 | null | Default | null | null |
null | {
"abstract": " We implement a scale-free version of the pivot algorithm and use it to sample\npairs of three-dimensional self-avoiding walks, for the purpose of efficiently\ncalculating an observable that corresponds to the probability that pairs of\nself-avoiding walks remain self-avoiding when they are concatenated. We study\nthe properties of this Markov chain, and then use it to find the critical\nexponent $\\gamma$ for self-avoiding walks to unprecedented accuracy. Our final\nestimate for $\\gamma$ is $1.15695300(95)$.\n",
"title": "Scale-free Monte Carlo method for calculating the critical exponent $γ$ of self-avoiding walks"
} | null | null | null | null | true | null | 20175 | null | Default | null | null |
null | {
"abstract": " M-convex functions, which are a generalization of valuated matroids, play a\ncentral role in discrete convex analysis. Quadratic M-convex functions\nconstitute a basic and important subclass of M-convex functions, which has a\nclose relationship with phylogenetics as well as valued constraint satisfaction\nproblems. In this paper, we consider the quadratic M-convexity testing problem\n(QMCTP), which is the problem of deciding whether a given quadratic function on\n$\\{0,1\\}^n$ is M-convex. We show that QMCTP is co-NP-complete in general, but\nis polynomial-time solvable under a natural assumption. Furthermore, we propose\nan $O(n^2)$-time algorithm for solving QMCTP in the polynomial-time solvable\ncase.\n",
"title": "The quadratic M-convexity testing problem"
} | null | null | null | null | true | null | 20176 | null | Default | null | null |
null | {
"abstract": " Bayesian matrix factorization (BMF) is a powerful tool for producing low-rank\nrepresentations of matrices and for predicting missing values and providing\nconfidence intervals. Scaling up the posterior inference for massive-scale\nmatrices is challenging and requires distributing both data and computation\nover many workers, making communication the main computational bottleneck.\nEmbarrassingly parallel inference would remove the communication needed, by\nusing completely independent computations on different data subsets, but it\nsuffers from the inherent unidentifiability of BMF solutions. We introduce a\nhierarchical decomposition of the joint posterior distribution, which couples\nthe subset inferences, allowing for embarrassingly parallel computations in a\nsequence of at most three stages. Using an efficient approximate\nimplementation, we show improvements empirically on both real and simulated\ndata. Our distributed approach is able to achieve a speed-up of almost an order\nof magnitude over the full posterior, with a negligible effect on predictive\naccuracy. Our method outperforms state-of-the-art embarrassingly parallel MCMC\nmethods in accuracy, and achieves results competitive to other available\ndistributed and parallel implementations of BMF.\n",
"title": "Distributed Bayesian Matrix Factorization with Limited Communication"
} | null | null | null | null | true | null | 20177 | null | Default | null | null |
null | {
"abstract": " The successful deployment of safe and trustworthy Connected and Autonomous\nVehicles (CAVs) will highly depend on the ability to devise robust and\neffective security solutions to resist sophisticated cyber attacks and patch up\ncritical vulnerabilities. Pseudonym Public Key Infrastructure (PPKI) is a\npromising approach to secure vehicular networks as well as ensure data and\nlocation privacy, concealing the vehicles' real identities. Nevertheless,\npseudonym distribution and management affect PPKI scalability due to the\nsignificant number of digital certificates required by a single vehicle. In\nthis paper, we focus on the certificate revocation process and propose a\nversatile and low-complexity framework to facilitate the distribution of the\nCertificate Revocation Lists (CRL) issued by the Certification Authority (CA).\nCRL compression is achieved through optimized Bloom filters, which guarantee a\nconsiderable overhead reduction with a configurable rate of false positives.\nOur results show that the distribution of compressed CRLs can significantly\nenhance the system scalability without increasing the complexity of the\nrevocation process.\n",
"title": "Optimized Certificate Revocation List Distribution for Secure V2X Communications"
} | null | null | [
"Computer Science"
]
| null | true | null | 20178 | null | Validated | null | null |
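
The CRL compression in record 20178 relies on Bloom filters. A minimal, self-contained Bloom filter sketch is given below to illustrate the membership-with-false-positives trade-off; the sizing, hash construction, and the optimization described in the paper are not reproduced, and the certificate serial numbers are made up.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false-positive rate."""
    def __init__(self, num_bits=8192, num_hashes=5):
        self.m, self.k = num_bits, num_hashes
        self.bits = bytearray(num_bits)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

# Toy revocation list: revoked pseudonym certificate serial numbers.
crl = BloomFilter()
for serial in ("CERT-000017", "CERT-000420", "CERT-001337"):
    crl.add(serial)

print("CERT-000420" in crl)   # True: a revoked certificate is always flagged
print("CERT-999999" in crl)   # usually False; True would be a false positive
```
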
null | {
"abstract": " The XENON1T experiment is the most recent stage of the XENON Dark Matter\nSearch, aiming for the direct detection of Weakly Interacting Massive Particles\n(WIMPs). To reach its projected sensitivity, the background has to be reduced\nby two orders of magnitude compared to its predecessor XENON100. This requires\na water Cherenkov muon veto surrounding the XENON1T TPC, both to shield\nexternal backgrounds and to tag muon-induced energetic neutrons through\ndetection of a passing muon or the secondary shower induced by a muon\ninteracting in the surrounding rock. The muon veto is instrumented with $84$\n$8\"$ PMTs with high quantum efficiency (QE) in the Cherenkov regime and the\nwalls of the watertank are clad with the highly reflective DF2000MA foil by 3M.\nHere, we present a study of the reflective properties of this foil, as well as\nthe measurement of its wavelength shifting (WLS) properties. Further, we\npresent the impact of reflectance and WLS on the detection efficiency of the\nmuon veto, using a Monte Carlo simulation carried out with Geant4. The\nmeasurements yield a specular reflectance of $\\approx100\\%$ for wavelengths\nlarger than $400\\,$nm, while $\\approx90\\%$ of the incoming light below\n$370\\,$nm is absorbed by the foil. Approximately $3-7.5\\%$ of the light hitting\nthe foil within the wavelength range $250\\,$nm $\\leq \\lambda \\leq 390\\,$nm is\nused for the WLS process. The intensity of the emission spectrum of the WLS\nlight is slightly dependent on the absorbed wavelength and shows the shape of a\nrotational-vibrational fluorescence spectrum, peaking at around $\\lambda\n\\approx 420\\,$nm. Adjusting the reflectance values to the measured ones in the\nMonte Carlo simulation originally used for the muon veto design, the veto\ndetection efficiency remains unchanged. Including the wavelength shifting in\nthe Monte Carlo simulation leads to an increase of the efficiency of\napproximately $0.5\\%$.\n",
"title": "Optical response of highly reflective film used in the water Cherenkov muon veto of the XENON1T dark matter experiment"
} | null | null | null | null | true | null | 20179 | null | Default | null | null |
null | {
"abstract": " We argue that the standard graph Laplacian is preferable for spectral\npartitioning of signed graphs compared to the signed Laplacian. Simple examples\ndemonstrate that partitioning based on signs of components of the leading\neigenvectors of the signed Laplacian may be meaningless, in contrast to\npartitioning based on the Fiedler vector of the standard graph Laplacian for\nsigned graphs. We observe that negative eigenvalues are beneficial for spectral\npartitioning of signed graphs, making the Fiedler vector easier to compute.\n",
"title": "On spectral partitioning of signed graphs"
} | null | null | [
"Computer Science",
"Mathematics",
"Statistics"
]
| null | true | null | 20180 | null | Validated | null | null |
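
To make the comparison in record 20180 concrete, the sketch below computes a spectral bipartition of a small signed graph from the standard Laplacian L = D - A, where D holds the (possibly negative) signed row sums of A. It is a generic numpy illustration under that reading of the standard Laplacian and of its leading eigenvector; the paper's exact conventions may differ.

```python
import numpy as np

def spectral_partition_signed(adjacency):
    """Partition by the eigenvector of the smallest eigenvalue of L = D - A,
    with D the diagonal of signed row sums. For signed graphs this eigenvalue
    can be negative, i.e. below the trivial zero of the constant vector."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    eigvals, eigvecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
    leading = eigvecs[:, 0]
    return eigvals, np.where(leading >= 0, 1, -1)

# Two friendly (+1) triangles {0,1,2} and {3,4,5} joined by hostile (-1) edges.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
for i, j in [(2, 3), (1, 4)]:
    A[i, j] = A[j, i] = -1.0

eigvals, labels = spectral_partition_signed(A)
print(eigvals[0] < 0)   # True: the smallest eigenvalue is negative here
print(labels)           # separates {0,1,2} from {3,4,5} (up to a global sign)
```
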
null | {
"abstract": " Public space utilization is crucial for urban developers to understand how\nefficient a place is being occupied in order to improve existing or future\ninfrastructures. In a smart cities approach, implementing public space\nmonitoring with Internet-of-Things (IoT) sensors appear to be a viable\nsolution. However, choice of sensors often is a challenging problem and often\nlinked with scalability, coverage, energy consumption, accuracy, and privacy.\nTo get the most from low cost sensor with aforementioned design in mind, we\nproposed data processing modules for capturing public space utilization with\nRenewable Wireless Sensor Network (RWSN) platform using pyroelectric infrared\n(PIR) and analog sound sensor. We first proposed a calibration process to\nremove false alarm of PIR sensor due to the impact of weather and environment.\nWe then demonstrate how the sounds sensor can be processed to provide various\ninsight of a public space. Lastly, we fused both sensors and study a particular\npublic space utilization based on one month data to unveil its usage.\n",
"title": "Sensor Fusion for Public Space Utilization Monitoring in a Smart City"
} | null | null | null | null | true | null | 20181 | null | Default | null | null |
null | {
"abstract": " Two families of symplectic methods specially designed for second-order\ntime-dependent linear systems are presented. Both are obtained from the Magnus\nexpansion of the corresponding first-order equation, but otherwise they differ\nin significant aspects. The first family is addressed to problems with low to\nmoderate dimension, whereas the second is more appropriate when the dimension\nis large, in particular when the system corresponds to a linear wave equation\npreviously discretised in space. Several numerical experiments illustrate the\nmain features of the new schemes.\n",
"title": "Symplectic integrators for second-order linear non-autonomous equations"
} | null | null | null | null | true | null | 20182 | null | Default | null | null |
null | {
"abstract": " The recent breakthroughs of deep reinforcement learning (DRL) technique in\nAlpha Go and playing Atari have set a good example in handling large state and\nactions spaces of complicated control problems. The DRL technique is comprised\nof (i) an offline deep neural network (DNN) construction phase, which derives\nthe correlation between each state-action pair of the system and its value\nfunction, and (ii) an online deep Q-learning phase, which adaptively derives\nthe optimal action and updates value estimates. In this paper, we first present\nthe general DRL framework, which can be widely utilized in many applications\nwith different optimization objectives. This is followed by the introduction of\nthree specific applications: the cloud computing resource allocation problem,\nthe residential smart grid task scheduling problem, and building HVAC system\noptimal control problem. The effectiveness of the DRL technique in these three\ncyber-physical applications have been validated. Finally, this paper\ninvestigates the stochastic computing-based hardware implementations of the DRL\nframework, which consumes a significant improvement in area efficiency and\npower consumption compared with binary-based implementation counterparts.\n",
"title": "Deep Reinforcement Learning: Framework, Applications, and Embedded Implementations"
} | null | null | [
"Computer Science"
]
| null | true | null | 20183 | null | Validated | null | null |
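
The online deep Q-learning phase described in record 20183 rests on the standard one-step Q-learning update; a minimal tabular version is sketched below as a reference point. The deep-network, stochastic-computing, and application-specific parts of the framework are not reproduced, and the toy environment is made up.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard one-step Q-learning update:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    td_target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

# Toy 2-state, 2-action problem: action 1 in state 0 yields reward 1 and moves
# to state 1; everything else yields 0. State 1 always returns to state 0.
rng = np.random.default_rng(0)
Q = np.zeros((2, 2))
for _ in range(500):
    a = int(rng.integers(2))               # pure exploration, for brevity
    r, s_next = (1.0, 1) if a == 1 else (0.0, 0)
    Q = q_learning_update(Q, 0, a, r, s_next)
    Q = q_learning_update(Q, 1, int(rng.integers(2)), 0.0, 0)
print(int(Q[0].argmax()))                  # learns to prefer action 1 in state 0
```
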
null | {
"abstract": " In a Dirac nodal line semimetal, the bulk conduction and valence bands touch\nat extended lines in the Brillouin zone. To date, most of the theoretically\npredicted and experimentally discovered nodal lines derive from the bulk bands\nof two- and three-dimensional materials. Here, based on combined angle-resolved\nphotoemission spectroscopy measurements and first-principles calculations, we\nreport the discovery of node-line-like surface states on the (001) surface of\nLaBi. These bands derive from the topological surface states of LaBi and bridge\nthe band gap opened by spin-orbit coupling and band inversion. Our\nfirst-principles calculations reveal that these \"nodal lines\" have a tiny gap,\nwhich is beyond typical experimental resolution. These results may provide\nimportant information to understand the extraordinary physical properties of\nLaBi, such as the extremely large magnetoresistance and resistivity plateau.\n",
"title": "Experimental observation of node-line-like surface states in LaBi"
} | null | null | null | null | true | null | 20184 | null | Default | null | null |
null | {
"abstract": " Let $n$ be a positive multiple of $4$. We establish an asymptotic formula for\nthe number of rational points of bounded height on singular cubic hypersurfaces\n$S_n$ defined by $$ x^3=(y_1^2 + \\cdots + y_n^2)z . $$ This result is new in\ntwo aspects: first, it can be viewed as a modest start on the study of density\nof rational points on those singular cubic hypersurfaces which are not covered\nby the classical theorems of Davenport or Heath-Brown; second, it proves\nManin's conjecture for singular cubic hypersurfaces $S_n$ defined above.\n",
"title": "Manin's conjecture for a class of singular cubic hypersurfaces"
} | null | null | null | null | true | null | 20185 | null | Default | null | null |
null | {
"abstract": " The process of designing neural architectures requires expert knowledge and\nextensive trial and error. While automated architecture search may simplify\nthese requirements, the recurrent neural network (RNN) architectures generated\nby existing methods are limited in both flexibility and components. We propose\na domain-specific language (DSL) for use in automated architecture search which\ncan produce novel RNNs of arbitrary depth and width. The DSL is flexible enough\nto define standard architectures such as the Gated Recurrent Unit and Long\nShort Term Memory and allows the introduction of non-standard RNN components\nsuch as trigonometric curves and layer normalization. Using two different\ncandidate generation techniques, random search with a ranking function and\nreinforcement learning, we explore the novel architectures produced by the RNN\nDSL for language modeling and machine translation domains. The resulting\narchitectures do not follow human intuition yet perform well on their targeted\ntasks, suggesting the space of usable RNN architectures is far larger than\npreviously assumed.\n",
"title": "A Flexible Approach to Automated RNN Architecture Generation"
} | null | null | null | null | true | null | 20186 | null | Default | null | null |
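
The DSL in record 20186 is not spelled out in the abstract; purely as an illustration of the idea, the sketch below represents candidate RNN cells as small expression trees over a handful of operators and evaluates one gated candidate with numpy. The operator names, the parameter naming scheme, and the example architecture are all hypothetical.

```python
import numpy as np

# Each node is a (op, *children) tuple; leaves are "x" (input) or "h" (state).
def evaluate(node, x, h, params):
    if node == "x":
        return x
    if node == "h":
        return h
    op, *args = node
    if op == "MM":                         # learned linear map, named in args[1]
        return params[args[1]] @ evaluate(args[0], x, h, params)
    if op == "Add":
        return evaluate(args[0], x, h, params) + evaluate(args[1], x, h, params)
    if op == "Mul":                        # elementwise gating
        return evaluate(args[0], x, h, params) * evaluate(args[1], x, h, params)
    if op == "Tanh":
        return np.tanh(evaluate(args[0], x, h, params))
    if op == "Sigmoid":
        return 1.0 / (1.0 + np.exp(-evaluate(args[0], x, h, params)))
    raise ValueError(f"unknown op: {op}")

# A reset-gate-style candidate: h' = tanh(W1 x + W2 (g * h)),
# with g = sigmoid(W3 x + W4 h).
gate = ("Sigmoid", ("Add", ("MM", "x", "W3"), ("MM", "h", "W4")))
cell = ("Tanh", ("Add", ("MM", "x", "W1"), ("MM", ("Mul", gate, "h"), "W2")))

dim = 4
rng = np.random.default_rng(0)
params = {k: rng.normal(scale=0.5, size=(dim, dim))
          for k in ("W1", "W2", "W3", "W4")}
x, h = rng.normal(size=dim), np.zeros(dim)
for _ in range(3):                         # unroll the candidate for 3 steps
    h = evaluate(cell, x, h, params)
print(h.round(3))
```
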
null | {
"abstract": " Traditionally, most complex intelligence architectures are extremely\nnon-convex, which could not be well performed by convex optimization. However,\nthis paper decomposes complex structures into three types of nodes: operators,\nalgorithms and functions. Iteratively, propagating from node to node along\nedge, we prove that \"regarding the tree-structured neural graph, it is nearly\nconvex in each variable, when the other variables are fixed.\" In fact, the\nnon-convex properties stem from circles and functions, which could be\ntransformed to be convex with our proposed \\textit{\\textbf{scale mechanism}}.\nExperimentally, we justify our theoretical analysis by two practical\napplications.\n",
"title": "Convexification of Neural Graph"
} | null | null | null | null | true | null | 20187 | null | Default | null | null |
null | {
"abstract": " A central challenge in modern condensed matter physics is developing the\ntools for understanding nontrivial yet unordered states of matter. One\nimportant idea to emerge in this context is that of a \"pseudogap\": the fact\nthat under appropriate circumstances the normal state displays a suppression of\nthe single particle spectral density near the Fermi level, reminiscent of the\ngaps seen in ordered states of matter. While these concepts arose in a solid\nstate context, it is now being explored in cold gases. This article reviews the\ncurrent experimental and theoretical understanding of the normal state of\nstrongly interacting Fermi gases, with particular focus on the phenomonology\nwhich is traditionally associated with the pseudogap.\n",
"title": "Pseudogaps in strongly interacting Fermi gases"
} | null | null | null | null | true | null | 20188 | null | Default | null | null |
null | {
"abstract": " We consider the path planning problem for a 2-link robot amidst polygonal\nobstacles. Our robot is parametrizable by the lengths $\\ell_1, \\ell_2>0$ of its\ntwo links, the thickness $\\tau \\ge 0$ of the links, and an angle $\\kappa$ that\nconstrains the angle between the 2 links to be strictly greater than $\\kappa$.\nThe case $\\tau>0$ and $\\kappa \\ge 0$ corresponds to \"thick non-crossing\"\nrobots. This results in a novel 4DOF configuration space ${\\mathbb R}^2\\times\n({\\mathbb T}\\setminus\\Delta(\\kappa))$ where ${\\mathbb T}$ is the torus and\n$\\Delta(\\kappa)$ the diagonal band of width $\\kappa$. We design a\nresolution-exact planner for this robot using the framework of Soft Subdivision\nSearch (SSS). First, we provide an analysis of the space of forbidden angles,\nleading to a soft predicate for classifying configuration boxes. We further\nexploit the T/R splitting technique which was previously introduced for\nself-crossing thin 2-link robots. Our open-source implementation in Core\nLibrary achieves real-time performance for a suite of combinatorially\nnon-trivial obstacle sets. Experimentally, our algorithm is significantly\nbetter than any of the state-of-art sampling algorithms we looked at, in timing\nand in success rate.\n",
"title": "Resolution-Exact Planner for Thick Non-Crossing 2-Link Robots"
} | null | null | null | null | true | null | 20189 | null | Default | null | null |
null | {
"abstract": " Heterogeneous information networks (HINs) are ubiquitous in real-world\napplications. In the meantime, network embedding has emerged as a convenient\ntool to mine and learn from networked data. As a result, it is of interest to\ndevelop HIN embedding methods. However, the heterogeneity in HINs introduces\nnot only rich information but also potentially incompatible semantics, which\nposes special challenges to embedding learning in HINs. With the intention to\npreserve the rich yet potentially incompatible information in HIN embedding, we\npropose to study the problem of comprehensive transcription of heterogeneous\ninformation networks. The comprehensive transcription of HINs also provides an\neasy-to-use approach to unleash the power of HINs, since it requires no\nadditional supervision, expertise, or feature engineering. To cope with the\nchallenges in the comprehensive transcription of HINs, we propose the HEER\nalgorithm, which embeds HINs via edge representations that are further coupled\nwith properly-learned heterogeneous metrics. To corroborate the efficacy of\nHEER, we conducted experiments on two large-scale real-words datasets with an\nedge reconstruction task and multiple case studies. Experiment results\ndemonstrate the effectiveness of the proposed HEER model and the utility of\nedge representations and heterogeneous metrics. The code and data are available\nat this https URL.\n",
"title": "Easing Embedding Learning by Comprehensive Transcription of Heterogeneous Information Networks"
} | null | null | null | null | true | null | 20190 | null | Default | null | null |
null | {
"abstract": " Dynamical phase transitions are crucial features of the fluctuations of\nstatistical systems, corresponding to boundaries between qualitatively\ndifferent mechanisms of maintaining unlikely values of dynamical observables\nover long periods of time. They manifest themselves in the form of\nnon-analyticities in the large deviation function of those observables. In this\npaper, we look at bulk-driven exclusion processes with open boundaries. It is\nknown that the standard asymmetric simple exclusion process exhibits a\ndynamical phase transition in the large deviations of the current of particles\nflowing through it. That phase transition has been described thanks to specific\ncalculation methods relying on the model being exactly solvable, but more\ngeneral methods have also been used to describe the extreme large deviations of\nthat current, far from the phase transition. We extend those methods to a large\nclass of models based on the ASEP, where we add arbitrary spatial\ninhomogeneities in the rates and short-range potentials between the particles.\nWe show that, as for the regular ASEP, the large deviation function of the\ncurrent scales differently with the size of the system if one considers very\nhigh or very low currents, pointing to the existence of a dynamical phase\ntransition between those two regimes: high current large deviations are\nextensive in the system size, and the typical states associated to them are\nCoulomb gases, which are correlated ; low current large deviations do not\ndepend on the system size, and the typical states associated to them are\nanti-shocks, consistently with a hydrodynamic behaviour. Finally, we illustrate\nour results numerically on a simple example, and we interpret the transition in\nterms of the current pushing beyond its maximal hydrodynamic value, as well as\nrelate it to the appearance of Tracy-Widom distributions in the relaxation\nstatistics of such models.\n",
"title": "Generic Dynamical Phase Transition in One-Dimensional Bulk-Driven Lattice Gases with Exclusion"
} | null | null | [
"Physics"
]
| null | true | null | 20191 | null | Validated | null | null |
null | {
"abstract": " Due to their capability to reduce turbulent transport in magnetized plasmas,\nunderstanding the dynamics of zonal flows is an important problem in the fusion\nprogramme. Since the pioneering work by Rosenbluth and Hinton in axisymmetric\ntokamaks, it is known that studying the linear and collisionless relaxation of\nzonal flow perturbations gives valuable information and physical insight.\nRecently, the problem has been investigated in stellarators and it has been\nfound that in these devices the relaxation process exhibits a characteristic\nfeature: a damped oscillation. The frequency of this oscillation might be a\nrelevant parameter in the regulation of turbulent transport, and therefore its\nefficient and accurate calculation is important. Although an analytical\nexpression can be derived for the frequency, its numerical evaluation is not\nsimple and has not been exploited systematically so far. Here, a numerical\nmethod for its evaluation is considered, and the results are compared with\nthose obtained by calculating the frequency from gyrokinetic simulations. This\n\"semianalytical\" approach for the determination of the zonal-flow frequency\nreveals accurate and faster than the one based on gyrokinetic simulations.\n",
"title": "Semianalytical calculation of the zonal-flow oscillation frequency in stellarators"
} | null | null | null | null | true | null | 20192 | null | Default | null | null |
null | {
"abstract": " We predict a geometric quantum phase shift of a moving electric dipole in the\npresence of an external magnetic field at a distance. On the basis of the\nLorentz-covariant field interaction approach, we show that a geometric phase\nappears under the condition that the dipole is moving in the field-free region,\nwhich is distinct from the topological He-McKellar-Wilkens phase generated by a\ndirect overlap of the dipole and the field. We discuss the experimental\nfeasibility of detecting this phase with atomic interferometry and argue that\ndetection of this phase would result in a deeper understanding of the locality\nin quantum electromagnetic interaction.\n",
"title": "Geometric phase of a moving dipole under a magnetic field at a distance"
} | null | null | [
"Physics"
]
| null | true | null | 20193 | null | Validated | null | null |
null | {
"abstract": " For well-generated complex reflection groups, Chapuy and Stump gave a simple\nproduct for a generating function counting reflection factorizations of a\nCoxeter element by their length. This is refined here to record the number of\nreflections used from each orbit of hyperplanes. The proof is case-by-case via\nthe classification of well-generated groups. It implies a new expression for\nthe Coxeter number, expressed via data coming from a hyperplane orbit; a\ncase-free proof of this due to J. Michel is included.\n",
"title": "A refined count of Coxeter element factorizations"
} | null | null | null | null | true | null | 20194 | null | Default | null | null |
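As a quick, self-contained sanity check of the kind of count refined in the abstract above: in type A the Coxeter element is an n-cycle in the symmetric group, and the classical Dénes formula gives n^(n-2) minimal factorizations into reflections (transpositions). The brute-force Python sketch below verifies this for n = 4; it does not reproduce the Chapuy-Stump product formula or the hyperplane-orbit refinement, and all names in it are invented for the illustration.

```python
from itertools import combinations, product

def compose(p, q):
    """Composition of permutations given as tuples of images: (p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

n = 4
identity = tuple(range(n))

# All transpositions of {0, ..., n-1}, i.e. the reflections of the type A_{n-1} group.
transpositions = []
for i, j in combinations(range(n), 2):
    t = list(identity)
    t[i], t[j] = t[j], t[i]
    transpositions.append(tuple(t))

coxeter = tuple((i + 1) % n for i in range(n))  # the n-cycle 0 -> 1 -> ... -> n-1 -> 0

# Count ordered factorizations of the n-cycle into n - 1 transpositions.
count = 0
for factors in product(transpositions, repeat=n - 1):
    acc = identity
    for t in factors:
        acc = compose(t, acc)
    if acc == coxeter:
        count += 1

print(count, "factorizations; Denes formula n^(n-2) =", n ** (n - 2))
```

Both printed numbers should be 16; the refined count in the paper additionally records which hyperplane orbit each reflection comes from, which this toy ignores.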
null | {
"abstract": " The magnetic properties of BaFe$_{2}$As$_{2}$(001) surface have been studied\nby using first-principles electronic structure calculations. We find that for\nAs-terminated surface the magnetic ground state of the top-layer FeAs is in the\nstaggered dimer antiferromagnetic (AFM) order, while for Ba-terminated surface\nthe collinear (single stripe) AFM order is the most stable. When a certain\ncoverage of Ba or K atoms are deposited onto the As-terminated surface, the\ncalculated energy differences among different AFM orders for the top-layer FeAs\non BaFe$_{2}$As$_{2}$(001) can be much reduced, indicating enhanced spin\nfluctuations. To identify the novel staggered dimer AFM order for the As\ntermination, we have simulated the scanning tunneling microscopy (STM) image\nfor this state, which shows a different $\\sqrt{2}\\times\\sqrt{2}$ pattern from\nthe case of half Ba coverage. Our results suggest: i) the magnetic properties\nof the top-layer FeAs on BaFe$_{2}$As$_{2}$(001) can be tuned effectively by\nsurface doping; ii) both the surface termination and the AFM order in the\ntop-layer FeAs can affect the STM image of BaFe$_{2}$As$_{2}$(001).\n",
"title": "Tuning the magnetism of the top-layer FeAs on BaFe$_{2}$As$_{2}$(001): First-principles study"
} | null | null | null | null | true | null | 20195 | null | Default | null | null |
null | {
"abstract": " We obtain an explicit error expansion for the solution of Backward Stochastic\nDifferential Equations (BSDEs) using the cubature on Wiener spaces method. The\nresult is proved under a mild strengthening of the assumptions needed for the\napplication of the cubature method. The explicit expansion can then be used to\nconstruct implementable higher order approximations via Richardson-Romberg\nextrapolation. To allow for an effective efficiency improvement of the\ninterpolated algorithm, we introduce an additional projection on sparse grids,\nand study the resulting complexity reduction. Numerical examples are provided\nto illustrate our results.\n",
"title": "Cubature methods to solve BSDEs: Error expansion and complexity control"
} | null | null | null | null | true | null | 20196 | null | Default | null | null |
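To make the Richardson-Romberg idea referenced in the abstract above concrete, here is a minimal, generic Python toy, not the paper's cubature/BSDE scheme: a weak Euler approximation whose bias in the time step is first order, combined at two step sizes so that the leading bias term cancels. The model problem and function names are chosen purely for illustration.

```python
import math

def euler_second_moment(n_steps: int) -> float:
    """Weak Euler approximation of E[X_1^2] for dX = X dW, X_0 = 1.

    The exact value is e; one can check that the Euler scheme yields
    (1 + dt)^(n_steps), so its bias admits an expansion in powers of dt.
    """
    dt = 1.0 / n_steps
    return (1.0 + dt) ** n_steps

exact = math.e
for n in (10, 20, 40):
    coarse = euler_second_moment(n)
    fine = euler_second_moment(2 * n)
    # Two-level Richardson-Romberg combination: cancels the O(dt) bias term,
    # leaving a remainder of order dt^2 (assuming the error expansion holds).
    extrapolated = 2.0 * fine - coarse
    print(f"n={n:3d}  coarse error={exact - coarse:.2e}  "
          f"extrapolated error={exact - extrapolated:.2e}")
```

The same two-level combination is what a Richardson-Romberg correction applies on top of the cubature scheme; the point of the error expansion in the paper is to justify that such a combination is legitimate and to control the resulting complexity.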
null | {
"abstract": " We combine $Spitzer$ and ground-based KMTNet microlensing observations to\nidentify and precisely measure an Earth-mass ($1.43^{+0.45}_{-0.32} M_\\oplus$)\nplanet OGLE-2016-BLG-1195Lb at $1.16^{+0.16}_{-0.13}$ AU orbiting a\n$0.078^{+0.016}_{-0.012} M_\\odot$ ultracool dwarf. This is the lowest-mass\nmicrolensing planet to date. At $3.91^{+0.42}_{-0.46}$ kpc, it is the third\nconsecutive case among the $Spitzer$ \"Galactic distribution\" planets toward the\nGalactic bulge that lies in the Galactic disk as opposed to the bulge itself,\nhinting at a skewed distribution of planets. Together with previous\nmicrolensing discoveries, the seven Earth-size planets orbiting the ultracool\ndwarf TRAPPIST-1, and the detection of disks around young brown dwarfs,\nOGLE-2016-BLG-1195Lb suggests that such planets might be common around\nultracool dwarfs. It therefore sheds light on the formation of both ultracool\ndwarfs and planetary systems at the limit of low-mass protoplanetary disks.\n",
"title": "An Earth-mass Planet in a 1-AU Orbit around an Ultracool Dwarf"
} | null | null | null | null | true | null | 20197 | null | Default | null | null |
null | {
"abstract": " This paper presents a class of new algorithms for distributed statistical\nestimation that exploit divide-and-conquer approach. We show that one of the\nkey benefits of the divide-and-conquer strategy is robustness, an important\ncharacteristic for large distributed systems. We establish connections between\nperformance of these distributed algorithms and the rates of convergence in\nnormal approximation, and prove non-asymptotic deviations guarantees, as well\nas limit theorems, for the resulting estimators. Our techniques are illustrated\nthrough several examples: in particular, we obtain new results for the\nmedian-of-means estimator, as well as provide performance guarantees for\ndistributed maximum likelihood estimation.\n",
"title": "Distributed Statistical Estimation and Rates of Convergence in Normal Approximation"
} | null | null | null | null | true | null | 20198 | null | Default | null | null |
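Since the abstract above uses the median-of-means estimator as a running example, here is a minimal sketch of that estimator in its plain, non-distributed textbook form, with invented function names; it is not the paper's algorithm, only the construction it builds on.

```python
import numpy as np

def median_of_means(x, n_blocks: int, seed: int = 0) -> float:
    """Median-of-means estimate of the mean of the sample `x`.

    The sample is shuffled, split into `n_blocks` (nearly) equal blocks, the
    empirical mean is taken within each block, and the median of the block
    means is returned. A handful of corrupted or extreme observations can
    spoil only a few block means, which the median then ignores.
    """
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# Heavy-tailed toy data: numpy's pareto draws from a Lomax distribution,
# whose mean for shape a = 2.5 (and unit scale) is 1 / (a - 1) = 2/3.
rng = np.random.default_rng(1)
sample = rng.pareto(2.5, size=10_000)
print("plain mean      :", sample.mean())
print("median of means :", median_of_means(sample, n_blocks=50))
```

In the distributed setting each machine plays the role of one block, which is why the divide-and-conquer strategy inherits this kind of robustness.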
null | {
"abstract": " The Atacama Large Millimeter/submilimeter Array (ALMA) recently revealed a\nset of nearly concentric gaps in the protoplanetary disk surrounding the young\nstar HL Tau. If these are carved by forming gas giants, this provides the first\nset of orbital initial conditions for planets as they emerge from their birth\ndisks. Using N-body integrations, we have followed the evolution of the system\nfor 5 Gyr to explore the possible outcomes. We find that HL Tau initial\nconditions scaled down to the size of typically observed exoplanet orbits\nnaturally produce several populations in the observed exoplanet sample. First,\nfor a plausible range of planetary masses, we can match the observed\neccentricity distribution of dynamically excited radial velocity giant planets\nwith eccentricities $>$ 0.2. Second, we roughly obtain the observed rate of hot\nJupiters around FGK stars. Finally, we obtain a large efficiency of planetary\nejections of $\\approx 2$ per HL Tau-like system, but the small fraction of\nstars observed to host giant planets makes it hard to match the rate of\nfree-floating planets inferred from microlensing observations. In view of\nupcoming GAIA results, we also provide predictions for the expected mutual\ninclination distribution, which is significantly broader than the absolute\ninclination distributions typically considered by previous studies.\n",
"title": "Connecting HL Tau to the Observed Exoplanet Sample"
} | null | null | null | null | true | null | 20199 | null | Default | null | null |
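For readers unfamiliar with how such long-term N-body experiments are set up, the sketch below uses the open-source rebound package (the abstract does not state which integrator the authors used). The stellar mass, planet masses and semi-major axes are placeholders, not values taken from the paper or the ALMA image, and the run length is a short demo rather than the paper's 5 Gyr integrations.

```python
import rebound

# Placeholder system loosely inspired by "planets in disk gaps": every number
# below is an assumption made for this illustration only.
sim = rebound.Simulation()          # default G = 1 units: masses in M_sun, distances in AU
sim.add(m=1.0)                      # central star
sim.add(m=1e-3, a=15.0, e=0.01)     # hypothetical giant planet in an inner gap
sim.add(m=1e-3, a=40.0, e=0.01)     # hypothetical giant planet in a middle gap
sim.add(m=5e-4, a=80.0, e=0.01)     # hypothetical lighter planet in an outer gap
sim.move_to_com()

sim.integrator = "whfast"           # symplectic integrator suited to long runs
sim.dt = 0.05 * sim.particles[1].P  # a small fraction of the innermost orbital period

sim.integrate(1.0e5)                # short demo run in code units, not 5 Gyr

for i, p in enumerate(sim.particles[1:], start=1):
    print(f"planet {i}: a = {p.a:.2f}, e = {p.e:.3f}")
```

Running many such realizations with varied masses and phases, and recording ejections and final eccentricities and inclinations, is the kind of ensemble the abstract describes.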
null | {
"abstract": " Endowing robots with the capability of assessing risk and making risk-aware\ndecisions is widely considered a key step toward ensuring safety for robots\noperating under uncertainty. But, how should a robot quantify risk? A natural\nand common approach is to consider the framework whereby costs are assigned to\nstochastic outcomes - an assignment captured by a cost random variable.\nQuantifying risk then corresponds to evaluating a risk metric, i.e., a mapping\nfrom the cost random variable to a real number. Yet, the question of what\nconstitutes a \"good\" risk metric has received little attention within the\nrobotics community. The goal of this paper is to explore and partially address\nthis question by advocating axioms that risk metrics in robotics applications\nshould satisfy in order to be employed as rational assessments of risk. We\ndiscuss general representation theorems that precisely characterize the class\nof metrics that satisfy these axioms (referred to as distortion risk metrics),\nand provide instantiations that can be used in applications. We further discuss\npitfalls of commonly used risk metrics in robotics, and discuss additional\nproperties that one must consider in sequential decision making tasks. Our hope\nis that the ideas presented here will lead to a foundational framework for\nquantifying risk (and hence safety) in robotics applications.\n",
"title": "How Should a Robot Assess Risk? Towards an Axiomatic Theory of Risk in Robotics"
} | null | null | [
"Computer Science"
]
| null | true | null | 20200 | null | Validated | null | null |
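As a concrete companion to the abstract above: Conditional Value-at-Risk (CVaR) is one standard member of the distortion family it discusses, and evaluating it on sampled costs is straightforward. The sketch below is a generic empirical estimate with invented names, not an implementation prescribed by the paper.

```python
import numpy as np

def empirical_cvar(costs, alpha: float = 0.9) -> float:
    """Empirical Conditional Value-at-Risk of a sample of scalar costs.

    CVaR_alpha is (roughly) the mean cost over the worst (1 - alpha) fraction
    of outcomes; this simple tail-average estimator ignores edge cases at atoms
    of the distribution, which is fine for an illustration.
    """
    costs = np.asarray(costs, dtype=float)
    var = np.quantile(costs, alpha)   # empirical Value-at-Risk at level alpha
    return float(costs[costs >= var].mean())

# Toy example: heavy-tailed "collision cost" samples for a hypothetical plan.
rng = np.random.default_rng(0)
plan_costs = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print("expected cost:", plan_costs.mean())
print("CVaR at 0.9  :", empirical_cvar(plan_costs, alpha=0.9))
print("CVaR at 0.99 :", empirical_cvar(plan_costs, alpha=0.99))
```

Sweeping the level alpha, from plain expectation toward the worst case, is one way to tune how conservative the resulting risk assessment is, which is the kind of choice the paper's axioms are meant to discipline.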