text | inputs (dict) | prediction | prediction_agent | annotation (list) | annotation_agent | multi_label (bool, 1 class) | explanation | id (string, 1–5 chars) | metadata | status (2 classes) | event_timestamp | metrics
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " The subject of Polynomiography deals with algorithmic visualization of\npolynomial equations, having many applications in STEM and art, see [Kal04].\nHere we consider the polynomiography of the partial sums of the exponential\nseries. While the exponential function is taught in standard calculus courses,\nit is unlikely that properties of zeros of its partial sums are considered in\nsuch courses, let alone their visualization as science or art. The Monthly\narticle Zemyan discusses some mathematical properties of these zeros. Here we\nexhibit some fractal and non-fractal polynomiographs of the partial sums while\nalso presenting a brief introduction of the underlying concepts.\nPolynomiography establishes a different kind of appreciation of the\nsignificance of polynomials in STEM, as well as in art. It helps in the\nteaching of various topics at diverse levels. It also leads to new discoveries\non polynomials and inspires new applications. We also present a link for the\neducator to get access to a demo polynomiography software together with a\nmodule that helps teach basic topics to middle and high school students, as\nwell as undergraduates.\n",
"title": "An Invitation to Polynomiography via Exponential Series"
}
| null | null | null | null | true | null |
3701
| null |
Default
| null | null |
null |
{
"abstract": " This is a no brainer. Using bicycles to commute is the most sustainable form\nof transport, is the least expensive to use and are pollution-free. Towns and\ncities have to be made bicycle-friendly to encourage their wide usage.\nTherefore, cycling paths should be more convenient, comfortable, and safe to\nride. This paper investigates a smartphone application, which passively\nmonitors the road conditions during cyclists ride. To overcome the problems of\nmonitoring roads, we present novel algorithms that sense the rough cycling\npaths and locate road bumps. Each event is detected in real time to improve the\nuser friendliness of the application. Cyclists may keep their smartphones at\nany random orientation and placement. Moreover, different smartphones sense the\nsame incident dissimilarly and hence report discrepant sensor values. We\nfurther address the aforementioned difficulties that limit such crowd-sourcing\napplication. We evaluate our sensing application on cycling paths in Singapore,\nand show that it can successfully detect such bad road conditions.\n",
"title": "Towards Comfortable Cycling: A Practical Approach to Monitor the Conditions in Cycling Paths"
}
| null | null |
[
"Computer Science"
] | null | true | null |
3702
| null |
Validated
| null | null |
null |
{
"abstract": " We define a kind of moduli space of nested surfaces and mappings, which we\ncall a comparison moduli space. We review examples of such spaces in geometric\nfunction theory and modern Teichmueller theory, and illustrate how a wide range\nof phenomena in complex analysis are captured by this notion of moduli space.\nThe paper includes a list of open problems in classical and modern function\ntheory and Teichmueller theory ranging from general theoretical questions to\nspecific technical problems.\n",
"title": "Comparison moduli spaces of Riemann surfaces"
}
| null | null |
[
"Mathematics"
] | null | true | null |
3703
| null |
Validated
| null | null |
null |
{
"abstract": " Manually labeled corpora are expensive to create and often not available for\nlow-resource languages or domains. Automatic labeling approaches are an\nalternative way to obtain labeled data in a quicker and cheaper way. However,\nthese labels often contain more errors which can deteriorate a classifier's\nperformance when trained on this data. We propose a noise layer that is added\nto a neural network architecture. This allows modeling the noise and train on a\ncombination of clean and noisy data. We show that in a low-resource NER task we\ncan improve performance by up to 35% by using additional, noisy data and\nhandling the noise.\n",
"title": "Training a Neural Network in a Low-Resource Setting on Automatically Annotated Noisy Data"
}
| null | null |
[
"Statistics"
] | null | true | null |
3704
| null |
Validated
| null | null |
null |
{
"abstract": " Spectral topic modeling algorithms operate on matrices/tensors of word\nco-occurrence statistics to learn topic-specific word distributions. This\napproach removes the dependence on the original documents and produces\nsubstantial gains in efficiency and provable topic inference, but at a cost:\nthe model can no longer provide information about the topic composition of\nindividual documents. Recently Thresholded Linear Inverse (TLI) is proposed to\nmap the observed words of each document back to its topic composition. However,\nits linear characteristics limit the inference quality without considering the\nimportant prior information over topics. In this paper, we evaluate Simple\nProbabilistic Inverse (SPI) method and novel Prior-aware Dual Decomposition\n(PADD) that is capable of learning document-specific topic compositions in\nparallel. Experiments show that PADD successfully leverages topic correlations\nas a prior, notably outperforming TLI and learning quality topic compositions\ncomparable to Gibbs sampling on various data.\n",
"title": "Prior-aware Dual Decomposition: Document-specific Topic Inference for Spectral Topic Models"
}
| null | null | null | null | true | null |
3705
| null |
Default
| null | null |
null |
{
"abstract": " Future observations of terrestrial exoplanet atmospheres will occur for\nplanets at different stages of geological evolution. We expect to observe a\nwide variety of atmospheres and planets with alternative evolutionary paths,\nwith some planets resembling Earth at different epochs. For an Earth-like\natmospheric time trajectory, we simulate planets from prebiotic to current\natmosphere based on geological data. We use a stellar grid F0V to M8V\n($T_\\mathrm{eff}$ = 7000$\\mskip3mu$K to 2400$\\mskip3mu$K) to model four\ngeological epochs of Earth's history corresponding to a prebiotic world\n(3.9$\\mskip3mu$Ga), the rise of oxygen at 2.0$\\mskip3mu$Ga and at\n0.8$\\mskip3mu$Ga, and the modern Earth. We show the VIS - IR spectral features,\nwith a focus on biosignatures through geological time for this grid of Sun-like\nhost stars and the effect of clouds on their spectra.\nWe find that the observability of biosignature gases reduces with increasing\ncloud cover and increases with planetary age. The observability of the visible\nO$_2$ feature for lower concentrations will partly depend on clouds, which\nwhile slightly reducing the feature increase the overall reflectivity thus the\ndetectable flux of a planet. The depth of the IR ozone feature contributes\nsubstantially to the opacity at lower oxygen concentrations especially for the\nhigh near-UV stellar environments around F stars. Our results are a grid of\nmodel spectra for atmospheres representative of Earth's geological history to\ninform future observations and instrument design and are publicly available\nonline.\n",
"title": "Spectra of Earth-like Planets Through Geological Evolution Around FGKM Stars"
}
| null | null | null | null | true | null |
3706
| null |
Default
| null | null |
null |
{
"abstract": " Although the definition of what empathetic preferences exactly are is still\nevolving, there is a general consensus in the psychology, science and\nengineering communities that the evolution toward players' behaviors in\ninteractive decision-making problems will be accompanied by the exploitation of\ntheir empathy, sympathy, compassion, antipathy, spitefulness, selfishness,\naltruism, and self-abnegating states in the payoffs. In this article, we study\none-shot bimatrix games from a psychological game theory viewpoint. A new\nempathetic payoff model is calculated to fit empirical observations and both\npure and mixed equilibria are investigated. For a realized empathy structure,\nthe bimatrix game is categorized among four generic class of games. Number of\ninteresting results are derived. A notable level of involvement can be observed\nin the empathetic one-shot game compared the non-empathetic one and this holds\neven for games with dominated strategies. Partial altruism can help in breaking\nsymmetry, in reducing payoff-inequality and in selecting social welfare and\nmore efficient outcomes. By contrast, partial spite and self-abnegating may\nworsen payoff equity. Empathetic evolutionary game dynamics are introduced to\ncapture the resulting empathetic evolutionarily stable strategies under wide\nrange of revision protocols including Brown-von Neumann-Nash, Smith, imitation,\nreplicator, and hybrid dynamics. Finally, mutual support and Berge solution are\ninvestigated and their connection with empathetic preferences are established.\nWe show that pure altruism is logically inconsistent, only by balancing it with\nsome partial selfishness does it create a consistent psychology.\n",
"title": "Empathy in Bimatrix Games"
}
| null | null |
[
"Computer Science"
] | null | true | null |
3707
| null |
Validated
| null | null |
null |
{
"abstract": " We construct non-commutative analogs of transport maps among free Gibbs state\nsatisfying a certain convexity condition. Unlike previous constructions, our\napproach is non-perturbative in nature and thus can be used to construct\ntransport maps between free Gibbs states associated to potentials which are far\nfrom quadratic, i.e., states which are far from the semicircle law. An\nessential technical ingredient in our approach is the extension of free\nstochastic analysis to non-commutative spaces of functions based on the\nHaagerup tensor product.\n",
"title": "Free transport for convex potentials"
}
| null | null | null | null | true | null |
3708
| null |
Default
| null | null |
null |
{
"abstract": " Support vector data description (SVDD) is a popular anomaly detection\ntechnique. The SVDD classifier partitions the whole data space into an\n$\\textit{inlier}$ region, which consists of the region $\\textit{near}$ the\ntraining data, and an $\\textit{outlier}$ region, which consists of points\n$\\textit{away}$ from the training data. The computation of the SVDD classifier\nrequires a kernel function, for which the Gaussian kernel is a common choice.\nThe Gaussian kernel has a bandwidth parameter, and it is important to set the\nvalue of this parameter correctly for good results. A small bandwidth leads to\noverfitting such that the resulting SVDD classifier overestimates the number of\nanomalies, whereas a large bandwidth leads to underfitting and an inability to\ndetect many anomalies. In this paper, we present a new unsupervised method for\nselecting the Gaussian kernel bandwidth. Our method, which exploits the\nlow-rank representation of the kernel matrix to suggest a kernel bandwidth\nvalue, is competitive with existing bandwidth selection methods.\n",
"title": "The Trace Criterion for Kernel Bandwidth Selection for Support Vector Data Description"
}
| null | null | null | null | true | null |
3709
| null |
Default
| null | null |
null |
{
"abstract": " A trigonal phase existing only as small patches on chemically exfoliated few\nlayer, thermodynamically stable 1H phase of MoS2 is believed to influence\ncritically properties of MoS2 based devices. This phase has been most often\nattributed to the metallic 1T phase. We investigate the electronic structure of\nchemically exfoliated MoS2 few layered systems using spatially resolved (lesser\nthan 120 nm resolution) photoemission spectroscopy and Raman spectroscopy in\nconjunction with state-of-the-art electronic structure calculations. On the\nbasis of these results, we establish that the ground state of this phase is a\nsmall gap (~90 meV) semiconductor in contrast to most claims in the literature;\nwe also identify the specific trigonal (1T') structure it has among many\nsuggested ones.\n",
"title": "Chemical exfoliation of MoS2 leads to semiconducting 1T' phase and not the metallic 1T phase"
}
| null | null |
[
"Physics"
] | null | true | null |
3710
| null |
Validated
| null | null |
null |
{
"abstract": " We study the problem of estimating an unknown vector $\\theta$ from an\nobservation $X$ drawn according to the normal distribution with mean $\\theta$\nand identity covariance matrix under the knowledge that $\\theta$ belongs to a\nknown closed convex set $\\Theta$. In this general setting, Chatterjee (2014)\nproved that the natural constrained least squares estimator is \"approximately\nadmissible\" for every $\\Theta$. We extend this result by proving that the same\nproperty holds for all convex penalized estimators as well. Moreover, we\nsimplify and shorten the original proof considerably. We also provide explicit\nupper and lower bounds for the universal constant underlying the notion of\napproximate admissibility.\n",
"title": "A note on the approximate admissibility of regularized estimators in the Gaussian sequence model"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
3711
| null |
Validated
| null | null |
null |
{
"abstract": " By a labeled graph $C^*$-algebra we mean a $C^*$-algebra associated to a\nlabeled space $(E,\\mathcal L,\\mathcal E)$ consisting of a labeled graph\n$(E,\\mathcal L)$ and the smallest normal accommodating set $\\mathcal E$ of\nvertex subsets. Every graph $C^*$-algebra $C^*(E)$ is a labeled graph\n$C^*$-algebra and it is well known that $C^*(E)$ is simple if and only if the\ngraph $E$ is cofinal and satisfies Condition (L). Bates and Pask extend these\nconditions of graphs $E$ to labeled spaces, and show that if a set-finite and\nreceiver set-finite labeled space $(E,\\mathcal L, \\mathcal E)$ is cofinal and\ndisagreeable, then its $C^*$-algebra $C^*(E,\\mathcal L, \\mathcal E)$ is simple.\nIn this paper, we show that the converse is also true.\n",
"title": "Simple labeled graph $C^*$-algebras are associated to disagreeable labeled spaces"
}
| null | null | null | null | true | null |
3712
| null |
Default
| null | null |
null |
{
"abstract": " 2 Diabetes is a leading worldwide public health concern, and its increasing\nprevalence has significant health and economic importance in all nations. The\ncondition is a multifactorial disorder with a complex aetiology. The genetic\ndeterminants remain largely elusive, with only a handful of identified\ncandidate genes. Genome wide association studies (GWAS) promised to\nsignificantly enhance our understanding of genetic based determinants of common\ncomplex diseases. To date, 83 single nucleotide polymorphisms (SNPs) for type 2\ndiabetes have been identified using GWAS. Standard statistical tests for single\nand multi-locus analysis such as logistic regression, have demonstrated little\neffect in understanding the genetic architecture of complex human diseases.\nLogistic regression is modelled to capture linear interactions but neglects the\nnon-linear epistatic interactions present within genetic data. There is an\nurgent need to detect epistatic interactions in complex diseases as this may\nexplain the remaining missing heritability in such diseases. In this paper, we\npresent a novel framework based on deep learning algorithms that deal with\nnon-linear epistatic interactions that exist in genome wide association data.\nLogistic association analysis under an additive genetic model, adjusted for\ngenomic control inflation factor, is conducted to remove statistically\nimprobable SNPs to minimize computational overheads.\n",
"title": "Extracting Epistatic Interactions in Type 2 Diabetes Genome-Wide Data Using Stacked Autoencoder"
}
| null | null | null | null | true | null |
3713
| null |
Default
| null | null |
null |
{
"abstract": " In recent years, randomized methods for numerical linear algebra have\nreceived growing interest as a general approach to large-scale problems.\nTypically, the essential ingredient of these methods is some form of randomized\ndimension reduction, which accelerates computations, but also creates random\napproximation error. In this way, the dimension reduction step encodes a\ntradeoff between cost and accuracy. However, the exact numerical relationship\nbetween cost and accuracy is typically unknown, and consequently, it may be\ndifficult for the user to precisely know (1) how accurate a given solution is,\nor (2) how much computation is needed to achieve a given level of accuracy. In\nthe current paper, we study randomized matrix multiplication (sketching) as a\nprototype setting for addressing these general problems. As a solution, we\ndevelop a bootstrap method for {directly estimating} the accuracy as a function\nof the reduced dimension (as opposed to deriving worst-case bounds on the\naccuracy in terms of the reduced dimension). From a computational standpoint,\nthe proposed method does not substantially increase the cost of standard\nsketching methods, and this is made possible by an \"extrapolation\" technique.\nIn addition, we provide both theoretical and empirical results to demonstrate\nthe effectiveness of the proposed method.\n",
"title": "A Bootstrap Method for Error Estimation in Randomized Matrix Multiplication"
}
| null | null | null | null | true | null |
3714
| null |
Default
| null | null |
null |
{
"abstract": " Results of investigations of the near-horizontal muons in the range of zenith\nangles of 85-95 degrees are presented. In this range, so-called \"albedo\" muons\n(atmospheric muons scattered in the ground into the upper hemisphere) are\ndetected. Albedo muons are one of the main sources of the background in\nneutrino experiments. Experimental data of two series of measurements conducted\nat the experimental complex NEVOD-DECOR with the duration of about 30 thousand\nhours \"live\" time are analyzed. The results of measurements of the muon flux\nintensity are compared with simulation results using Monte-Carlo on the basis\nof two multiple Coulomb scattering models: model of point-like nuclei and model\ntaking into account finite size of nuclei.\n",
"title": "Results of measurements of the flux of albedo muons with NEVOD-DECOR experimental complex"
}
| null | null | null | null | true | null |
3715
| null |
Default
| null | null |
null |
{
"abstract": " The demand for low-dissipation nanoscale memory devices is as strong as ever.\nAs Moore's Law is staggering, and the demand for a low-power-consuming\nsupercomputer is high, the goal of making information processing circuits out\nof superconductors is one of the central goals of modern technology and\nphysics. So far, digital superconducting circuits could not demonstrate their\nimmense potential. One important reason for this is that a dense\nsuperconducting memory technology is not yet available. Miniaturization of\ntraditional superconducting quantum interference devices is difficult below a\nfew micrometers because their operation relies on the geometric inductance of\nthe superconducting loop. Magnetic memories do allow nanometer-scale\nminiaturization, but they are not purely superconducting (Baek et al 2014 Nat.\nCommun. 5 3888). Our approach is to make nanometer scale memory cells based on\nthe kinetic inductance (and not geometric inductance) of superconducting\nnanowire loops, which have already shown many fascinating properties (Aprili\n2006 Nat. Nanotechnol. 1 15; Hopkins et al 2005 Science 308 1762). This allows\nmuch smaller devices and naturally eliminates magnetic-field cross-talk. We\ndemonstrate that the vorticity, i.e., the winding number of the order\nparameter, of a closed superconducting loop can be used for realizing a\nnanoscale nonvolatile memory device. We demonstrate how to alter the vorticity\nin a controlled fashion by applying calibrated current pulses. A reliable\nread-out of the memory is also demonstrated. We present arguments that such\nmemory can be developed to operate without energy dissipation.\n",
"title": "Nanoscale superconducting memory based on the kinetic inductance of asymmetric nanowire loops"
}
| null | null | null | null | true | null |
3716
| null |
Default
| null | null |
null |
{
"abstract": " Licas (lightweight internet-based communication for autonomic services) is a\ndistributed framework for building service-based systems. The framework\nprovides a p2p server and more intelligent processing of information through\nits AI algorithms. Distributed communication includes XML-RPC, REST, HTTP and\nWeb Services. It can now provide a robust platform for building different types\nof system, where Microservices or SOA would be possible. However, the system\nmay be equally suited for the IoT, as it provides classes to connect with\nexternal sources and has an optional Autonomic Manager with a MAPE control loop\nintegrated into the communication process. The system is also mobile-compatible\nwith Android. This paper focuses in particular on the autonomic setup and how\nthat might be used. A novel linking mechanism has been described previously [5]\nthat can be used to dynamically link sources and this is also considered, as\npart of the autonomous framework.\n",
"title": "The Autonomic Architecture of the Licas System"
}
| null | null | null | null | true | null |
3717
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we introduce new technique for determining some necessary and\nsufficient conditions of the normalized Bessel functions $j_{\\nu}$, normalized\nStruve functions $h_{\\nu}$ and normalized Lommel functions $s_{\\mu,\\nu}$ of the\nfirst kind, to be in the subclasses of starlike and convex functions of order\n$\\alpha$ and type $\\beta$.\n",
"title": "Some Characterizations on the Normalized Lommel, Struve and Bessel Functions of the First Kind"
}
| null | null | null | null | true | null |
3718
| null |
Default
| null | null |
null |
{
"abstract": " This manuscript is a preprint version of Part 1 (General Introduction and\nSynopsis) of the book Applied Evaluative Informetrics, to be published by\nSpringer in the summer of 2017. This book presents an introduction to the field\nof applied evaluative informetrics, and is written for interested scholars and\nstudents from all domains of science and scholarship. It sketches the field's\nhistory, recent achievements, and its potential and limits. It explains the\nnotion of multi-dimensional research performance, and discusses the pros and\ncons of 28 citation-, patent-, reputation- and altmetrics-based indicators. In\naddition, it presents quantitative research assessment as an evaluation\nscience, and focuses on the role of extra-informetric factors in the\ndevelopment of indicators, and on the policy context of their application. It\nalso discusses the way forward, both for users and for developers of\ninformetric tools.\n",
"title": "Applied Evaluative Informetrics: Part 1"
}
| null | null | null | null | true | null |
3719
| null |
Default
| null | null |
null |
{
"abstract": " We describe general multilevel Monte Carlo methods that estimate the price of\nan Asian option monitored at $m$ fixed dates. Our approach yields unbiased\nestimators with standard deviation $O(\\epsilon)$ in $O(m + (1/\\epsilon)^{2})$\nexpected time for a variety of processes including the Black-Scholes model,\nMerton's jump-diffusion model, the Square-Root diffusion model, Kou's double\nexponential jump-diffusion model, the variance gamma and NIG exponential Levy\nprocesses and, via the Milstein scheme, processes driven by scalar stochastic\ndifferential equations. Using the Euler scheme, our approach estimates the\nAsian option price with root mean square error $O(\\epsilon)$ in\n$O(m+(\\ln(\\epsilon)/\\epsilon)^{2})$ expected time for processes driven by\nmultidimensional stochastic differential equations. Numerical experiments\nconfirm that our approach outperforms the conventional Monte Carlo method by a\nfactor of order $m$.\n",
"title": "General multilevel Monte Carlo methods for pricing discretely monitored Asian options"
}
| null | null | null | null | true | null |
3720
| null |
Default
| null | null |
null |
{
"abstract": " The sensing of magnetic fields has important applications in medicine,\nparticularly to the sensing of signals in the heart and brain. The fields\nassociated with biomagnetism are exceptionally weak, being many orders of\nmagnitude smaller than the Earth's magnetic field. To measure them requires\nthat we use the most sensitive detection techniques, however, to be\ncommercially viable this must be done at an affordable cost. The current state\nof the art uses costly SQUID magnetometers, although they will likely be\nsuperseded by less costly, but otherwise limited, alkali vapour magnetometers.\nHere, we discuss the application of diamond magnetometers to medical\napplications. Diamond magnetometers are robust, solid state devices that work\nin a broad range of environments, with the potential for sensitivity comparable\nto the leading technologies.\n",
"title": "Medical applications of diamond magnetometry: commercial viability"
}
| null | null |
[
"Physics"
] | null | true | null |
3721
| null |
Validated
| null | null |
null |
{
"abstract": " For more than a century, it has been believed that all hydraulic jumps are\ncreated due to gravity. However, we found that thin-film hydraulic jumps are\nnot induced by gravity. This study explores the initiation of thin-film\nhydraulic jumps. For circular jumps produced by the normal impingement of a jet\nonto a solid surface, we found that the jump is formed when surface tension and\nviscous forces balance the momentum in the film and gravity plays no\nsignificant role. Experiments show no dependence on the orientation of the\nsurface and a scaling relation balancing viscous forces and surface tension\ncollapses the experimental data. Experiments on thin film planar jumps in a\nchannel also show that the predominant balance is with surface tension,\nalthough for the thickness of the films we studied gravity also played a role\nin the jump formation. A theoretical analysis shows that the downstream\ntransport of surface tension energy is the previously neglected, critical\ningredient in these flows and that capillary waves play the role of gravity\nwaves in a traditional jump in demarcating the transition from the\nsupercritical to subcritical flow associated with these jumps.\n",
"title": "On the origin of the hydraulic jump in a thin liquid film"
}
| null | null | null | null | true | null |
3722
| null |
Default
| null | null |
null |
{
"abstract": " When sexual violence is a product of organized crime or social imaginary, the\nlinks between sexual violence episodes can be understood as a latent structure.\nWith this assumption in place, we can use data science to uncover complex\npatterns. In this paper we focus on the use of data mining techniques to unveil\ncomplex anomalous spatiotemporal patterns of sexual violence. We illustrate\ntheir use by analyzing all reported rapes in El Salvador over a period of nine\nyears. Through our analysis, we are able to provide evidence of phenomena that,\nto the best of our knowledge, have not been previously reported in literature.\nWe devote special attention to a pattern we discover in the East, where\nunderage victims report their boyfriends as perpetrators at anomalously high\nrates. Finally, we explain how such analyzes could be conducted in real-time,\nenabling early detection of emerging patterns to allow law enforcement agencies\nand policy makers to react accordingly.\n",
"title": "Discovery of Complex Anomalous Patterns of Sexual Violence in El Salvador"
}
| null | null | null | null | true | null |
3723
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the problem of truth discovery based on opinions from multiple\nagents who may be unreliable or biased. We consider the case where agents'\nreliabilities or biases are correlated if they belong to the same community,\nwhich defines a group of agents with similar opinions regarding a particular\nevent. An agent can belong to different communities for different events, and\nthese communities are unknown a priori. We incorporate knowledge of the agents'\nsocial network in our truth discovery framework and develop Laplace variational\ninference methods to estimate agents' reliabilities, communities, and the event\nstates. We also develop a stochastic variational inference method to scale our\nmodel to large social networks. Simulations and experiments on real data\nsuggest that when observations are sparse, our proposed methods perform better\nthan several other inference methods, including majority voting, TruthFinder,\nAccuSim, the Confidence-Aware Truth Discovery method, the Bayesian Classifier\nCombination (BCC) method, and the Community BCC method.\n",
"title": "Using Social Network Information in Bayesian Truth Discovery"
}
| null | null | null | null | true | null |
3724
| null |
Default
| null | null |
null |
{
"abstract": " This document presents HiPS, a hierarchical scheme for the description,\nstorage and access of sky survey data. The system is based on hierarchical\ntiling of sky regions at finer and finer spatial resolution which facilitates a\nprogressive view of a survey, and supports multi-resolution zooming and\npanning. HiPS uses the HEALPix tessellation of the sky as the basis for the\nscheme and is implemented as a simple file structure with a direct indexing\nscheme that leads to practical implementations.\n",
"title": "IVOA Recommendation: HiPS - Hierarchical Progressive Survey"
}
| null | null |
[
"Physics"
] | null | true | null |
3725
| null |
Validated
| null | null |
null |
{
"abstract": " Applying deep learning methods to mammography assessment has remained a\nchallenging topic. Dense noise with sparse expressions, mega-pixel raw data\nresolution, lack of diverse examples have all been factors affecting\nperformance. The lack of pixel-level ground truths have especially limited\nsegmentation methods in pushing beyond approximately bounding regions. We\npropose a classification approach grounded in high performance tissue\nassessment as an alternative to all-in-one localization and assessment models\nthat is also capable of pinpointing the causal pixels. First, the objective of\nthe mammography assessment task is formalized in the context of local tissue\nclassifiers. Then, the accuracy of a convolutional neural net is evaluated on\nclassifying patches of tissue with suspicious findings at varying scales, where\nhighest obtained AUC is above $0.9$. The local evaluations of one such expert\ntissue classifier is used to augment the results of a heatmap regression model\nand additionally recover the exact causal regions at high resolution as a\nsaliency image suitable for clinical settings.\n",
"title": "Mammography Assessment using Multi-Scale Deep Classifiers"
}
| null | null | null | null | true | null |
3726
| null |
Default
| null | null |
null |
{
"abstract": " In order to optimize the performance of the CRV, reflection studies and aging\nstudies were conducted.\n",
"title": "Studies to Understand and Optimize the Performance of Scintillation Counters for the Mu2e Cosmic Ray Veto System"
}
| null | null | null | null | true | null |
3727
| null |
Default
| null | null |
null |
{
"abstract": " Linear parameter-varying (LPV) models form a powerful model class to analyze\nand control a (nonlinear) system of interest. Identifying an LPV model of a\nnonlinear system can be challenging due to the difficulty of selecting the\nscheduling variable(s) a priori, especially if a first principles based\nunderstanding of the system is unavailable. Converting a nonlinear model to an\nLPV form is also non-trivial and requires systematic methods to automate the\nprocess.\nInspired by these challenges, a systematic LPV embedding approach starting\nfrom multiple-input multiple-output (MIMO) linear fractional representations\nwith a nonlinear feedback block (NLFR) is proposed. This NLFR model class is\nembedded into the LPV model class by an automated factorization of the\n(possibly MIMO) static nonlinear block present in the model. As a result of the\nfactorization, an LPV-LFR or an LPV state-space model with affine dependency on\nthe scheduling is obtained. This approach facilitates the selection of the\nscheduling variable and the connected mapping of system variables. Such a\nconversion method enables to use nonlinear identification tools to estimate LPV\nmodels.\nThe potential of the proposed approach is illustrated on a 2-DOF nonlinear\nmass-spring-damper example.\n",
"title": "Linear Parameter Varying Representation of a class of MIMO Nonlinear Systems"
}
| null | null |
[
"Computer Science"
] | null | true | null |
3728
| null |
Validated
| null | null |
null |
{
"abstract": " We perform a set of general relativistic, radiative, magneto-hydrodynamical\nsimulations (GR-RMHD) to study the transition from radiatively inefficient to\nefficient state of accretion on a non-rotating black hole. We study ion to\nelectron temperature ratios ranging from $T_{\\rm i}/T_{\\rm e}=10$ to $100$, and\nsimulate flows corresponding to accretion rates as low as $10^{-6}\\dot M_{\\rm\nEdd}$, and as high as $10^{-2}\\dot M_{\\rm Edd}$. We have found that the\nradiative output of accretion flows increases with accretion rate, and that the\ntransition occurs earlier for hotter electrons (lower $T_{\\rm i}/T_{\\rm e}$\nratio). At the same time, the mechanical efficiency hardly changes and accounts\nto ${\\approx}\\,3\\%$ of the accreted rest mass energy flux, even at the highest\nsimulated accretion rates. This is particularly important for the mechanical\nAGN feedback regulating massive galaxies, groups, and clusters. Comparison with\nrecent observations of radiative and mechanical AGN luminosities suggests that\nthe ion to electron temperature ratio in the inner, collisionless accretion\nflow should fall within $10<T_{\\rm i}/T_{\\rm e}<30$, i.e., the electron\ntemperature should be several percent of the ion temperature.\n",
"title": "Kinetic and radiative power from optically thin accretion flows"
}
| null | null | null | null | true | null |
3729
| null |
Default
| null | null |
null |
{
"abstract": " Correlated oxide heterostructures pose a challenging problem in condensed\nmatter research due to their structural complexity interweaved with demanding\nelectron states beyond the effective single-particle picture. By exploring the\ncorrelated electronic structure of SmTiO$_3$ doped with few layers of SrO, we\nprovide an insight into the complexity of such systems. Furthermore, it is\nshown how the advanced combination of band theory on the level of Kohn-Sham\ndensity functional theory with explicit many-body theory on the level of\ndynamical mean-field theory provides an adequate tool to cope with the problem.\nCoexistence of band-insulating, metallic and Mott-critical electronic regions\nis revealed in individual heterostructures with multi-orbital manifolds.\nIntriguing orbital polarizations, that qualitatively vary between the metallic\nand the Mott layers are also encountered.\n",
"title": "First-Principles Many-Body Investigation of Correlated Oxide Heterostructures: Few-Layer-Doped SmTiO$_3$"
}
| null | null | null | null | true | null |
3730
| null |
Default
| null | null |
null |
{
"abstract": " The two state-of-the-art implementations of boosted trees: XGBoost and\nLightGBM, can process large training sets extremely fast. However, this\nperformance requires that memory size is sufficient to hold a 2-3 multiple of\nthe training set size. This paper presents an alternative approach to\nimplementing boosted trees. which achieves a significant speedup over XGBoost\nand LightGBM, especially when memory size is small. This is achieved using a\ncombination of two techniques: early stopping and stratified sampling, which\nare explained and analyzed in the paper. We describe our implementation and\npresent experimental results to support our claims.\n",
"title": "Faster Boosting with Smaller Memory"
}
| null | null | null | null | true | null |
3731
| null |
Default
| null | null |
null |
{
"abstract": " We consider the task of automated estimation of facial expression intensity.\nThis involves estimation of multiple output variables (facial action units ---\nAUs) that are structurally dependent. Their structure arises from statistically\ninduced co-occurrence patterns of AU intensity levels. Modeling this structure\nis critical for improving the estimation performance; however, this performance\nis bounded by the quality of the input features extracted from face images. The\ngoal of this paper is to model these structures and estimate complex feature\nrepresentations simultaneously by combining conditional random field (CRF)\nencoded AU dependencies with deep learning. To this end, we propose a novel\nCopula CNN deep learning approach for modeling multivariate ordinal variables.\nOur model accounts for $ordinal$ structure in output variables and their\n$non$-$linear$ dependencies via copula functions modeled as cliques of a CRF.\nThese are jointly optimized with deep CNN feature encoding layers using a newly\nintroduced balanced batch iterative training algorithm. We demonstrate the\neffectiveness of our approach on the task of AU intensity estimation on two\nbenchmark datasets. We show that joint learning of the deep features and the\ntarget output structure results in significant performance gains compared to\nexisting deep structured models for analysis of facial expressions.\n",
"title": "Deep Structured Learning for Facial Action Unit Intensity Estimation"
}
| null | null | null | null | true | null |
3732
| null |
Default
| null | null |
null |
{
"abstract": " Ages and masses of young stars are often estimated by comparing their\nluminosities and effective temperatures to pre-main sequence stellar evolution\ntracks, but magnetic fields and starspots complicate both the observations and\nevolution. To understand their influence, we study the heavily-spotted\nweak-lined T-Tauri star LkCa 4 by searching for spectral signatures of\nradiation originating from the starspot or starspot groups. We introduce a new\nmethodology for constraining both the starspot filling factor and the spot\ntemperature by fitting two-temperature stellar atmosphere models constructed\nfrom Phoenix synthetic spectra to a high-resolution near-IR IGRINS spectrum.\nClearly discernable spectral features arise from both a hot photospheric\ncomponent $T_{\\mathrm{hot}} \\sim4100$ K and to a cool component\n$T_{\\mathrm{cool}} \\sim2700-3000$ K, which covers $\\sim80\\%$ of the visible\nsurface. This mix of hot and cool emission is supported by analyses of the\nspectral energy distribution, rotational modulation of colors and of TiO band\nstrengths, and features in low-resolution optical/near-IR spectroscopy.\nAlthough the revised effective temperature and luminosity make LkCa 4 appear\nmuch younger and lower mass than previous estimates from unspotted stellar\nevolution models, appropriate estimates will require the production and\nadoption of spotted evolutionary models. Biases from starspots likely afflict\nmost fully convective young stars and contribute to uncertainties in ages and\nage spreads of open clusters. In some spectral regions starspots act as a\nfeatureless veiling continuum owing to high rotational broadening and heavy\nline-blanketing in cool star spectra. Some evidence is also found for an\nanti-correlation between the velocities of the warm and cool components.\n",
"title": "Placing the spotted T Tauri star LkCa 4 on an HR diagram"
}
| null | null |
[
"Physics"
] | null | true | null |
3733
| null |
Validated
| null | null |
null |
{
"abstract": " We report a study of the structural phase transitions induced by pressure in\nbulk black phosphorus by using both synchrotron x-ray diffraction for pressures\nup to 12.2 GPa and Raman spectroscopy up to 18.2 GPa. Very recently black\nphosphorus attracted large attention because of the unique properties of\nfewlayers samples (phosphorene), but some basic questions are still open in the\ncase of the bulk system. As concerning the presence of a Raman spectrum above\n10 GPa, which should not be observed in an elemental simple cubic system, we\npropose a new explanation by attributing a key role to the non-hydrostatic\nconditions occurring in Raman experiments. Finally, a combined analysis of\nRaman and XRD data allowed us to obtain quantitative information on presence\nand extent of coexistences between different structural phases from ~5 up to\n~15 GPa. This information can have an important role in theoretical studies on\npressure-induced structural and electronic phase transitions in black\nphosphorus.\n",
"title": "Coexistence of pressure-induced structural phases in bulk black phosphorus: a combined x-ray diffraction and Raman study up to 18 GPa"
}
| null | null | null | null | true | null |
3734
| null |
Default
| null | null |
null |
{
"abstract": " The solution path of the 1D fused lasso for an $n$-dimensional input is\npiecewise linear with $\\mathcal{O}(n)$ segments (Hoefling et al. 2010 and\nTibshirani et al 2011). However, existing proofs of this bound do not hold for\nthe weighted fused lasso. At the same time, results for the generalized lasso,\nof which the weighted fused lasso is a special case, allow $\\Omega(3^n)$\nsegments (Mairal et al. 2012). In this paper, we prove that the number of\nsegments in the solution path of the weighted fused lasso is\n$\\mathcal{O}(n^2)$, and that, for some instances, it is $\\Omega(n^2)$. We also\ngive a new, very simple, proof of the $\\mathcal{O}(n)$ bound for the fused\nlasso.\n",
"title": "On the Complexity of the Weighted Fused Lasso"
}
| null | null | null | null | true | null |
3735
| null |
Default
| null | null |
null |
{
"abstract": " Hydrogen-rich compounds are important for understanding the dissociation of\ndense molecular hydrogen, as well as searching for room temperature\nBardeen-Cooper-Schrieffer (BCS) superconductors. A recent high pressure\nexperiment reported the successful synthesis of novel insulating lithium\npolyhydrides when above 130 GPa. However, the results are in sharp contrast to\nprevious theoretical prediction by PBE functional that around this pressure\nrange all lithium polyhydrides (LiHn (n = 2-8)) should be metallic. In order to\naddress this discrepancy, we perform unbiased structure search with first\nprinciples calculation by including the van der Waals interaction that was\nignored in previous prediction to predict the high pressure stable structures\nof LiHn (n = 2-11, 13) up to 200 GPa. We reproduce the previously predicted\nstructures, and further find novel compositions that adopt more stable\nstructures. The van der Waals functional (vdW-DF) significantly alters the\nrelative stability of lithium polyhydrides, and predicts that the stable\nstoichiometries for the ground-state should be LiH2 and LiH9 at 130-170 GPa,\nand LiH2, LiH8 and LiH10 at 180-200 GPa. Accurate electronic structure\ncalculation with GW approximation indicates that LiH, LiH2, LiH7, and LiH9 are\ninsulative up to at least 208 GPa, and all other lithium polyhydrides are\nmetallic. The calculated vibron frequencies of these insulating phases are also\nin accordance with the experimental infrared (IR) data. This reconciliation\nwith the experimental observation suggests that LiH2, LiH7, and LiH9 are the\npossible candidates for lithium polyhydrides synthesized in that experiment.\nOur results reinstate the credibility of density functional theory in\ndescription H-rich compounds, and demonstrate the importance of considering van\nder Waals interaction in this class of materials.\n",
"title": "Prediction of Stable Ground-State Lithium Polyhydrides under High Pressures"
}
| null | null | null | null | true | null |
3736
| null |
Default
| null | null |
null |
{
"abstract": " The Future Circular Collider (FCC), currently in the design phase, will\naddress many outstanding questions in particle physics. The technology to\nsucceed in this 100 km circumference collider goes beyond present limits.\nUltra-high vacuum conditions in the beam pipe is one essential requirement to\nprovide a smooth operation. Different physics phenomena as photon-, ion- and\nelectron- induced desorption and thermal outgassing of the chamber walls\nchallenge this requirement. This paper presents an analytical model and a\ncomputer code PyVASCO that supports the design of a stable vacuum system by\nproviding an overview of all the gas dynamics happening inside the beam pipes.\nA mass balance equation system describes the density distribution of the four\ndominating gas species $\\text{H}_2, \\text{CH}_4$, $\\text{CO}$ and\n$\\text{CO}_2$. An appropriate solving algorithm is discussed in detail and a\nvalidation of the model including a comparison of the output to the readings of\nLHC gauges is presented. This enables the evaluation of different designs for\nthe FCC.\n",
"title": "Analytical methods for vacuum simulations in high energy accelerators for future machines based on the LHC performance"
}
| null | null | null | null | true | null |
3737
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\xi(t\\,,x)$ denote space-time white noise and consider a\nreaction-diffusion equation of the form \\[\n\\dot{u}(t\\,,x)=\\tfrac12 u\"(t\\,,x) + b(u(t\\,,x)) + \\sigma(u(t\\,,x))\n\\xi(t\\,,x), \\] on $\\mathbb{R}_+\\times[0\\,,1]$, with homogeneous Dirichlet\nboundary conditions and suitable initial data, in the case that there exists\n$\\varepsilon>0$ such that $\\vert b(z)\\vert \\ge|z|(\\log|z|)^{1+\\varepsilon}$ for\nall sufficiently-large values of $|z|$. When $\\sigma\\equiv 0$, it is well known\nthat such PDEs frequently have non-trivial stationary solutions. By contrast,\nBonder and Groisman (2009) have recently shown that there is finite-time blowup\nwhen $\\sigma$ is a non-zero constant. In this paper, we prove that the\nBonder--Groisman condition is unimproveable by showing that the\nreaction-diffusion equation with noise is \"typically\" well posed when $\\vert\nb(z) \\vert =O(|z|\\log_+|z|)$ as $|z|\\to\\infty$. We interpret the word\n\"typically\" in two essentially-different ways without altering the conclusions\nof our assertions.\n",
"title": "Global solutions to reaction-diffusion equations with super-linear drift and multiplicative noise"
}
| null | null | null | null | true | null |
3738
| null |
Default
| null | null |
null |
{
"abstract": " For an arbitrary group $G$, it is shown that either the semigroup rank $G{\\rm\nrk}S$ equals the group rank $G{\\rm rk}G$, or $G{\\rm rk}S = G{\\rm rk}G+1$. This\nis the starting point for the rest of the article, where the semigroup rank for\ndiverse kinds of groups is analysed. The semigroup rank of relatively free\ngroups, for any variety of groups, is computed. For a finitely generated\nabelian group~$G$, it is proven that $G{\\rm rk}S = G{\\rm rk}G+1$ if and only if\n$G$ is torsion-free. In general, this is not true. Partial results are obtained\nin the nilpotent case. It is also proven that if $M$ is a connected closed\nsurface, then $(\\pi_1(M)){\\rm rk}S = (\\pi_1(M)){\\rm rk}G+1$ if and only if $M$\nis orientable.\n",
"title": "On the semigroup rank of a group"
}
| null | null |
[
"Mathematics"
] | null | true | null |
3739
| null |
Validated
| null | null |
null |
{
"abstract": " Starting from a dataset with input/output time series generated by multiple\ndeterministic linear dynamical systems, this paper tackles the problem of\nautomatically clustering these time series. We propose an extension to the\nso-called Martin cepstral distance, that allows to efficiently cluster these\ntime series, and apply it to simulated electrical circuits data. Traditionally,\ntwo ways of handling the problem are used. The first class of methods employs a\ndistance measure on time series (e.g. Euclidean, Dynamic Time Warping) and a\nclustering technique (e.g. k-means, k-medoids, hierarchical clustering) to find\nnatural groups in the dataset. It is, however, often not clear whether these\ndistance measures effectively take into account the specific temporal\ncorrelations in these time series. The second class of methods uses the\ninput/output data to identify a dynamic system using an identification scheme,\nand then applies a model norm-based distance (e.g. H2, H-infinity) to find out\nwhich systems are similar. This, however, can be very time consuming for large\namounts of long time series data. We show that the new distance measure\npresented in this paper performs as good as when every input/output pair is\nmodelled explicitly, but remains computationally much less complex. The\ncomplexity of calculating this distance between two time series of length N is\nO(N logN).\n",
"title": "A time series distance measure for efficient clustering of input output signals by their underlying dynamics"
}
| null | null | null | null | true | null |
3740
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we propose a 'knee-like' approximation of the lateral\ndistribution of the Cherenkov light from extensive air showers in the energy\nrange 30-3000 TeV and study a possibility of its practical application in high\nenergy ground-based gamma-ray astronomy experiments (in particular, in\nTAIGA-HiSCORE). The approximation has a very good accuracy for individual\nshowers and can be easily simplified for practical application in the HiSCORE\nwide angle timing array in the condition of a limited number of triggered\nstations.\n",
"title": "Parametric Analysis of Cherenkov Light LDF from EAS for High Energy Gamma Rays and Nuclei: Ways of Practical Application"
}
| null | null | null | null | true | null |
3741
| null |
Default
| null | null |
null |
{
"abstract": " In this study, an alloy phase-field model is used to simulate solidification\nmicrostructures at different locations within a solidified molten pool. The\ntemperature gradient $G$ and the solidification velocity $V$ are obtained from\na macroscopic heat transfer finite element simulation and provided as input to\nthe phase-field model. The effects of laser beam speed and the location within\nthe melt pool on the primary arm spacing and on the extent of Nb partitioning\nat the cell tips are investigated. Simulated steady-state primary spacings are\ncompared with power law and geometrical models. Cell tip compositions are\ncompared to a dendrite growth model. The extent of non-equilibrium interface\npartitioning of the phase-field model is investigated. Although the phase-field\nmodel has an anti-trapping solute flux term meant to maintain local interface\nequilibrium, we have found that during simulations it was insufficient at\nmaintaining equilibrium. This is due to the fact that the additive\nmanufacturing solidification conditions fall well outside the allowed limits of\nthis flux term.\n",
"title": "On the primary spacing and microsegregation of cellular dendrites in laser deposited Ni-Nb alloys"
}
| null | null | null | null | true | null |
3742
| null |
Default
| null | null |
null |
{
"abstract": " We have previously proposed the partial quantile regression (PQR) prediction\nprocedure for functional linear model by using partial quantile covariance\ntechniques and developed the simple partial quantile regression (SIMPQR)\nalgorithm to efficiently extract PQR basis for estimating functional\ncoefficients. However, although the PQR approach is considered as an attractive\nalternative to projections onto the principal component basis, there are\ncertain limitations to uncovering the corresponding asymptotic properties\nmainly because of its iterative nature and the non-differentiability of the\nquantile loss function. In this article, we propose and implement an\nalternative formulation of partial quantile regression (APQR) for functional\nlinear model by using block relaxation method and finite smoothing techniques.\nThe proposed reformulation leads to insightful results and motivates new\ntheory, demonstrating consistency and establishing convergence rates by\napplying advanced techniques from empirical process theory. Two simulations and\ntwo real data from ADHD-200 sample and ADNI are investigated to show the\nsuperiority of our proposed methods.\n",
"title": "An Alternative Approach to Functional Linear Partial Quantile Regression"
}
| null | null | null | null | true | null |
3743
| null |
Default
| null | null |
null |
{
"abstract": " We present PFDCMSS, a novel message-passing based parallel algorithm for\nmining time-faded heavy hitters. The algorithm is a parallel version of the\nrecently published FDCMSS sequential algorithm. We formally prove its\ncorrectness by showing that the underlying data structure, a sketch augmented\nwith a Space Saving stream summary holding exactly two counters, is mergeable.\nWhilst mergeability of traditional sketches derives immediately from theory, we\nshow that merging our augmented sketch is non trivial. Nonetheless, the\nresulting parallel algorithm is fast and simple to implement. To the best of\nour knowledge, PFDCMSS is the first parallel algorithm solving the problem of\nmining time-faded heavy hitters on message-passing parallel architectures.\nExtensive experimental results confirm that PFDCMSS retains the extreme\naccuracy and error bound provided by FDCMSS whilst providing excellent parallel\nscalability.\n",
"title": "Parallel mining of time-faded heavy hitters"
}
| null | null | null | null | true | null |
3744
| null |
Default
| null | null |
null |
{
"abstract": " The Minkowski inequality is a classical inequality in differential geometry,\ngiving a bound from below, on the total mean curvature of a convex surface in\nEuclidean space, in terms of its area. Recently there has been interest in\nproving versions of this inequality for manifolds other than R^n; for example,\nsuch an inequality holds for surfaces in spatial Schwarzschild and\nAdS-Schwarzschild manifolds. In this note, we adapt a recent analysis of Y. Wei\nto prove a Minkowski-like inequality for general static asymptotically flat\nmanifolds.\n",
"title": "On a Minkowski-like inequality for asymptotically flat static manifolds"
}
| null | null | null | null | true | null |
3745
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies the concept of instantaneous arbitrage in continuous time\nand its relation to the instantaneous CAPM. Absence of instantaneous arbitrage\nis equivalent to the existence of a trading strategy which satisfies the CAPM\nbeta pricing relation in place of the market. Thus the difference between the\narbitrage argument and the CAPM argument in Black and Scholes (1973) is this:\nthe arbitrage argument assumes that there exists some portfolio satisfying the\ncapm equation, whereas the CAPM argument assumes, in addition, that this\nportfolio is the market portfolio.\n",
"title": "Instantaneous Arbitrage and the CAPM"
}
| null | null | null | null | true | null |
3746
| null |
Default
| null | null |
null |
{
"abstract": " We study the complexity of approximating Wassertein barycenter of $m$\ndiscrete measures, or histograms of size $n$ by contrasting two alternative\napproaches, both using entropic regularization. The first approach is based on\nthe Iterative Bregman Projections (IBP) algorithm for which our novel analysis\ngives a complexity bound proportional to $\\frac{mn^2}{\\varepsilon^2}$ to\napproximate the original non-regularized barycenter.\nUsing an alternative accelerated-gradient-descent-based approach, we obtain a\ncomplexity proportional to $\\frac{mn^{2.5}}{\\varepsilon} $. As a byproduct, we\nshow that the regularization parameter in both approaches has to be\nproportional to $\\varepsilon$, which causes instability of both algorithms when\nthe desired accuracy is high. To overcome this issue, we propose a novel\nproximal-IBP algorithm, which can be seen as a proximal gradient method, which\nuses IBP on each iteration to make a proximal step. We also consider the\nquestion of scalability of these algorithms using approaches from distributed\noptimization and show that the first algorithm can be implemented in a\ncentralized distributed setting (master/slave), while the second one is\namenable to a more general decentralized distributed setting with an arbitrary\nnetwork topology.\n",
"title": "On the Complexity of Approximating Wasserstein Barycenter"
}
| null | null | null | null | true | null |
3747
| null |
Default
| null | null |
null |
{
"abstract": " Modern implicit generative models such as generative adversarial networks\n(GANs) are generally known to suffer from instability and lack of\ninterpretability as it is difficult to diagnose what aspects of the target\ndistribution are missed by the generative model. In this work, we propose a\ntheoretically grounded solution to these issues by augmenting the GAN's loss\nfunction with a kernel-based regularization term that magnifies local\ndiscrepancy between the distributions of generated and real samples. The\nproposed method relies on so-called witness points in the data space which are\njointly trained with the generator and provide an interpretable indication of\nwhere the two distributions locally differ during the training procedure. In\naddition, the proposed algorithm is scaled to higher dimensions by learning the\nwitness locations in a latent space of an autoencoder. We theoretically\ninvestigate the dynamics of the training procedure, prove that a desirable\nequilibrium point exists, and the dynamical system is locally stable around\nthis equilibrium. Finally, we demonstrate different aspects of the proposed\nalgorithm by numerical simulations of analytical solutions and empirical\nresults for low and high-dimensional datasets.\n",
"title": "Witnessing Adversarial Training in Reproducing Kernel Hilbert Spaces"
}
| null | null | null | null | true | null |
3748
| null |
Default
| null | null |
null |
{
"abstract": " Transfer learning leverages the knowledge in one domain, the source domain,\nto improve learning efficiency in another domain, the target domain. Existing\ntransfer learning research is relatively well-progressed, but only in\nsituations where the feature spaces of the domains are homogeneous and the\ntarget domain contains at least a few labeled instances. However, transfer\nlearning has not been well-studied in heterogeneous settings with an unlabeled\ntarget domain. To contribute to the research in this emerging field, this paper\npresents: (1) an unsupervised knowledge transfer theorem that prevents negative\ntransfer; and (2) a principal angle-based metric to measure the distance\nbetween two pairs of domains. The metric shows the extent to which homogeneous\nrepresentations have preserved the information in original source and target\ndomains. The unsupervised knowledge transfer theorem sets out the transfer\nconditions necessary to prevent negative transfer. Linear monotonic maps meet\nthe transfer conditions of the theorem and, hence, are used to construct\nhomogeneous representations of the heterogeneous domains, which in principle\nprevents negative transfer. The metric and the theorem have been implemented in\nan innovative transfer model, called a Grassmann-LMM-geodesic flow kernel\n(GLG), that is specifically designed for knowledge transfer across\nheterogeneous domains. The GLG model learns homogeneous representations of\nheterogeneous domains by minimizing the proposed metric. Knowledge is\ntransferred through these learned representations via a geodesic flow kernel.\nNotably, the theorem presented in this paper provides the sufficient transfer\nconditions needed to guarantee that knowledge is transferred from a source\ndomain to an unlabeled target domain with correctness.\n",
"title": "Heterogeneous Transfer Learning: An Unsupervised Approach"
}
| null | null | null | null | true | null |
3749
| null |
Default
| null | null |
null |
{
"abstract": " We develop a new class of path transformations for one-dimensional diffusions\nthat are tailored to alter their long-run behaviour from transient to recurrent\nor vice versa. This immediately leads to a formula for the distribution of the\nfirst exit times of diffusions, which is recently characterised by Karatzas and\nRuf \\cite{KR} as the minimal solution of an appropriate Cauchy problem under\nmore stringent conditions. A particular limit of these transformations also\nturn out to be instrumental in characterising the stochastic solutions of\nCauchy problems defined by the generators of strict local martingales, which\nare well-known for not having unique solutions even when one restricts\nsolutions to have linear growth. Using an appropriate diffusion transformation\nwe show that the aforementioned stochastic solution can be written in terms of\nthe unique classical solution of an {\\em alternative} Cauchy problem with\nsuitable boundary conditions. This in particular resolves the long-standing\nissue of non-uniqueness with the Black-Scholes equations in derivative pricing\nin the presence of {\\em bubbles}. Finally, we use these path transformations to\npropose a unified framework for solving explicitly the optimal stopping problem\nfor one-dimensional diffusions with discounting, which in particular is\nrelevant for the pricing and the computation of optimal exercise boundaries of\nperpetual American options.\n",
"title": "Diffusion transformations, Black-Scholes equation and optimal stopping"
}
| null | null | null | null | true | null |
3750
| null |
Default
| null | null |
null |
{
"abstract": " The visual representation of concepts or ideas through the use of simple\nshapes has always been explored in the history of Humanity, and it is believed\nto be the origin of writing. We focus on computational generation of visual\nsymbols to represent concepts. We aim to develop a system that uses background\nknowledge about the world to find connections among concepts, with the goal of\ngenerating symbols for a given concept. We are also interested in exploring the\nsystem as an approach to visual dissociation and visual conceptual blending.\nThis has a great potential in the area of Graphic Design as a tool to both\nstimulate creativity and aid in brainstorming in projects such as logo,\npictogram or signage design.\n",
"title": "Generation of concept-representative symbols"
}
| null | null | null | null | true | null |
3751
| null |
Default
| null | null |
null |
{
"abstract": " Schmidt's game is generally used to deduce qualitative information about the\nHausdorff dimensions of fractal sets and their intersections. However, one can\nalso ask about quantitative versions of the properties of winning sets. In this\npaper we show that such quantitative information has applications to various\nquestions including:\n* What is the maximal length of an arithmetic progression on the \"middle\n$\\epsilon$\" Cantor set?\n* What is the smallest $n$ such that there is some element of the ternary\nCantor set whose continued fraction partial quotients are all $\\leq n$?\n* What is the Hausdorff dimension of the set of $\\epsilon$-badly approximable\nnumbers on the Cantor set?\nWe show that a variant of Schmidt's game known as the $potential$ $game$ is\ncapable of providing better bounds on the answers to these questions than the\nclassical Schmidt's game. We also use the potential game to provide a new proof\nof an important lemma in the classical proof of the existence of Hall's Ray.\n",
"title": "Quantitative results using variants of Schmidt's game: Dimension bounds, arithmetic progressions, and more"
}
| null | null |
[
"Mathematics"
] | null | true | null |
3752
| null |
Validated
| null | null |
null |
{
"abstract": " Locally Checkable Labeling (LCL) problems include essentially all the classic\nproblems of $\\mathsf{LOCAL}$ distributed algorithms. In a recent enlightening\nrevelation, Chang and Pettie [arXiv 1704.06297] showed that any LCL (on bounded\ndegree graphs) that has an $o(\\log n)$-round randomized algorithm can be solved\nin $T_{LLL}(n)$ rounds, which is the randomized complexity of solving (a\nrelaxed variant of) the Lovász Local Lemma (LLL) on bounded degree $n$-node\ngraphs. Currently, the best known upper bound on $T_{LLL}(n)$ is $O(\\log n)$,\nby Chung, Pettie, and Su [PODC'14], while the best known lower bound is\n$\\Omega(\\log\\log n)$, by Brandt et al. [STOC'16]. Chang and Pettie conjectured\nthat there should be an $O(\\log\\log n)$-round algorithm.\nMaking the first step of progress towards this conjecture, and providing a\nsignificant improvement on the algorithm of Chung et al. [PODC'14], we prove\nthat $T_{LLL}(n)= 2^{O(\\sqrt{\\log\\log n})}$. Thus, any $o(\\log n)$-round\nrandomized distributed algorithm for any LCL problem on bounded degree graphs\ncan be automatically sped up to run in $2^{O(\\sqrt{\\log\\log n})}$ rounds.\nUsing this improvement and a number of other ideas, we also improve the\ncomplexity of a number of graph coloring problems (in arbitrary degree graphs)\nfrom the $O(\\log n)$-round results of Chung, Pettie and Su [PODC'14] to\n$2^{O(\\sqrt{\\log\\log n})}$. These problems include defective coloring, frugal\ncoloring, and list vertex-coloring.\n",
"title": "Sublogarithmic Distributed Algorithms for Lovász Local lemma, and the Complexity Hierarchy"
}
| null | null |
[
"Computer Science"
] | null | true | null |
3753
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we propose a novel continuous authentication system for\nsmartphone users. The proposed system entirely relies on unlabeled phone\nmovement patterns collected through smartphone accelerometer. The data was\ncollected in a completely unconstrained environment over five to twelve days.\nThe contexts of phone usage were identified using k-means clustering. Multiple\nprofiles, one for each context, were created for every user. Five machine\nlearning algorithms were employed for classification of genuine and impostors.\nThe performance of the system was evaluated over a diverse population of 57\nusers. The mean equal error rates achieved by Logistic Regression, Neural\nNetwork, kNN, SVM, and Random Forest were 13.7%, 13.5%, 12.1%, 10.7%, and 5.6%\nrespectively. A series of statistical tests were conducted to compare the\nperformance of the classifiers. The suitability of the proposed system for\ndifferent types of users was also investigated using the failure to enroll\npolicy.\n",
"title": "Continuous User Authentication via Unlabeled Phone Movement Patterns"
}
| null | null | null | null | true | null |
3754
| null |
Default
| null | null |
null |
{
"abstract": " A CM-order is a reduced order equipped with an involution that mimics complex\nconjugation. The Witt-Picard group of such an order is a certain group of ideal\nclasses that is closely related to the \"minus part\" of the class group. We\npresent a deterministic polynomial-time algorithm for the following problem,\nwhich may be viewed as a special case of the principal ideal testing problem:\ngiven a CM-order, decide whether two given elements of its Witt-Picard group\nare equal. In order to prevent coefficient blow-up, the algorithm operates with\nlattices rather than with ideals. An important ingredient is a technique\nintroduced by Gentry and Szydlo in a cryptographic context. Our application of\nit to lattices over CM-orders hinges upon a novel existence theorem for\nauxiliary ideals, which we deduce from a result of Konyagin and Pomerance in\nelementary number theory.\n",
"title": "Testing isomorphism of lattices over CM-orders"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
3755
| null |
Validated
| null | null |
null |
{
"abstract": " Cross-correlations in the activity in neural networks are commonly used to\ncharacterize their dynamical states and their anatomical and functional\norganizations. Yet, how these latter network features affect the spatiotemporal\nstructure of the correlations in recurrent networks is not fully understood.\nHere, we develop a general theory for the emergence of correlated neuronal\nactivity from the dynamics in strongly recurrent networks consisting of several\npopulations of binary neurons. We apply this theory to the case in which the\nconnectivity depends on the anatomical or functional distance between the\nneurons. We establish the architectural conditions under which the system\nsettles into a dynamical state where correlations are strong, highly robust and\nspatially modulated. We show that such strong correlations arise if the network\nexhibits an effective feedforward structure. We establish how this feedforward\nstructure determines the way correlations scale with the network size and the\ndegree of the connectivity. In networks lacking an effective feedforward\nstructure correlations are extremely small and only weakly depend on the number\nof connections per neuron. Our work shows how strong correlations can be\nconsistent with highly irregular activity in recurrent networks, two key\nfeatures of neuronal dynamics in the central nervous system.\n",
"title": "How strong are correlations in strongly recurrent neuronal networks?"
}
| null | null | null | null | true | null |
3756
| null |
Default
| null | null |
null |
{
"abstract": " Despite the fact that JSON is currently one of the most popular formats for\nexchanging data on the Web, there are very few studies on this topic and there\nis no agreed-upon theoretical framework for dealing with JSON. Therefore in\nthis paper we propose a formal data model for JSON documents and, based on the\ncommon features present in available systems using JSON, we define a\nlightweight query language allowing us to navigate through JSON documents. We\nalso introduce a logic capturing the schema proposal for JSON and study the\ncomplexity of basic computational tasks associated with these two formalisms.\n",
"title": "JSON: data model, query languages and schema specification"
}
| null | null | null | null | true | null |
3757
| null |
Default
| null | null |
null |
{
"abstract": " The present study introduces the human capital component to the Fama and\nFrench five-factor model, proposing an equilibrium six-factor asset pricing\nmodel. The study employs an aggregate of four sets of portfolios mimicking size\nand industry with varying dimensions. The first set consists of three sets of\nsix portfolios each, sorted on size to B/M, size to investment, and size to\nmomentum. The second set comprises five index portfolios; the third, four sets\nof twenty-five portfolios each, sorted on size to B/M, size to investment, size\nto profitability, and size to momentum; and the final set constitutes thirty\nindustry portfolios. To estimate the parameters of the six-factor asset pricing\nmodel for the four sets of variant portfolios, we use OLS and a generalized\nmethod of moments based robust instrumental variables technique (IVGMM). The\nresults obtained from the relevance, endogeneity, overidentifying restrictions,\nand Hausman specification tests indicate that the parameter estimates of\nthe six-factor model using IVGMM are robust and perform better than the OLS\napproach. The human capital component shares equally the predictive power\nalongside the factors in the framework in explaining the variations in return\non portfolios. Furthermore, we assess the t-ratio of the human capital\ncomponent of each IVGMM estimate of the six-factor asset pricing model for the\nfour sets of variant portfolios. The t-ratios of the human capital component in the\neighty-three IVGMM estimates are more than 3.00 with reference to the standard\nproposed by Harvey et al. (2016). This indicates the empirical success of the\nsix-factor asset-pricing model in explaining the variation in asset returns.\n",
"title": "A six-factor asset pricing model"
}
| null | null | null | null | true | null |
3758
| null |
Default
| null | null |
null |
{
"abstract": " In Diffusion Tensor Imaging (DTI) or High Angular Resolution Diffusion\nImaging (HARDI), a tensor field or a spherical function field (e.g., an\norientation distribution function field), can be estimated from measured\ndiffusion weighted images. In this paper, inspired by the microscopic\ntheoretical treatment of phases in liquid crystals, we introduce a novel\nmathematical framework, called Director Field Analysis (DFA), to study local\ngeometric structural information of white matter based on the reconstructed\ntensor field or spherical function field: 1) We propose a set of mathematical\ntools to process general director data, which consists of dyadic tensors that\nhave orientations but no direction. 2) We propose Orientational Order (OO) and\nOrientational Dispersion (OD) indices to describe the degree of alignment and\ndispersion of a spherical function in a single voxel or in a region,\nrespectively; 3) We also show how to construct a local orthogonal coordinate\nframe in each voxel exhibiting anisotropic diffusion; 4) Finally, we define\nthree indices to describe three types of orientational distortion (splay, bend,\nand twist) in a local spatial neighborhood, and a total distortion index to\ndescribe distortions of all three types. To our knowledge, this is the first\nwork to quantitatively describe orientational distortion (splay, bend, and\ntwist) in general spherical function fields from DTI or HARDI data. The\nproposed DFA and its related mathematical tools can be used to process not only\ndiffusion MRI data but also general director field data, and the proposed\nscalar indices are useful for detecting local geometric changes of white matter\nfor voxel-based or tract-based analysis in both DTI and HARDI acquisitions. The\nrelated codes and a tutorial for DFA will be released in DMRITool.\n",
"title": "Director Field Analysis (DFA): Exploring Local White Matter Geometric Structure in diffusion MRI"
}
| null | null | null | null | true | null |
3759
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents a new multi-objective deep reinforcement learning (MODRL)\nframework based on deep Q-networks. We propose the use of linear and non-linear\nmethods to develop the MODRL framework that includes both single-policy and\nmulti-policy strategies. The experimental results on two benchmark problems\nincluding the two-objective deep sea treasure environment and the\nthree-objective mountain car problem indicate that the proposed framework is\nable to converge to the optimal Pareto solutions effectively. The proposed\nframework is generic, which allows implementation of different deep\nreinforcement learning algorithms in different complex environments. This\ntherefore overcomes many difficulties involved with standard multi-objective\nreinforcement learning (MORL) methods existing in the current literature. The\nframework creates a platform as a testbed environment to develop methods for\nsolving various problems associated with the current MORL. Details of the\nframework implementation can be referred to\nthis http URL.\n",
"title": "A Multi-Objective Deep Reinforcement Learning Framework"
}
| null | null | null | null | true | null |
3760
| null |
Default
| null | null |
null |
{
"abstract": " Finding optimal correction of errors in generic stabilizer codes is a\ncomputationally hard problem, even for simple noise models. While this task can\nbe simplified for codes with some structure, such as topological stabilizer\ncodes, developing good and efficient decoders still remains a challenge. In our\nwork, we systematically study a very versatile class of decoders based on\nfeedforward neural networks. To demonstrate adaptability, we apply neural\ndecoders to the triangular color and toric codes under various noise models\nwith realistic features, such as spatially-correlated errors. We report that\nneural decoders provide significant improvement over leading efficient decoders\nin terms of the error-correction threshold. Using neural networks simplifies\nthe process of designing well-performing decoders, and does not require prior\nknowledge of the underlying noise model.\n",
"title": "Advantages of versatile neural-network decoding for topological codes"
}
| null | null | null | null | true | null |
3761
| null |
Default
| null | null |
null |
{
"abstract": " Finding semantically rich and computer-understandable representations for\ntextual dialogues, utterances and words is crucial for dialogue systems (or\nconversational agents), as their performance mostly depends on understanding\nthe context of conversations. Recent research aims at finding distributed\nvector representations (embeddings) for words, such that semantically similar\nwords are relatively close within the vector-space. Encoding the \"meaning\" of\ntext into vectors is a current trend, and text can range from words, phrases\nand documents to actual human-to-human conversations. In recent research\napproaches, responses have been generated utilizing a decoder architecture,\ngiven the vector representation of the current conversation. In this paper, the\nutilization of embeddings for answer retrieval is explored by using\nLocality-Sensitive Hashing Forest (LSH Forest), an Approximate Nearest Neighbor\n(ANN) model, to find similar conversations in a corpus and rank possible\ncandidates. Experimental results on the well-known Ubuntu Corpus (in English)\nand a customer service chat dataset (in Dutch) show that, in combination with a\ncandidate selection method, retrieval-based approaches outperform generative\nones and reveal promising future research directions towards the usability of\nsuch a system.\n",
"title": "A retrieval-based dialogue system utilizing utterance and context embeddings"
}
| null | null | null | null | true | null |
3762
| null |
Default
| null | null |
null |
{
"abstract": " For a safe, natural and effective human-robot social interaction, it is\nessential to develop a system that allows a robot to demonstrate\nperceivable responsive behaviors to complex human behaviors. We introduce the\nMultimodal Deep Attention Recurrent Q-Network, with which the robot exhibits\nhuman-like social interaction skills after 14 days of interacting with people\nin an uncontrolled real world. Each day during the 14 days, the\nsystem gathered robot-interaction experiences with people through a\ntrial-and-error method and then trained the MDARQN on these experiences using\nan end-to-end reinforcement learning approach. The results of interaction-based\nlearning indicate that the robot has learned to respond to complex human\nbehaviors in a perceivable and socially acceptable manner.\n",
"title": "Show, Attend and Interact: Perceivable Human-Robot Social Interaction through Neural Attention Q-Network"
}
| null | null | null | null | true | null |
3763
| null |
Default
| null | null |
null |
{
"abstract": " Continuous-time trajectory representations are a powerful tool that can be\nused to address several issues in many practical simultaneous localization and\nmapping (SLAM) scenarios, such as continuously collected measurements distorted by\nrobot motion, or asynchronous sensor measurements. Sparse Gaussian\nprocesses (GP) allow for a probabilistic non-parametric trajectory\nrepresentation that enables fast trajectory estimation by sparse GP regression.\nHowever, previous approaches are limited to dealing with vector space\nrepresentations of state only. In this technical report we extend the work by\nBarfoot et al. [1] to general matrix Lie groups, by applying a constant-velocity\nprior and defining a locally linear GP. This enables using the sparse GP approach in\na large space of practical SLAM settings. In this report we give the theory and\nleave the experimental evaluation to future publications.\n",
"title": "Sparse Gaussian Processes for Continuous-Time Trajectory Estimation on Matrix Lie Groups"
}
| null | null | null | null | true | null |
3764
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we consider testing the homogeneity of proportions in\nindependent binomial distributions, especially when data are sparse for a large\nnumber of groups. We provide broad aspects of our proposed tests such as\ntheoretical studies, simulations and real data application. We present the\nasymptotic null distributions and asymptotic powers for our proposed tests and\ncompare their performance with existing tests. Our simulation studies show that\nnone of the tests dominates the others; however, our proposed test and a few\nothers are expected to control the given sizes and achieve significant power. We also\npresent a real example regarding safety concerns associated with Avandia\n(rosiglitazone) in Nissen and Wolski (2007).\n",
"title": "Testing homogeneity of proportions from sparse binomial data with a large number of groups"
}
| null | null | null | null | true | null |
3765
| null |
Default
| null | null |
null |
{
"abstract": " We exhibit Borel probability measures on the unit sphere in $\\mathbb C^d$ for\n$d \\ge 2$ which are Henkin for the multiplier algebra of the Drury-Arveson\nspace, but not Henkin in the classical sense. This provides a negative answer\nto a conjecture of Clouâtre and Davidson.\n",
"title": "Henkin measures for the Drury-Arveson space"
}
| null | null | null | null | true | null |
3766
| null |
Default
| null | null |
null |
{
"abstract": " PCA is one of the most widely used dimension reduction techniques. A related\neasier problem is \"subspace learning\" or \"subspace estimation\". Given\nrelatively clean data, both are easily solved via singular value decomposition\n(SVD). The problem of subspace learning or PCA in the presence of outliers is\ncalled robust subspace learning or robust PCA (RPCA). For long data sequences,\nif one tries to use a single lower dimensional subspace to represent the data,\nthe required subspace dimension may end up being quite large. For such data, a\nbetter model is to assume that it lies in a low-dimensional subspace that can\nchange over time, albeit gradually. The problem of tracking such data (and the\nsubspaces) while being robust to outliers is called robust subspace tracking\n(RST). This article provides a magazine-style overview of the entire field of\nrobust subspace learning and tracking. In particular, solutions for three\nproblems are discussed in detail: RPCA via sparse+low-rank matrix decomposition\n(S+LR), RST via S+LR, and \"robust subspace recovery (RSR)\". RSR assumes that an\nentire data vector is either an outlier or an inlier. The S+LR formulation\ninstead assumes that outliers occur on only a few data vector indices and hence\nare well modeled as sparse corruptions.\n",
"title": "Robust Subspace Learning: Robust PCA, Robust Subspace Tracking, and Robust Subspace Recovery"
}
| null | null | null | null | true | null |
3767
| null |
Default
| null | null |
null |
{
"abstract": " Associating image regions with text queries has been recently explored as a\nnew way to bridge visual and linguistic representations. A few pioneering\napproaches have been proposed based on recurrent neural language models trained\ngeneratively (e.g., generating captions), but achieving somewhat limited\nlocalization accuracy. To better address natural-language-based visual entity\nlocalization, we propose a discriminative approach. We formulate a\ndiscriminative bimodal neural network (DBNet), which can be trained by a\nclassifier with extensive use of negative samples. Our training objective\nencourages better localization on single images, incorporates text phrases in a\nbroad range, and properly pairs image regions with text phrases into positive\nand negative examples. Experiments on the Visual Genome dataset demonstrate the\nproposed DBNet significantly outperforms previous state-of-the-art methods both\nfor localization on single images and for detection on multiple images. We\nalso establish an evaluation protocol for natural-language visual detection.\n",
"title": "Discriminative Bimodal Networks for Visual Localization and Detection with Natural Language Queries"
}
| null | null | null | null | true | null |
3768
| null |
Default
| null | null |
null |
{
"abstract": " The open and closed \\textit{symmetrized polydisc} or, \\textit{symmetrized\n$n$-disc} for $n\\geq 2$, are the following subsets of $\\mathbb C^n$:\n\\begin{align*} \\mathbb G_n &=\\left\\{ \\left(\\sum_{1\\leq i\\leq n} z_i,\\sum_{1\\leq\ni<j\\leq n}z_iz_j,\\dots, \\prod_{i=1}^n z_i \\right): \\,|z_i|< 1, i=1,\\dots,n\n\\right \\}, \\\\ \\Gamma_n & =\\left\\{ \\left(\\sum_{1\\leq i\\leq n} z_i,\\sum_{1\\leq\ni<j\\leq n}z_iz_j,\\dots, \\prod_{i=1}^n z_i \\right): \\,|z_i|\\leq 1, i=1,\\dots,n\n\\right \\}. \\end{align*} A tuple of $n$ commuting operators\n$(S_1,\\dots,S_{n-1},P)$ defined on a Hilbert space $\\mathcal H$ for which\n$\\Gamma_n$ is a spectral set is called a $\\Gamma_n$-contraction. In this\narticle, we show by a counterexample that rational dilation fails on the\nsymmetrized $n$-disc for any $n\\geq 3$. We find new characterizations for the\npoints in $\\mathbb G_n$ and $\\Gamma_n$. We also present a few new\ncharacterizations for the $\\Gamma_n$-unitaries and $\\Gamma_n$-isometries.\n",
"title": "The failure of rational dilation on the symmetrized $n$-disk for any $n\\geq 3$"
}
| null | null |
[
"Mathematics"
] | null | true | null |
3769
| null |
Validated
| null | null |
null |
{
"abstract": " Partial differential equations with distributional sources---in particular,\ninvolving (derivatives of) delta distributions---have become increasingly\nubiquitous in numerous areas of physics and applied mathematics. It is often of\nconsiderable interest to obtain numerical solutions for such equations, but any\nsingular (\"particle\"-like) source modeling invariably introduces nontrivial\ncomputational obstacles. A common method to circumvent these is through some\nform of delta function approximation procedure on the computational grid;\nhowever, this often carries significant limitations on the efficiency of the\nnumerical convergence rates, or sometimes even the resolvability of the problem\nat all.\nIn this paper, we present an alternative technique for tackling such\nequations which avoids the singular behavior entirely: the\n\"Particle-without-Particle\" method. Previously introduced in the context of the\nself-force problem in gravitational physics, the idea is to discretize the\ncomputational domain into two (or more) disjoint pseudospectral\n(Chebyshev-Lobatto) grids such that the \"particle\" is always at the interface\nbetween them; thus, one only needs to solve homogeneous equations in each\ndomain, with the source effectively replaced by jump (boundary) conditions\nthereon. We prove here that this method yields solutions to any linear PDE the\nsource of which is any linear combination of delta distributions and\nderivatives thereof supported on a one-dimensional subspace of the problem\ndomain. We then implement it to numerically solve a variety of relevant PDEs:\nhyperbolic (with applications to neuroscience and acoustics), parabolic (with\napplications to finance), and elliptic. We generically obtain improved\nconvergence rates relative to typical past implementations relying on delta\nfunction approximations.\n",
"title": "Particle-without-Particle: a practical pseudospectral collocation method for linear partial differential equations with distributional sources"
}
| null | null | null | null | true | null |
3770
| null |
Default
| null | null |
null |
{
"abstract": " Fracton order is a new kind of quantum order characterized by topological\nexcitations that exhibit remarkable mobility restrictions and a robust ground\nstate degeneracy (GSD) which can increase exponentially with system size. In\nthis paper, we present a generic lattice construction (in three dimensions) for\na generalized X-cube model of fracton order, where the mobility restrictions of\nthe subdimensional particles inherit the geometry of the lattice. This helps\nexplain a previous result that lattice curvature can produce a robust GSD, even\non a manifold with trivial topology. We provide explicit examples to show that\nthe (zero temperature) phase of matter is sensitive to the lattice geometry. In\none example, the lattice geometry confines the dimension-1 particles to small\nloops, which allows the fractons to be fully mobile charges, and the resulting\nphase is equivalent to (3+1)-dimensional toric code. However, the phase is\nsensitive to more than just lattice curvature; different lattices without\ncurvature (e.g. cubic or stacked kagome lattices) also result in different\nphases of matter, which are separated by phase transitions. Unintuitively\nhowever, according to a previous definition of phase [Chen, Gu, Wen 2010], even\njust a rotated or rescaled cubic lattice results in different phases of matter,\nwhich motivates us to propose a new and coarser definition of phase for gapped\nground states and fracton order. The new equivalence relation between ground\nstates is given by the composition of a local unitary transformation and a\nquasi-isometry (which can rotate and rescale the lattice); equivalently, ground\nstates are in the same phase if they can be adiabatically connected by varying\nboth the Hamiltonian and the positions of the degrees of freedom (via a\nquasi-isometry). In light of the importance of geometry, we further propose\nthat fracton orders should be regarded as a geometric order.\n",
"title": "X-Cube Fracton Model on Generic Lattices: Phases and Geometric Order"
}
| null | null | null | null | true | null |
3771
| null |
Default
| null | null |
null |
{
"abstract": " Min-SEIS-Cluster is an optimization problem which aims at minimizing the\ninfection spreading in networks. In this problem, nodes can be susceptible to\nan infection, exposed to an infection, or infectious. One of the main features\nof this problem is the fact that nodes have different dynamics when interacting\nwith other nodes from the same community. Thus, the problem is characterized by\ndistinct probabilities of infecting nodes from both the same and from different\ncommunities. This paper presents a new genetic algorithm that solves the\nMin-SEIS-Cluster problem. This genetic algorithm surpassed the current\nheuristic of this problem significantly, reducing the number of infected nodes\nduring the simulation of the epidemics. The results therefore suggest that our\nnew genetic algorithm is the state-of-the-art heuristic to solve this problem.\n",
"title": "Genetic Algorithm for Epidemic Mitigation by Removing Relationships"
}
| null | null | null | null | true | null |
3772
| null |
Default
| null | null |
null |
{
"abstract": " The statistical distribution of galaxies is a powerful probe to constrain\ncosmological models and gravity. In particular the matter power spectrum $P(k)$\nbrings information about the cosmological distance evolution and the galaxy\nclustering together. However the building of $P(k)$ from galaxy catalogues\nneeds a cosmological model to convert angles on the sky and redshifts into\ndistances, which leads to difficulties when comparing data with predicted\n$P(k)$ from other cosmological models, and for photometric surveys like LSST.\nThe angular power spectrum $C_\\ell(z_1,z_2)$ between two bins located at\nredshift $z_1$ and $z_2$ contains the same information as the matter power\nspectrum, is free from any cosmological assumption, but the prediction of\n$C_\\ell(z_1,z_2)$ from $P(k)$ is a costly computation when performed exactly.\nThe Angpow software aims at computing quickly and accurately the auto\n($z_1=z_2$) and cross ($z_1 \\neq z_2$) angular power spectra between redshift\nbins. We describe the developed algorithm, based on expansions in the\nChebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. We\nvalidate the results with other codes, and benchmark the performance. Angpow is\nflexible and can handle any user defined power spectra, transfer functions, and\nredshift selection windows. The code is fast enough to be embedded inside\nprograms exploring large cosmological parameter spaces through the\n$C_\\ell(z_1,z_2)$ comparison with data. We emphasize that the Limber\napproximation, often used to speed up the computation, gives wrong $C_\\ell$\nvalues for cross-correlations.\n",
"title": "Angpow: a software for the fast computation of accurate tomographic power spectra"
}
| null | null | null | null | true | null |
3773
| null |
Default
| null | null |
null |
{
"abstract": " Causal mediation analysis can improve understanding of the mechanisms\nunderlying epidemiologic associations. However, the utility of natural direct\nand indirect effect estimation has been limited by the assumption of no\nconfounder of the mediator-outcome relationship that is affected by prior\nexposure---an assumption frequently violated in practice. We build on recent\nwork that identified alternative estimands that do not require this assumption\nand propose a flexible and doubly robust semiparametric targeted minimum\nloss-based estimator for data-dependent stochastic direct and indirect effects.\nThe proposed method treats the intermediate confounder affected by prior\nexposure as a time-varying confounder and intervenes stochastically on the\nmediator using a distribution which conditions on baseline covariates and\nmarginalizes over the intermediate confounder. In addition, we assume the\nstochastic intervention is given, conditional on observed data, which results\nin a simpler estimator and weaker identification assumptions. We demonstrate\nthe estimator's finite sample and robustness properties in a simple simulation\nstudy. We apply the method to an example from the Moving to Opportunity\nexperiment. In this application, randomization to receive a housing voucher is\nthe treatment/instrument that influenced moving to a low-poverty neighborhood,\nwhich is the intermediate confounder. We estimate the data-dependent stochastic\ndirect effect of randomization to the voucher group on adolescent marijuana use\nnot mediated by change in school district and the stochastic indirect effect\nmediated by change in school district. We find no evidence of mediation. Our\nestimator is easy to implement in standard statistical software, and we provide\nannotated R code to further lower implementation barriers.\n",
"title": "Robust and Flexible Estimation of Stochastic Mediation Effects: A Proposed Method and Example in a Randomized Trial Setting"
}
| null | null | null | null | true | null |
3774
| null |
Default
| null | null |
null |
{
"abstract": " Finding an intermediate-mass black hole (IMBH) in a globular cluster (or\nproving its absence) would provide valuable insights into our understanding of\ngalaxy formation and evolution. However, it is challenging to identify a unique\nsignature of an IMBH that cannot be accounted for by other processes.\nObservational claims of IMBH detection are indeed often based on analyses of\nthe kinematics of stars in the cluster core, the most common signature being a\nrise in the velocity dispersion profile towards the centre of the system.\nUnfortunately, this IMBH signal is degenerate with the presence of\nradially-biased pressure anisotropy in the globular cluster. To explore the\nrole of anisotropy in shaping the observational kinematics of clusters, we\nanalyse the case of omega Cen by comparing the observed profiles to those\ncalculated from the family of LIMEPY models, that account for the presence of\nanisotropy in the system in a physically motivated way. The best-fit radially\nanisotropic models reproduce the observational profiles well, and describe the\ncentral kinematics as derived from Hubble Space Telescope proper motions\nwithout the need for an IMBH.\n",
"title": "Radial anisotropy in omega Cen limiting the room for an intermediate-mass black hole"
}
| null | null | null | null | true | null |
3775
| null |
Default
| null | null |
null |
{
"abstract": " A successful grasp requires careful balancing of the contact forces. Deducing\nwhether a particular grasp will be successful from indirect measurements, such\nas vision, is therefore quite challenging, and direct sensing of contacts\nthrough touch sensing provides an appealing avenue toward more successful and\nconsistent robotic grasping. However, in order to fully evaluate the value of\ntouch sensing for grasp outcome prediction, we must understand how touch\nsensing can influence outcome prediction accuracy when combined with other\nmodalities. Doing so using conventional model-based techniques is exceptionally\ndifficult. In this work, we investigate the question of whether touch sensing\naids in predicting grasp outcomes within a multimodal sensing framework that\ncombines vision and touch. To that end, we collected more than 9,000 grasping\ntrials using a two-finger gripper equipped with GelSight high-resolution\ntactile sensors on each finger, and evaluated visuo-tactile deep neural network\nmodels to directly predict grasp outcomes from either modality individually,\nand from both modalities together. Our experimental results indicate that\nincorporating tactile readings substantially improves grasping performance.\n",
"title": "The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
3776
| null |
Validated
| null | null |
null |
{
"abstract": " Let $\\mathcal{A}$ be a $C^*$-algebra of bounded uniformly continuous\nfunctions on $X=\\mathbb{R}^d$ such that $\\mathcal{A}$ is stable under\ntranslations and contains the continuous functions that have a limit at\ninfinity. Denote $\\mathcal{A}^\\dagger$ the boundary of $X$ in the character\nspace of $\\mathcal{A}$. Then the crossed product\n$\\mathscr{A}=\\mathcal{A}\\rtimes X$ of $\\mathcal{A}$ by the natural action of\n$X$ on $\\mathcal{A}$ is a well defined $C^*$-algebra and to each operator\n$A\\in\\mathscr{A}$ one may naturally associate a family of bounded operators\n$A_\\varkappa$ on $L^2(X)$ indexed by the characters\n$\\varkappa\\in\\mathcal{A}^\\dagger$. We show that the essential spectrum of $A$\nis the union of the spectra of the operators $A_\\varkappa$. The applications\ncover very general classes of singular elliptic operators.\n",
"title": "On the essential spectrum of elliptic differential operators"
}
| null | null | null | null | true | null |
3777
| null |
Default
| null | null |
null |
{
"abstract": " We present Shrinking Horizon Model Predictive Control (SHMPC) for\ndiscrete-time linear systems with Signal Temporal Logic (STL) specification\nconstraints under stochastic disturbances. The control objective is to maximize\nan optimization function under the restriction that a given STL specification\nis satisfied with high probability against stochastic uncertainties. We\nformulate a general solution, which does not require precise knowledge of the\nprobability distributions of the (possibly dependent) stochastic disturbances;\nonly the bounded support intervals of the density functions and moment\nintervals are used. For the specific case of disturbances that are independent\nand normally distributed, we optimize the controllers further by utilizing\nknowledge of the disturbance probability distributions. We show that in both\ncases, the control law can be obtained by solving optimization problems with\nlinear constraints at each step. We experimentally demonstrate effectiveness of\nthis approach by synthesizing a controller for an HVAC system.\n",
"title": "Shrinking Horizon Model Predictive Control with Signal Temporal Logic Constraints under Stochastic Disturbances"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
3778
| null |
Validated
| null | null |
null |
{
"abstract": " Scientific evaluation is a determinant of how scientists, institutions and\nfunders behave, and as such is a key element in the making of science. In this\narticle, we propose an alternative to the current norm of evaluating research\nwith journal rank. Following a well-defined notion of scientific value, we\nintroduce qualitative processes that can also be quantified and give rise to\nmeaningful and easy-to-use article-level metrics. In our approach, the goal of\na scientist is transformed from convincing an editorial board through a\nvertical process to convincing peers through an horizontal one. We argue that\nsuch an evaluation system naturally provides the incentives and logic needed to\nconstantly promote quality, reproducibility, openness and collaboration in\nscience. The system is legally and technically feasible and can gradually lead\nto the self-organized reappropriation of the scientific process by the\nscholarly community and its institutions. We propose an implementation of our\nevaluation system with the platform \"the Self-Journals of Science\"\n(www.sjscience.org).\n",
"title": "Novel processes and metrics for a scientific evaluation rooted in the principles of science - Version 1"
}
| null | null | null | null | true | null |
3779
| null |
Default
| null | null |
null |
{
"abstract": " Topological effects typically discussed in the context of quantum physics are\nemerging as one of the central paradigms of physics. Here, we demonstrate the\nrole of topology in energy transport through dimerized micro- and\nnano-mechanical lattices in the classical regime, i.e., essentially \"masses and\nsprings\". We show that the thermal conductance factorizes into topological and\nnon-topological components. The former takes on three discrete values and\narises due to the appearance of edge modes that prevent good contact between\nthe heat reservoirs and the bulk, giving a length-independent reduction of the\nconductance. In essence, energy input at the boundary mostly stays there, an\neffect robust against disorder and nonlinearity. These results bridge two\nseemingly disconnected disciplines of physics, namely topology and thermal\ntransport, and suggest ways to engineer thermal contacts, opening a direction\nto explore the ramifications of topological properties on nanoscale technology.\n",
"title": "Topological quantization of energy transport in micro- and nano-mechanical lattices"
}
| null | null | null | null | true | null |
3780
| null |
Default
| null | null |
null |
{
"abstract": " A high degree of consensus exists in the climate sciences over the role that\nhuman interference with the atmosphere is playing in changing the climate.\nFollowing the Paris Agreement, a similar consensus exists in the policy\ncommunity over the urgency of policy solutions to the climate problem. The\ncontext for climate policy is thus moving from agenda setting, which has now\nbeen mostly established, to impact assessment, in which we identify policy\npathways to implement the Paris Agreement. Most integrated assessment models\ncurrently used to address the economic and technical feasibility of avoiding\nclimate change are based on engineering perspectives with a normative systems\noptimisation philosophy, suitable for agenda setting, but unsuitable to assess\nthe socio-economic impacts of a realistic baskets of climate policies. Here, we\nintroduce a fully descriptive, simulation-based integrated assessment model\ndesigned specifically to assess policies, formed by the combination of (1) a\nhighly disaggregated macro-econometric simulation of the global economy based\non time series regressions (E3ME), (2) a family of bottom-up evolutionary\nsimulations of technology diffusion based on cross-sectional discrete choice\nmodels (FTT), and (3) a carbon cycle and atmosphere circulation model of\nintermediate complexity (GENIE-1). We use this combined model to create a\ndetailed global and sectoral policy map and scenario that sets the economy on a\npathway that achieves the goals of the Paris Agreement with >66% probability of\nnot exceeding 2$^\\circ$C of global warming. We propose a blueprint for a new\nrole for integrated assessment models in this upcoming policy assessment\ncontext.\n",
"title": "Environmental impact assessment for climate change policy with the simulation-based integrated assessment model E3ME-FTT-GENIE"
}
| null | null | null | null | true | null |
3781
| null |
Default
| null | null |
null |
{
"abstract": " In the Convex Body Chasing problem, we are given an initial point $v_0$ in\n$R^d$ and an online sequence of $n$ convex bodies $F_1, ..., F_n$. When we\nreceive $F_i$, we are required to move inside $F_i$. Our goal is to minimize\nthe total distance travelled. This fundamental online problem was first studied\nby Friedman and Linial (DCG 1993). They proved an $\\Omega(\\sqrt{d})$ lower\nbound on the competitive ratio, and conjectured that a competitive ratio\ndepending only on d is possible. However, despite much interest in the problem,\nthe conjecture remains wide open.\nWe consider the setting in which the convex bodies are nested: $F_1 \\supset\n... \\supset F_n$. The nested setting is closely related to extending the online\nLP framework of Buchbinder and Naor (ESA 2005) to arbitrary linear constraints.\nMoreover, this setting retains much of the difficulty of the general setting\nand captures an essential obstacle in resolving Friedman and Linial's\nconjecture. In this work, we give the first $f(d)$-competitive algorithm for\nchasing nested convex bodies in $R^d$.\n",
"title": "Nested Convex Bodies are Chaseable"
}
| null | null | null | null | true | null |
3782
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a notion of cocycle-induction for strong uniform approximate\nlattices in locally compact second countable groups and use it to relate\n(relative) Kazhdan- and Haagerup-type of approximate lattices to the\ncorresponding properties of the ambient locally compact groups. Our approach\napplies to large classes of uniform approximate lattices (though not all of\nthem) and is flexible enough to cover the $L^p$-versions of Property (FH) and\na-(FH)-menability as well as quasified versions thereof a la Burger--Monod and\nOzawa.\n",
"title": "Analytic properties of approximate lattices"
}
| null | null | null | null | true | null |
3783
| null |
Default
| null | null |
null |
{
"abstract": " Auxiliary variables are often needed for verifying that an implementation is\ncorrect with respect to a higher-level specification. They augment the formal\ndescription of the implementation without changing its semantics--that is, the\nset of behaviors that it describes. This paper explains rules for adding\nhistory, prophecy, and stuttering variables to TLA+ specifications, ensuring\nthat the augmented specification is equivalent to the original one. The rules\nare explained with toy examples, and they are used to verify the correctness of\na simplified version of a snapshot algorithm due to Afek et al.\n",
"title": "Auxiliary Variables in TLA+"
}
| null | null |
[
"Computer Science"
] | null | true | null |
3784
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we will present a homological model for Coloured Jones\nPolynomials. For each color $N \\in \\N$, we will describe the invariant\n$J_N(L,q)$ as a graded intersection pairing of certain homological classes in a\ncovering of the configuration space on the punctured disk. This construction is\nbased on the Lawrence representation and a result due to Kohno that relates\nquantum representations and homological representations of the braid groups.\n",
"title": "A Homological model for the coloured Jones polynomials"
}
| null | null | null | null | true | null |
3785
| null |
Default
| null | null |
null |
{
"abstract": " In this contribution we are concerned with the asymptotic behaviour as $u\\to\n\\infty$ of $\\mathbb{P}\\{\\sup_{t\\in [0,T]} X_u(t)> u\\}$, where $X_u(t),t\\in\n[0,T],u>0$ is a family of centered Gaussian processes with continuous\ntrajectories. A key application of our findings concerns\n$\\mathbb{P}\\{\\sup_{t\\in [0,T]} (X(t)+ g(t))> u\\}$ as $u\\to\\infty$, for $X$ a\ncentered Gaussian process and $g$ some measurable trend function. Further\napplications include the approximation of both the ruin time and the ruin\nprobability of the Brownian motion risk model with constant force of interest.\n",
"title": "Extremes of threshold-dependent Gaussian processes"
}
| null | null | null | null | true | null |
3786
| null |
Default
| null | null |
null |
{
"abstract": " This paper proposes novel tests for the absence of jumps in a univariate\nsemimartingale and for the absence of common jumps in a bivariate\nsemimartingale. Our methods rely on ratio statistics of power variations based\non irregular observations, sampled at different frequencies. We develop central\nlimit theorems for the statistics under the respective null hypotheses and\napply bootstrap procedures to assess the limiting distributions. Further we\ndefine corrected statistics to improve the finite sample performance.\nSimulations show that the test based on our corrected statistic yields good\nresults and even outperforms existing tests in the case of regular\nobservations.\n",
"title": "The null hypothesis of common jumps in case of irregular and asynchronous observations"
}
| null | null | null | null | true | null |
3787
| null |
Default
| null | null |
null |
{
"abstract": " Deriving the optimal safety stock quantity with which to meet customer\nsatisfaction is one of the most important topics in stock management. However,\nit is difficult to control the stock management of correlated marketable\nmerchandise when using an inventory control method that was developed under the\nassumption that the demands are not correlated. For this, we propose a\ndeterministic approach that uses a probability inequality to derive a\nreasonable safety stock for the case in which we know the correlation between\nvarious commodities. Moreover, over a given lead time, the relation between the\nappropriate safety stock and the allowable stockout rate is analytically\nderived, and the potential of our proposed procedure is validated by numerical\nexperiments.\n",
"title": "Property Safety Stock Policy for Correlated Commodities Based on Probability Inequality"
}
| null | null | null | null | true | null |
3788
| null |
Default
| null | null |
null |
{
"abstract": " In this article, we prove Carleman estimates for the generalized\ntime-fractional advection-diffusion equations by considering the fractional\nderivative as perturbation for the first order time-derivative. As a direct\napplication of the Carleman estimates, we show a conditional stability of a\nlateral Cauchy problem for the time-fractional advection-diffusion equation,\nand we also investigate the stability of an inverse source problem.\n",
"title": "Carleman estimates for the time-fractional advection-diffusion equations and applications"
}
| null | null | null | null | true | null |
3789
| null |
Default
| null | null |
null |
{
"abstract": " Unsupervised domain mapping has attracted substantial attention in recent\nyears due to the success of models based on the cycle-consistency assumption.\nThese models map between two domains by fooling a probabilistic discriminator,\nthereby matching the probability distributions of the real and generated data.\nInstead of this probabilistic approach, we cast the problem in terms of\naligning the geometry of the manifolds of the two domains. We introduce the\nManifold Geometry Matching Generative Adversarial Network (MGM GAN), which adds\ntwo novel mechanisms to facilitate GANs sampling from the geometry of the\nmanifold rather than the density and then aligning two manifold geometries: (1)\nan importance sampling technique that reweights points based on their density\non the manifold, making the discriminator only able to discern geometry and (2)\na penalty adapted from traditional manifold alignment literature that\nexplicitly enforces the geometry to be preserved. The MGM GAN leverages the\nmanifolds arising from a pre-trained autoencoder to bridge the gap between\nformal manifold alignment literature and existing GAN work, and demonstrate the\nadvantages of modeling the manifold geometry over its density.\n",
"title": "Generating and Aligning from Data Geometries with Generative Adversarial Networks"
}
| null | null | null | null | true | null |
3790
| null |
Default
| null | null |
null |
{
"abstract": " The spin of Wolf-Rayet (WR) stars at low metallicity (Z) is most relevant for\nour understanding of gravitational wave sources such as GW 150914, as well as\nthe incidence of long-duration gamma-ray bursts (GRBs). Two scenarios have been\nsuggested for both phenomena: one of them involves rapid rotation and\nquasi-chemical homogeneous evolution (CHE), the other invokes classical\nevolution through mass loss in single and binary systems. WR spin rates might\nenable us to test these two scenarios. In order to obtain empirical constraints\non black hole progenitor spin, we infer wind asymmetries in all 12 known WR\nstars in the Small Magellanic Cloud (SMC) at Z = 1/5 Zsun, as well as within a\nsignificantly enlarged sample of single and binary WR stars in the Large\nMagellanic Cloud (LMC at Z = 1/2 Zsun), tripling the sample of Vink (2007).\nThis brings the total LMC sample to 39, making it appropriate for comparison to\nthe Galactic sample. We measure WR wind asymmetries with VLT-FORS linear\nspectropolarimetry. We report the detection of new line effects in the LMC WN\nstar BAT99-43 and the WC star BAT99-70, as well as the famous WR/LBV HD 5980 in\nthe SMC, which might be evolving chemically homogeneously. With the previous\nreported line effects in the late-type WNL (Ofpe/WN9) objects BAT99-22 and\nBAT99-33, this brings the total LMC WR sample to 4, i.e. a frequency of ~10%.\nPerhaps surprisingly, the incidence of line effects amongst low-Z WR stars is\nnot found to be any higher than amongst the Galactic WR sample, challenging the\nrotationally-induced CHE model. As WR mass loss is likely Z-dependent, our\nMagellanic Cloud line-effect WR stars may maintain their surface rotation and\nfulfill the basic conditions for producing long GRBs, both via the classical\npost-red supergiant (RSG) or luminous blue variable (LBV) channel, as well as\nresulting from CHE due to physics specific to very massive stars (VMS).\n",
"title": "Wolf-Rayet spin at low metallicity and its implication for Black Hole formation channels"
}
| null | null | null | null | true | null |
3791
| null |
Default
| null | null |
null |
{
"abstract": " There is surprisingly little known about agenda setting for international\ndevelopment in the United Nations (UN) despite it having a significant\ninfluence on the process and outcomes of development efforts. This paper\naddresses this shortcoming using a novel approach that applies natural language\nprocessing techniques to countries' annual statements in the UN General Debate.\nEvery year UN member states deliver statements during the General Debate on\ntheir governments' perspective on major issues in world politics. These\nspeeches provide invaluable information on state preferences on a wide range of\nissues, including international development, but have largely been overlooked\nin the study of global politics. This paper identifies the main international\ndevelopment topics that states raise in these speeches between 1970 and 2016,\nand examine the country-specific drivers of international development rhetoric.\n",
"title": "What Drives the International Development Agenda? An NLP Analysis of the United Nations General Debate 1970-2016"
}
| null | null | null | null | true | null |
3792
| null |
Default
| null | null |
null |
{
"abstract": " Complex networks are often used to represent systems that are not static but\ngrow with time: people make new friendships, new papers are published and refer\nto the existing ones, and so forth. To assess the statistical significance of\nmeasurements made on such networks, we propose a randomization methodology---a\ntime-respecting null model---that preserves both the network's degree sequence\nand the time evolution of individual nodes' degree values. By preserving the\ntemporal linking patterns of the analyzed system, the proposed model is able to\nfactor out the effect of the system's temporal patterns on its structure. We\napply the model to the citation network of Physical Review scholarly papers and\nthe citation network of US movies. The model reveals that the two datasets are\nstrikingly different with respect to their degree-degree correlations, and we\ndiscuss the important implications of this finding on the information provided\nby paradigmatic node centrality metrics such as indegree and Google's PageRank.\nThe randomization methodology proposed here can be used to assess the\nsignificance of any structural property in growing networks, which could bring\nnew insights into the problems where null models play a critical role, such as\nthe detection of communities and network motifs.\n",
"title": "Randomizing growing networks with a time-respecting null model"
}
| null | null | null | null | true | null |
3793
| null |
Default
| null | null |
null |
{
"abstract": " The choice of tuning parameter in Bayesian variable selection is a critical\nproblem in modern statistics. Especially in the related work of nonlocal prior\nin regression setting, the scale parameter reflects the dispersion of the\nnon-local prior density around zero, and implicitly determines the size of the\nregression coefficients that will be shrunk to zero. In this paper, we\nintroduce a fully Bayesian approach with the pMOM nonlocal prior where we place\nan appropriate Inverse-Gamma prior on the tuning parameter to analyze a more\nrobust model that is comparatively immune to misspecification of scale\nparameter. Under standard regularity assumptions, we extend the previous work\nwhere $p$ is bounded by the number of observations $n$ and establish strong\nmodel selection consistency when $p$ is allowed to increase at a polynomial\nrate with $n$. Through simulation studies, we demonstrate that our model\nselection procedure outperforms commonly used penalized likelihood methods in a\nrange of simulation settings.\n",
"title": "High-dimensional posterior consistency for hierarchical non-local priors in regression"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
3794
| null |
Validated
| null | null |
null |
{
"abstract": " We introduce a metric of mutual energy for adelic measures associated to the\nArakelov-Zhang pairing. Using this metric and potential theoretic techniques\ninvolving discrete approximations to energy integrals, we prove an effective\nbound on a problem of Baker and DeMarco on unlikely intersections of dynamical\nsystems, specifically, for the set of complex parameters $c$ for which $z=0$\nand $1$ are both preperiodic under iteration of $f_c(z)=z^2 + c$.\n",
"title": "A metric of mutual energy and unlikely intersections for dynamical systems"
}
| null | null | null | null | true | null |
3795
| null |
Default
| null | null |
null |
{
"abstract": " Volvox barberi is a multicellular green alga forming spherical colonies of\n10000-50000 differentiated somatic and germ cells. Here, I show that these\ncolonies actively self-organize over minutes into \"flocks\" that can contain\nmore than 100 colonies moving and rotating collectively for hours. The colonies\nin flocks form two-dimensional, irregular, \"active crystals\", with lattice\nangles and colony diameters both following log-normal distributions. Comparison\nwith a dynamical simulation of soft spheres with diameters matched to the\nVolvox samples, and a weak long-range attractive force, show that the Volvox\nflocks achieve optimal random close-packing. A dye tracer in the Volvox medium\nrevealed large hydrodynamic vortices generated by colony and flock rotations,\nproviding a likely source of the forces leading to flocking and optimal\npacking.\n",
"title": "Volvox barberi flocks, forming near-optimal, two-dimensional, polydisperse lattice packings"
}
| null | null | null | null | true | null |
3796
| null |
Default
| null | null |
null |
{
"abstract": " Accuracy is one of the basic principles of journalism. However, it is\nincreasingly hard to manage due to the diversity of news media. Some editors of\nonline news tend to use catchy headlines which trick readers into clicking.\nThese headlines are either ambiguous or misleading, degrading the reading\nexperience of the audience. Thus, identifying inaccurate news headlines is a\ntask worth studying. Previous work names these headlines \"clickbaits\" and\nmainly focus on the features extracted from the headlines, which limits the\nperformance since the consistency between headlines and news bodies is\nunderappreciated. In this paper, we clearly redefine the problem and identify\nambiguous and misleading headlines separately. We utilize class sequential\nrules to exploit structure information when detecting ambiguous headlines. For\nthe identification of misleading headlines, we extract features based on the\ncongruence between headlines and bodies. To make use of the large unlabeled\ndata set, we apply a co-training method and gain an increase in performance.\nThe experiment results show the effectiveness of our methods. Then we use our\nclassifiers to detect inaccurate headlines crawled from different sources and\nconduct a data analysis.\n",
"title": "Learning to Identify Ambiguous and Misleading News Headlines"
}
| null | null | null | null | true | null |
3797
| null |
Default
| null | null |
null |
{
"abstract": " It has recently become possible to study the dynamics of information\ndiffusion in techno-social systems at scale, due to the emergence of online\nplatforms, such as Twitter, with millions of users. One question that\nsystematically recurs is whether information spreads according to simple or\ncomplex dynamics: does each exposure to a piece of information have an\nindependent probability of a user adopting it (simple contagion), or does this\nprobability depend instead on the number of sources of exposure, increasing\nabove some threshold (complex contagion)? Most studies to date are\nobservational and, therefore, unable to disentangle the effects of confounding\nfactors such as social reinforcement, homophily, limited attention, or network\ncommunity structure. Here we describe a novel controlled experiment that we\nperformed on Twitter using `social bots' deployed to carry out coordinated\nattempts at spreading information. We propose two Bayesian statistical models\ndescribing simple and complex contagion dynamics, and test the competing\nhypotheses. We provide experimental evidence that the complex contagion model\ndescribes the observed information diffusion behavior more accurately than\nsimple contagion. Future applications of our results include more effective\ndefenses against malicious propaganda campaigns on social media, improved\nmarketing and advertisement strategies, and design of effective network\nintervention techniques.\n",
"title": "Evidence of Complex Contagion of Information in Social Media: An Experiment Using Twitter Bots"
}
| null | null |
[
"Computer Science",
"Physics"
] | null | true | null |
3798
| null |
Validated
| null | null |
null |
{
"abstract": " A set is called recurrent if its minimal automaton is strongly connected and\nbirecurrent if it is recurrent as well as its reversal. We prove a series of\nresults concerning birecurrent sets. It is already known that any birecurrent\nset is completely reducible (that is, such that the minimal representation of\nits characteristic series is completely reducible). The main result of this\npaper characterizes completely reducible sets as linear combinations of\nbirecurrent sets\n",
"title": "Birecurrent sets"
}
| null | null | null | null | true | null |
3799
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents the concept of an In situ Fabricator, a mobile robot\nintended for on-site manufacturing, assembly and digital fabrication. We\npresent an overview of a prototype system, its capabilities, and highlight the\nimportance of high-performance control, estimation and planning algorithms for\nachieving desired construction goals. Next, we detail on two architectural\napplication scenarios: first, building a full-size undulating brick wall, which\nrequired a number of repositioning and autonomous localisation manoeuvres.\nSecond, the Mesh Mould concrete process, which shows that an In situ Fabricator\nin combination with an innovative digital fabrication tool can be used to\nenable completely novel building technologies. Subsequently, important\nlimitations and disadvantages of our approach are discussed. Based on that, we\nidentify the need for a new type of robotic actuator, which facilitates the\ndesign of novel full-scale construction robots. We provide brief insight into\nthe development of this actuator and conclude the paper with an outlook on the\nnext-generation In situ Fabricator, which is currently under development.\n",
"title": "Mobile Robotic Fabrication at 1:1 scale: the In situ Fabricator"
}
| null | null | null | null | true | null |
3800
| null |
Default
| null | null |