Dataset schema (one record per row; column name followed by its type as shown in the source viewer):

    text              null
    inputs            dict    ({ "abstract": ..., "title": ... })
    prediction        null
    prediction_agent  null
    annotation        list
    annotation_agent  null
    multi_label       bool    (1 class)
    explanation       null
    id                string  (length 1 to 5)
    metadata          null
    status            string  (2 classes: "Default", "Validated")
    event_timestamp   null
    metrics           null

In every complete record of this excerpt, the fields text, prediction, prediction_agent, annotation_agent, explanation, metadata, event_timestamp, and metrics are null and multi_label is true. Only inputs, annotation, id, and status carry varying values, so each record below is listed as its inputs dict followed by one line with those varying fields. A minimal parsing sketch follows the schema.

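The column set above (prediction and annotation agents, status, event_timestamp, metrics) looks like the export of an annotation tool used to label arXiv abstracts with a single subject class, although the exact tooling is not stated here. As a convenience, the following is a minimal, self-contained Python sketch of how such records could be represented and filtered. The field names mirror the schema above; everything else is an assumption, in particular the JSON-lines storage format, the file name records.jsonl, and the helper names parse_record and load_records.

```python
# Hypothetical sketch: a plain-Python view of one record under the schema above.
# Field names mirror the listed columns; the JSON-lines storage format and the
# file name "records.jsonl" are assumptions, not facts about this dataset.
import json
from collections import Counter
from dataclasses import dataclass, fields
from typing import Any, Optional


@dataclass
class Record:
    id: str                            # e.g. "20905"
    inputs: dict                       # {"abstract": "...", "title": "..."}
    status: str                        # "Default" or "Validated"
    multi_label: bool = True
    annotation: Optional[list] = None  # e.g. ["Statistics"] on validated rows
    prediction: Any = None
    prediction_agent: Any = None
    annotation_agent: Any = None
    explanation: Any = None
    metadata: Any = None
    event_timestamp: Any = None
    metrics: Any = None
    text: Any = None


def parse_record(row: dict) -> Record:
    """Keep only the known columns from one parsed JSON row."""
    known = {f.name for f in fields(Record)}
    return Record(**{k: v for k, v in row.items() if k in known})


def load_records(path: str):
    """Yield Record objects from a JSON-lines file (one row per line)."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            if line.strip():
                yield parse_record(json.loads(line))


if __name__ == "__main__":
    # Example use: count how many validated records carry each label.
    label_counts = Counter(
        tuple(r.annotation)
        for r in load_records("records.jsonl")
        if r.status == "Validated" and r.annotation
    )
    print(label_counts)
```

Under these assumptions, the id, annotation, and status line shown after each inputs dict below maps one-to-one onto the varying Record fields.
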
{ "abstract": " In this paper we demonstrate how genetic algorithms can be used to reverse\nengineer an evaluation function's parameters for computer chess. Our results\nshow that using an appropriate mentor, we can evolve a program that is on par\nwith top tournament-playing chess programs, outperforming a two-time World\nComputer Chess Champion. This performance gain is achieved by evolving a\nprogram with a smaller number of parameters in its evaluation function to mimic\nthe behavior of a superior mentor which uses a more extensive evaluation\nfunction. In principle, our mentor-assisted approach could be used in a wide\nrange of problems for which appropriate mentors are available.\n", "title": "Genetic Algorithms for Mentor-Assisted Evaluation Function Optimization" }
id: 20901 | annotation: null | status: Default

{ "abstract": " We derive a lower bound on the location of global extrema of eigenfunctions\nfor a large class of non-local Schrödinger operators in convex domains under\nDirichlet exterior conditions, featuring the symbol of the kinetic term, the\nstrength of the potential, and the corresponding eigenvalue, and involving a\nnew universal constant. We show a number of probabilistic and spectral\ngeometric implications, and derive a Faber-Krahn type inequality for non-local\noperators. Our study also extends to potentials with compact support, and we\nestablish bounds on the location of extrema relative to the boundary edge of\nthe support or level sets around minima of the potential.\n", "title": "Universal Constraints on the Location of Extrema of Eigenfunctions of Non-Local Schrödinger Operators" }
id: 20902 | annotation: null | status: Default

{ "abstract": " This paper presents the InScript corpus (Narrative Texts Instantiating Script\nstructure). InScript is a corpus of 1,000 stories centered around 10 different\nscenarios. Verbs and noun phrases are annotated with event and participant\ntypes, respectively. Additionally, the text is annotated with coreference\ninformation. The corpus shows rich lexical variation and will serve as a unique\nresource for the study of the role of script knowledge in natural language\nprocessing.\n", "title": "InScript: Narrative texts annotated with script information" }
id: 20903 | annotation: null | status: Default

{ "abstract": " $\\alpha$-(BEDT-TTF)$_2$I$_3$ is a prominent example of charge ordering among\norganic conductors. In this work we explore the details of transport within the\ncharge-ordered as well as semimetallic phase at ambient pressure. In the\nhigh-temperature semimetallic phase, the mobilities and concentrations of both\nelectrons and holes conspire in such a way to create an almost\ntemperature-independent conductivity as well as a low Hall effect. We explain\nthese phenomena as a consequence of a predominantly inter-pocket scattering\nwhich equalizes mobilities of the two types of charge carriers. At low\ntemperatures, within the insulating charge-ordered phase two channels of\nconduction can be discerned: a temperature-dependent activation which follows\nthe mean-field behavior, and a nearest-neighbor hopping contribution. Together\nwith negative magnetoresistance, the latter relies on the presence of disorder.\nThe charge-ordered phase also features a prominent dielectric peak which bears\na similarity to relaxor ferroelectrics. Its dispersion is determined by\nfree-electron screening and pushed by disorder well below the transition\ntemperature. The source of this disorder can be found in the anion layers which\nrandomly perturb BEDT-TTF molecules through hydrogen bonds.\n", "title": "Semimetallic and charge-ordered $α$-(BEDT-TTF)$_2$I$_3$: on the role of disorder in dc transport and dielectric properties" }
id: 20904 | annotation: null | status: Default

{ "abstract": " Photometric stereo is a method for estimating the normal vectors of an object\nfrom images of the object under varying lighting conditions. Motivated by\nseveral recent works that extend photometric stereo to more general objects and\nlighting conditions, we study a new robust approach to photometric stereo that\nutilizes dictionary learning. Specifically, we propose and analyze two\napproaches to adaptive dictionary regularization for the photometric stereo\nproblem. First, we propose an image preprocessing step that utilizes an\nadaptive dictionary learning model to remove noise and other non-idealities\nfrom the image dataset before estimating the normal vectors. We also propose an\nalternative model where we directly apply the adaptive dictionary\nregularization to the normal vectors themselves during estimation. We study the\npractical performance of both methods through extensive simulations, which\ndemonstrate the state-of-the-art performance of both methods in the presence of\nnoise.\n", "title": "Robust Photometric Stereo Using Learned Image and Gradient Dictionaries" }
id: 20905 | annotation: [ "Statistics" ] | status: Validated

{ "abstract": " We present a micro aerial vehicle (MAV) system, built with inexpensive\noff-the-shelf hardware, for autonomously following trails in unstructured,\noutdoor environments such as forests. The system introduces a deep neural\nnetwork (DNN) called TrailNet for estimating the view orientation and lateral\noffset of the MAV with respect to the trail center. The DNN-based controller\nachieves stable flight without oscillations by avoiding overconfident behavior\nthrough a loss function that includes both label smoothing and entropy reward.\nIn addition to the TrailNet DNN, the system also utilizes vision modules for\nenvironmental awareness, including another DNN for object detection and a\nvisual odometry component for estimating depth for the purpose of low-level\nobstacle detection. All vision systems run in real time on board the MAV via a\nJetson TX1. We provide details on the hardware and software used, as well as\nimplementation details. We present experiments showing the ability of our\nsystem to navigate forest trails more robustly than previous techniques,\nincluding autonomous flights of 1 km.\n", "title": "Toward Low-Flying Autonomous MAV Trail Navigation using Deep Neural Networks for Environmental Awareness" }
id: 20906 | annotation: null | status: Default

{ "abstract": " The high efficiency of charge generation within organic photovoltaic blends\napparently contrasts with the strong \"classical\" attraction between newly\nformed electron-hole pairs. Several factors have been identified as possible\nfacilitators of charge dissociation, such as quantum mechanical coherence and\ndelocalization, structural and energetic disorder, built-in electric fields,\nnanoscale intermixing of the donor and acceptor components of the blends. Our\nmesoscale quantum-chemical model allows an unbiased assessment of their\nrelative importance, through excited-state calculations on systems containing\nthousands of donor and acceptor sites. The results on several model\nheterojunctions confirm that the classical model severely overestimates the\nbinding energy of the electron-hole pairs, produced by vertical excitation from\nthe electronic ground state. Using physically sensible parameters for the\nindividual materials, we find that the quantum mechanical energy difference\nbetween the lowest interfacial charge transfer states and the fully separated\nelectron and hole is of the order of the thermal energy.\n", "title": "Origin of Charge Separation at Organic Photovoltaic Heterojunctions: A Mesoscale Quantum Mechanical View" }
id: 20907 | annotation: null | status: Default

{ "abstract": " We construct a one-parameter family of Laplacians on the Sierpinski Gasket\nthat are symmetric and self-similar for the 9-map iterated function system\nobtained by iterating the standard 3-map iterated function system. Our main\nresult is the fact that all these Laplacians satisfy a version of spectral\ndecimation that builds a precise catalog of eigenvalues and eigenfunctions for\nany choice of the parameter. We give a number of applications of this spectral\ndecimation. We also prove analogous results for fractal Laplacians on the unit\nInterval, and this yields an analogue of the classical Sturm-Liouville theory\nfor the eigenfunctions of these one-dimensional Laplacians.\n", "title": "Spectral Decimation for Families of Self-Similar Symmetric Laplacians on the Sierpinski Gasket" }
id: 20908 | annotation: null | status: Default

{ "abstract": " Learning the model parameters of a multi-object dynamical system from partial\nand perturbed observations is a challenging task. Despite recent numerical\nadvancements in learning these parameters, theoretical guarantees are extremely\nscarce. In this article, we study the identifiability of these parameters and\nthe consistency of the corresponding maximum likelihood estimate (MLE) under\nassumptions on the different components of the underlying multi-object system.\nIn order to understand the impact of the various sources of observation noise\non the ability to learn the model parameters, we study the asymptotic variance\nof the MLE through the associated Fisher information matrix. For example, we\nshow that specific aspects of the multi-target tracking (MTT) problem such as\ndetection failures and unknown data association lead to a loss of information\nwhich is quantified in special cases of interest.\n", "title": "Identification of multi-object dynamical systems: consistency and Fisher information" }
id: 20909 | annotation: null | status: Default

{ "abstract": " Given an elliptic curve $E$ over a finite field $\\mathbb{F}_q$ we study the\nfinite extensions $\\mathbb{F}_{q^n}$ of $\\mathbb{F}_q$ such that the number of\n$\\mathbb{F}_{q^n}$-rational points on $E$ attains the Hasse upper bound. We\nobtain an upper bound on the degree $n$ for $E$ ordinary using an estimate for\nlinear forms in logarithms, which allows us to compute the pairs of isogeny\nclasses of such curves and degree $n$ for small $q$. Using a consequence of\nSchmidt's Subspace Theorem, we improve the upper bound to $n\\leq 11$ for\nsufficiently large $q$. We also show that there are infinitely many isogeny\nclasses of ordinary elliptic curves with $n=3$.\n", "title": "Elliptic curves maximal over extensions of finite base fields" }
id: 20910 | annotation: null | status: Default

{ "abstract": " Missing data recovery is an important and yet challenging problem in imaging\nand data science. Successful models often adopt certain carefully chosen\nregularization. Recently, the low dimension manifold model (LDMM) was\nintroduced by S.Osher et al. and shown effective in image inpainting. They\nobserved that enforcing low dimensionality on image patch manifold serves as a\ngood image regularizer. In this paper, we observe that having only the low\ndimension manifold regularization is not enough sometimes, and we need\nsmoothness as well. For that, we introduce a new regularization by combining\nthe low dimension manifold regularization with a higher order Curvature\nRegularization, and we call this new regularization CURE for short. The key\nstep of solving CURE is to solve a biharmonic equation on a manifold. We\nfurther introduce a weighted version of CURE, called WeCURE, in a similar\nmanner as the weighted nonlocal Laplacian (WNLL) method. Numerical experiments\nfor image inpainting and semi-supervised learning show that the proposed CURE\nand WeCURE significantly outperform LDMM and WNLL respectively.\n", "title": "CURE: Curvature Regularization For Missing Data Recovery" }
id: 20911 | annotation: null | status: Default

{ "abstract": " Microscopic artificial swimmers have recently become highly attractive due to\ntheir promising potential for biomedical applications. The pioneering work of\nDreyfus et al (2005) has demonstrated the motion of a microswimmer with an\nundulating chain of superparamagnetic beads, which is actuated by an\noscillating external magnetic field. Interestingly, it has also been\ntheoretically predicted that the swimming direction of this swimmer will\nundergo a $90^\\circ$-transition when the magnetic field's oscillations\namplitude is increased above a critical value of $\\sqrt{2}$. In this work, we\nfurther investigate this transition both theoretically and experimentally by\nusing numerical simulations and presenting a novel flexible microswimmer with a\nsuperparamagnetic head. We realize the $90^\\circ$-transition in swimming\ndirection, prove that this effect depends on both frequency and amplitude of\nthe oscillating magnetic field, and demonstrate the existence of an optimal\namplitude, under which, maximal swimming speed can be achieved. By\nasymptotically analyzing the dynamic motion of microswimmer with a minimal\ntwo-link model, we reveal that the stability transitions representing the\nchanges in the swimming direction are induced by the effect of nonlinear\nparametric excitation.\n", "title": "Nonlinear parametric excitation effect induces stability transitions in swimming direction of flexible superparamagnetic microswimmers" }
id: 20912 | annotation: null | status: Default

{ "abstract": " We show that on bounded Lipschitz pseudoconvex domains that admit good weight\nfunctions the $\\overline{\\partial}$-Neumann operators $N_q,\n\\overline{\\partial}^* N_{q}$, and $\\overline{\\partial} N_{q}$ are bounded on\n$L^p$ spaces for some values of $p$ greater than 2.\n", "title": "$L^p$ Mapping Properties for the Cauchy-Riemann Equations on Lipschitz Domains Admitting Subelliptic Estimates" }
id: 20913 | annotation: null | status: Default

{ "abstract": " Purpose: To provide a fast computational method, based on the proximal graph\nsolver (POGS) - a convex optimization solver using the alternating direction\nmethod of multipliers (ADMM), for calculating an optimal treatment plan in\nrotating shield brachytherapy (RSBT). RSBT treatment planning has more degrees\nof freedom than conventional high-dose-rate brachytherapy (HDR-BT) due to the\naddition of emission direction, and this necessitates a fast optimization\ntechnique to enable clinical usage. // Methods: The multi-helix RSBT (H-RSBT)\ndelivery technique was considered with five representative cervical cancer\npatients. Treatment plans were generated for all patients using the POGS method\nand the previously considered commercial solver IBM CPLEX. The rectum, bladder,\nsigmoid, high-risk clinical target volume (HR-CTV), and HR-CTV boundary were\nthe structures considered in our optimization problem, called the asymmetric\ndose-volume optimization with smoothness control. Dose calculation resolution\nwas 1x1x3 mm^3 for all cases. The H-RSBT applicator has 6 helices, with 33.3 mm\nof translation along the applicator per helical rotation and 1.7 mm spacing\nbetween dwell positions, yielding 17.5 degree emission angle spacing per 5 mm\nalong the applicator.// Results: For each patient, HR-CTV D90, HR-CTV D100,\nrectum D2cc, sigmoid D2cc, and bladder D2cc matched within 1% for CPLEX and\nPOGS. Also, we obtained similar EQD2 figures between CPLEX and POGS. POGS was\naround 18 times faster than CPLEX. Over all patients, total optimization times\nwere 32.1-65.4 seconds for CPLEX and 2.1-3.9 seconds for POGS. // Conclusions:\nPOGS substantially reduced treatment plan optimization time around 18 times for\nRSBT with similar HR-CTV D90, OAR D2cc values, and EQD2 figure relative to\nCPLEX, which is significant progress toward clinical translation of RSBT. POGS\nis also applicable to conventional HDR-BT.\n", "title": "Fast dose optimization for rotating shield brachytherapy" }
id: 20914 | annotation: null | status: Default

{ "abstract": " This work details the development of a three-dimensional (3D) electric field\nmodel for the LUX detector. The detector took data during two periods of\nsearching for weakly interacting massive particle (WIMP) searches. After the\nfirst period completed, a time-varying non-uniform negative charge developed in\nthe polytetrafluoroethylene (PTFE) panels that define the radial boundary of\nthe detector's active volume. This caused electric field variations in the\ndetector in time, depth and azimuth, generating an electrostatic\nradially-inward force on electrons on their way upward to the liquid surface.\nTo map this behavior, 3D electric field maps of the detector's active volume\nwere built on a monthly basis. This was done by fitting a model built in COMSOL\nMultiphysics to the uniformly distributed calibration data that were collected\non a regular basis. The modeled average PTFE charge density increased over the\ncourse of the exposure from -3.6 to $-5.5~\\mu$C/m$^2$. From our studies, we\ndeduce that the electric field magnitude varied while the mean value of the\nfield of $\\sim200$~V/cm remained constant throughout the exposure. As a result\nof this work the varying electric fields and their impact on event\nreconstruction and discrimination were successfully modeled.\n", "title": "3D Modeling of Electric Fields in the LUX Detector" }
id: 20915 | annotation: null | status: Default

{ "abstract": " This paper studies optimal communication and coordination strategies in\ncyber-physical systems for both defender and attacker within a game-theoretic\nframework. We model the communication network of a cyber-physical system as a\nsensor network which involves one single Gaussian source observed by many\nsensors, subject to additive independent Gaussian observation noises. The\nsensors communicate with the estimator over a coherent Gaussian multiple access\nchannel. The aim of the receiver is to reconstruct the underlying source with\nminimum mean squared error. The scenario of interest here is one where some of\nthe sensors are captured by the attacker and they act as the adversary\n(jammer): they strive to maximize distortion. The receiver (estimator) knows\nthe captured sensors but still cannot simply ignore them due to the multiple\naccess channel, i.e., the outputs of all sensors are summed to generate the\nestimator input. We show that the ability of transmitter sensors to secretly\nagree on a random event, that is \"coordination\", plays a key role in the\nanalysis...\n", "title": "Optimal Communication Strategies in Networked Cyber-Physical Systems with Adversarial Elements" }
id: 20916 | annotation: null | status: Default

{ "abstract": " We study a quadruple of interrelated subexponential subsystems of arithmetic\nWKL$_0^-$, RCA$^-_0$, I$\\Delta_0$, and $\\Delta$RA$_1$, which complement the\nsimilarly related quadruple WKL$_0$, RCA$_0$, I$\\Sigma_1$, and PRA studied by\nSimpson, and the quadruple WKL$_0^\\ast$, RCA$_0^\\ast$, I$\\Delta_0$(exp), and\nEFA studied by Simpson and Smith. We then explore the space of subexponential\narithmetic theories between I$\\Delta_0$ and I$\\Delta_0$(exp). We introduce and\nstudy first- and second-order theories of recursive arithmetic $A$RA$_1$ and\n$A$RA$_2$ capable of characterizing various computational complexity classes\nand based on function algebras $A$, studied by Clote and others.\n", "title": "First- and Second-Order Models of Recursive Arithmetics" }
id: 20917 | annotation: [ "Computer Science" ] | status: Validated

{ "abstract": " Nowadays, modern earth observation programs produce huge volumes of satellite\nimages time series (SITS) that can be useful to monitor geographical areas\nthrough time. How to efficiently analyze such kind of information is still an\nopen question in the remote sensing field. Recently, deep learning methods\nproved suitable to deal with remote sensing data mainly for scene\nclassification (i.e. Convolutional Neural Networks - CNNs - on single images)\nwhile only very few studies exist involving temporal deep learning approaches\n(i.e Recurrent Neural Networks - RNNs) to deal with remote sensing time series.\nIn this letter we evaluate the ability of Recurrent Neural Networks, in\nparticular the Long-Short Term Memory (LSTM) model, to perform land cover\nclassification considering multi-temporal spatial data derived from a time\nseries of satellite images. We carried out experiments on two different\ndatasets considering both pixel-based and object-based classification. The\nobtained results show that Recurrent Neural Networks are competitive compared\nto state-of-the-art classifiers, and may outperform classical approaches in\npresence of low represented and/or highly mixed classes. We also show that\nusing the alternative feature representation generated by LSTM can improve the\nperformances of standard classifiers.\n", "title": "Land Cover Classification via Multi-temporal Spatial Data by Recurrent Neural Networks" }
id: 20918 | annotation: null | status: Default

{ "abstract": " We define and address the problem of unsupervised learning of disentangled\nrepresentations on data generated from independent factors of variation. We\npropose FactorVAE, a method that disentangles by encouraging the distribution\nof representations to be factorial and hence independent across the dimensions.\nWe show that it improves upon $\\beta$-VAE by providing a better trade-off\nbetween disentanglement and reconstruction quality. Moreover, we highlight the\nproblems of a commonly used disentanglement metric and introduce a new metric\nthat does not suffer from them.\n", "title": "Disentangling by Factorising" }
id: 20919 | annotation: [ "Statistics" ] | status: Validated

{ "abstract": " In the light of the recently proposed scenario of asymmetry-induced\nsynchronization (AISync), in which dynamical uniformity and consensus in a\ndistributed system would demand certain asymmetries in the underlying network,\nwe investigate here the influence of some regularities in the interlayer\nconnection patterns on the synchronization properties of multilayer random\nnetworks. More specifically, by considering a Stuart-Landau model of complex\noscillators with random frequencies, we report for multilayer networks a\ndynamical behavior that could be also classified as a manifestation of AISync.\nWe show, namely, that the presence of certain symmetries in the interlayer\nconnection pattern tends to diminish the synchronization capability of the\nwhole network or, in other words, asymmetries in the interlayer connections\nwould enhance synchronization in such structured networks. Our results might\nhelp the understanding not only of the AISync mechanism itself, but also its\npossible role in the determination of the interlayer connection pattern of\nmultilayer and other structured networks with optimal synchronization\nproperties.\n", "title": "Symmetries and synchronization in multilayer random networks" }
id: 20920 | annotation: null | status: Default

{ "abstract": " As part of autonomous car driving systems, semantic segmentation is an\nessential component to obtain a full understanding of the car's environment.\nOne difficulty, that occurs while training neural networks for this purpose, is\nclass imbalance of training data. Consequently, a neural network trained on\nunbalanced data in combination with maximum a-posteriori classification may\neasily ignore classes that are rare in terms of their frequency in the dataset.\nHowever, these classes are often of highest interest. We approach such\npotential misclassifications by weighting the posterior class probabilities\nwith the prior class probabilities which in our case are the inverse\nfrequencies of the corresponding classes in the training dataset. More\nprecisely, we adopt a localized method by computing the priors pixel-wise such\nthat the impact can be analyzed at pixel level as well. In our experiments, we\ntrain one network from scratch using a proprietary dataset containing 20,000\nannotated frames of video sequences recorded from street scenes. The evaluation\non our test set shows an increase of average recall with regard to instances of\npedestrians and info signs by $25\\%$ and $23.4\\%$, respectively. In addition,\nwe significantly reduce the non-detection rate for instances of the same\nclasses by $61\\%$ and $38\\%$.\n", "title": "Application of Decision Rules for Handling Class Imbalance in Semantic Segmentation" }
id: 20921 | annotation: null | status: Default

{ "abstract": " We provide a pair of dual results, each stating the coincidence of highness\nproperties from computability theory. We provide an analogous pair of dual\nresults on the coincidence of cardinal characteristics within ZFC.\nA mass problem is a set of functions on $\\omega$. For mass problems $\\mathcal\nC, \\mathcal D$, one says that $\\mathcal C$ is Muchnik reducible to $\\mathcal D$\nif each function in $\\mathcal D$ computes a function in $\\mathcal C$. In this\npaper we view highness properties as mass problems, and compare them with\nrespect to Muchnik reducibility and its uniform strengthening, Medvedev\nreducibility.\nLet $\\mathcal D(p)$ be the mass problem of infinite bit sequences $y$ (i.e.,\n0,1 valued functions) such that for each computable bit sequence $x$, the\nasymptotic lower density $\\underline \\rho$ of the agreement bit sequence $x\n\\leftrightarrow y$ is at most $p$ (this sequence takes the value 1 at a bit\nposition iff $x$ and $y$ agree).\nWe show that all members of this family of mass problems parameterized by a\nreal $p$ with $0 < p<1/2 $ have the same complexity in the sense of Muchnik\nreducibility. This also yields a new version of Monin's affirmative answer to\nthe \"Gamma question\", whether $\\Gamma(A)< 1/2$ implies $\\Gamma(A)=0$ for each\nTuring oracle $A$.\nWe also show, together with Joseph Miller, that for any order function~$g$\nthere exists a faster growing order function $h $ such that $\\mathrm{IOE}(g) $\nis strictly Muchnik below $\\mathrm{IOE}(h)$.\nWe study cardinal characteristics analogous to the highness properties above.\nFor instance, $\\mathfrak d (p)$ is the least size of a set $G$ of bit sequences\nso that for each bit sequence $x$ there is a bit sequence $y$ in $G$ so that\n$\\underline \\rho (x \\leftrightarrow y) >p$. We prove within ZFC all the\ncoincidences of cardinal characteristics that are the analogs of the results\nabove.\n", "title": "Muchnik degrees and cardinal characteristics" }
id: 20922 | annotation: null | status: Default

{ "abstract": " We prove the existence of a solution to the semirelativistic Hartree equation\n$$\\sqrt{-\\Delta+m^2}u+ V(x) u = A(x)\\left( W * |u|^p \\right) |u|^{p-2}u $$\nunder suitable growth assumption on the potential functions $V$ and $A$. In\nparticular, both can be unbounded from above.\n", "title": "Existence of solutions for a semirelativistic Hartree equation with unbounded potentials" }
id: 20923 | annotation: null | status: Default

{ "abstract": " Unification and generalization are operations on two terms computing\nrespectively their greatest lower bound and least upper bound when the terms\nare quasi-ordered by subsumption up to variable renaming (i.e., $t_1\\preceq\nt_2$ iff $t_1 = t_2\\sigma$ for some variable substitution $\\sigma$). When term\nsignatures are such that distinct functor symbols may be related with a fuzzy\nequivalence (called a similarity), these operations can be formally extended to\ntolerate mismatches on functor names and/or arity or argument order. We\nreformulate and extend previous work with a declarative approach defining\nunification and generalization as sets of axioms and rules forming a complete\nconstraint-normalization proof system. These include the Reynolds-Plotkin\nterm-generalization procedures, Maria Sessa's \"weak\" unification with partially\nfuzzy signatures and its corresponding generalization, as well as novel\nextensions of such operations to fully fuzzy signatures (i.e., similar functors\nwith possibly different arities). One advantage of this approach is that it\nrequires no modification of the conventional data structures for terms and\nsubstitutions. This and the fact that these declarative specifications are\nefficiently executable conditional Horn-clauses offers great practical\npotential for fuzzy information-handling applications.\n", "title": "Lattice Operations on Terms over Similar Signatures" }
id: 20924 | annotation: null | status: Default

{ "abstract": " This paper addresses the question of how a previously available control\npolicy $\\pi_s$ can be used as a supervisor to more quickly and safely train a\nnew learned control policy $\\pi_L$ for a robot. A weighted average of the\nsupervisor and learned policies is used during trials, with a heavier weight\ninitially on the supervisor, in order to allow safe and useful physical trials\nwhile the learned policy is still ineffective. During the process, the weight\nis adjusted to favor the learned policy. As weights are adjusted, the learned\nnetwork must compensate so as to give safe and reasonable outputs under the\ndifferent weights. A pioneer network is introduced that pre-learns a policy\nthat performs similarly to the current learned policy under the planned next\nstep for new weights; this pioneer network then replaces the currently learned\nnetwork in the next set of trials. Experiments in OpenAI Gym demonstrate the\neffectiveness of the proposed method.\n", "title": "Towards Physically Safe Reinforcement Learning under Supervision" }
id: 20925 | annotation: null | status: Default

{ "abstract": " Risk diversification is one of the dominant concerns for portfolio managers.\nVarious portfolio constructions have been proposed to minimize the risk of the\nportfolio under some constrains including expected returns. We propose a\nportfolio construction method that incorporates the complex valued principal\ncomponent analysis into the risk diversification portfolio construction. The\nproposed method is verified to outperform the conventional risk parity and risk\ndiversification portfolio constructions.\n", "title": "Complex Valued Risk Diversification" }
id: 20926 | annotation: null | status: Default

{ "abstract": " Today's telecommunication networks have become sources of enormous amounts of\nwidely heterogeneous data. This information can be retrieved from network\ntraffic traces, network alarms, signal quality indicators, users' behavioral\ndata, etc. Advanced mathematical tools are required to extract meaningful\ninformation from these data and take decisions pertaining to the proper\nfunctioning of the networks from the network-generated data. Among these\nmathematical tools, Machine Learning (ML) is regarded as one of the most\npromising methodological approaches to perform network-data analysis and enable\nautomated network self-configuration and fault management. The adoption of ML\ntechniques in the field of optical communication networks is motivated by the\nunprecedented growth of network complexity faced by optical networks in the\nlast few years. Such complexity increase is due to the introduction of a huge\nnumber of adjustable and interdependent system parameters (e.g., routing\nconfigurations, modulation format, symbol rate, coding schemes, etc.) that are\nenabled by the usage of coherent transmission/reception technologies, advanced\ndigital signal processing and compensation of nonlinear effects in optical\nfiber propagation. In this paper we provide an overview of the application of\nML to optical communications and networking. We classify and survey relevant\nliterature dealing with the topic, and we also provide an introductory tutorial\non ML for researchers and practitioners interested in this field. Although a\ngood number of research papers have recently appeared, the application of ML to\noptical networks is still in its infancy: to stimulate further work in this\narea, we conclude the paper proposing new possible research directions.\n", "title": "An Overview on Application of Machine Learning Techniques in Optical Networks" }
id: 20927 | annotation: null | status: Default

{ "abstract": " The management of long-lived radionuclides in spent fuel is a key issue to\nachieve the closed nuclear fuel cycle and the sustainable development of\nnuclear energy. Partitioning-Transmutation is supposed to be an efficient\nmethod to treat the long-lived radionuclides in spent fuel. Some Minor\nActinides (MAs) have very long half-lives among the radionuclides in the spent\nfuel. Accordingly, the study of MAs transmutation is a significant work for the\npost-processing of spent fuel.\nIn the present work, the transmutations in Pressurized Water Reactor (PWR)\nmixed oxide (MOX) fuel are investigated through the Monte Carlo based code RMC.\nTwo kinds of MAs, $^{237}$Np and five MAs ($^{237}$Np, $^{241}$Am, $^{243}$Am,\n$^{244}$Cm and $^{245}$Cm) are incorporated homogeneously into the MOX fuel\nassembly. The transmutation of MAs is simulated with different initial MOX\nconcentrations.\nThe results indicate an overall nice efficiency of transmutation in both\ninitial MOX concentrations, especially for the two kinds of MAs primarily\ngenerated in the UOX fuel, $^{237}$Np and $^{241}$Am. In addition, the\ninclusion of $^{237}$Np in MOX has no large influence for other MAs, while the\ntransmutation efficiency of $^{237}$Np is excellent. The transmutation of MAs\nin MOX fuel depletion is expected to be a new, efficient nuclear spent fuel\nmanagement method for the future nuclear power generation.\n", "title": "Study of Minor Actinides Transmutation in PWR MOX fuel" }
id: 20928 | annotation: null | status: Default

{ "abstract": " There are two natural simplicial complexes associated to the noncrossing\npartition lattice: the order complex of the full lattice and the order complex\nof the lattice with its bounding elements removed. The latter is a complex that\nwe call the noncrossing partition link because it is the link of an edge in the\nformer. The first author and his coauthors conjectured that various collections\nof simplices of the noncrossing partition link (determined by the undesired\nparking spaces in the corresponding parking functions) form contractible\nsubcomplexes. In this article we prove their conjecture by combining the fact\nthat the star of a simplex in a flag complex is contractible with the second\nauthor's theory of noncrossing hypertrees.\n", "title": "Undesired parking spaces and contractible pieces of the noncrossing partition link" }
id: 20929 | annotation: null | status: Default

{ "abstract": " Due to their simplicity and excellent performance, parallel asynchronous\nvariants of stochastic gradient descent have become popular methods to solve a\nwide range of large-scale optimization problems on multi-core architectures.\nYet, despite their practical success, support for nonsmooth objectives is still\nlacking, making them unsuitable for many problems of interest in machine\nlearning, such as the Lasso, group Lasso or empirical risk minimization with\nconvex constraints.\nIn this work, we propose and analyze ProxASAGA, a fully asynchronous sparse\nmethod inspired by SAGA, a variance reduced incremental gradient algorithm. The\nproposed method is easy to implement and significantly outperforms the state of\nthe art on several nonsmooth, large-scale problems. We prove that our method\nachieves a theoretical linear speedup with respect to the sequential version\nunder assumptions on the sparsity of gradients and block-separability of the\nproximal term. Empirical benchmarks on a multi-core architecture illustrate\npractical speedups of up to 12x on a 20-core machine.\n", "title": "Breaking the Nonsmooth Barrier: A Scalable Parallel Method for Composite Optimization" }
id: 20930 | annotation: null | status: Default

{ "abstract": " We consider the inverse problem of recovering an unknown functional parameter\n$u$ in a separable Banach space, from a noisy observation $y$ of its image\nthrough a known possibly non-linear ill-posed map ${\\mathcal G}$. The data $y$\nis finite-dimensional and the noise is Gaussian. We adopt a Bayesian approach\nto the problem and consider Besov space priors (see Lassas et al. 2009), which\nare well-known for their edge-preserving and sparsity-promoting properties and\nhave recently attracted wide attention especially in the medical imaging\ncommunity.\nOur key result is to show that in this non-parametric setup the maximum a\nposteriori (MAP) estimates are characterized by the minimizers of a generalized\nOnsager--Machlup functional of the posterior. This is done independently for\nthe so-called weak and strong MAP estimates, which as we show coincide in our\ncontext. In addition, we prove a form of weak consistency for the MAP\nestimators in the infinitely informative data limit. Our results are remarkable\nfor two reasons: first, the prior distribution is non-Gaussian and does not\nmeet the smoothness conditions required in previous research on non-parametric\nMAP estimates. Second, the result analytically justifies existing uses of the\nMAP estimate in finite but high dimensional discretizations of Bayesian inverse\nproblems with the considered Besov priors.\n", "title": "Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems" }
id: 20931 | annotation: null | status: Default

{ "abstract": " Our ability to model the shapes and strengths of iron lines in the solar\nspectrum is a critical test of the accuracy of the solar iron abundance, which\nsets the absolute zero-point of all stellar metallicities. We use an extensive\n463-level Fe atom with new photoionisation cross-sections for FeI as well as\nquantum mechanical calculations of collisional excitation and charge transfer\nwith neutral hydrogen; the latter effectively remove a free parameter that has\nhampered all previous line formation studies of Fe in non-local thermodynamic\nequilibrium (NLTE). For the first time, we use realistic 3D NLTE calculations\nof Fe for a quantitative comparison to solar observations. We confront our\ntheoretical line profiles with observations taken at different viewing angles\nacross the solar disk with the Swedish 1-m Solar Telescope. We find that 3D\nmodelling well reproduces the observed centre-to-limb behaviour of spectral\nlines overall, but highlight aspects that may require further work, especially\ncross-sections for inelastic collisions with electrons. Our inferred solar iron\nabundance is log(eps(Fe))=7.48+-0.04.\n", "title": "Non-LTE line formation of Fe in late-type stars IV: Modelling of the solar centre-to-limb variation in 3D" }
id: 20932 | annotation: null | status: Default

{ "abstract": " Background. Several studies have used phylogenetics to investigate Human\nImmunodeficiency Virus (HIV) transmission among Men who have Sex with Men\n(MSMs) in Montreal, Quebec, Canada, revealing many transmission clusters. The\nQuebec HIV genotyping program sequence database now includes viral sequences\nfrom close to 4,000 HIV-positive individuals classified as MSMs. In this paper,\nwe investigate clustering in those data by comparing results from several\nmethods: the conventional Bayesian and maximum likelihood-bootstrap methods,\nand two more recent algorithms, DM-PhyClus, a Bayesian algorithm that produces\na measure of uncertainty for proposed partitions, and the Gap Procedure, a fast\ndistance-based approach. We estimate cluster growth by focusing on recent cases\nin the Primary HIV Infection (PHI) stage. Results. The analyses reveal\nconsiderable overlap between cluster estimates obtained from conventional\nmethods. The Gap Procedure and DM-PhyClus rely on different cluster definitions\nand as a result, suggest moderately different partitions. All estimates lead to\nsimilar conclusions about cluster expansion: several large clusters have\nexperienced sizeable growth, and a few new transmission clusters are likely\nemerging. Conclusions. The lack of a gold standard measure for clustering\nquality makes picking a best estimate among those proposed difficult. Work\naiming to refine clustering criteria would be required to improve estimates.\nNevertheless, the results unanimously stress the role that clusters play in\npromoting HIV incidence among MSMs.\n", "title": "Transmission clusters in the HIV-1 epidemic among men who have sex with men in Montreal, Quebec, Canada" }
id: 20933 | annotation: null | status: Default

{ "abstract": " The aim of this paper is to generalize the notion of conformal blocks to the\nsituation in which the Lie algebra they are attached to is not defined over a\nfield, but depends on covering data of curves. The result will be a sheaf of\nconformal blocks on the Hurwitz stack parametrizing Galois coverings of curves.\nMany features of the classical sheaves of conformal blocks are proved to hold\nin this more general setting, in particular the fusion rules, the propagation\nof vacua and the WZW connection.\n", "title": "Conformal blocks attached to twisted groups" }
id: 20934 | annotation: null | status: Default

{ "abstract": " This paper shows a statistical analysis of 10.2 kHz Omega broadcasts of an\nartificial signal broadcast from ground stations, propagated in the\nplasmasphere, and detected using an automatic detection method we developed. We\nstudy the propagation patterns of the Omega signals to understand the\npropagation characteristics that are strongly affected by plasmaspheric\nelectron density and the ambient magnetic field. We show the unique propagation\npatterns of the Omega 10.2 kHz signal when it was broadcast from two\nhigh-middle-latitude stations. We use about eight years of data captured by the\nPoynting flux analyzer subsystem on board the Akebono satellite from October\n1989 to September 1997. We demonstrate that the signals broadcast from almost\nthe same latitude (in geomagnetic coordinates) propagated differently depending\non the geographic latitude. We also study propagation characteristics as a\nfunction of local time, season, and solar activity. The Omega signal tended to\npropagate farther on the nightside than on the dayside and was more widely\ndistributed during winter than during summer. When solar activity was at\nmaximum, the Omega signal propagated at a lower intensity level. In contrast,\nwhen solar activity was at minimum, the Omega signal propagated at a higher\nintensity and farther from the transmitter station.\n", "title": "Statistical study on propagation characteristics of Omega signals (VLF) in magnetosphere detected by the Akebono satellite" }
id: 20935 | annotation: null | status: Default

{ "abstract": " This work investigates the macroscopic thermomechanical behavior of lunar\nboulders by modeling their response to diurnal thermal forcing. Our results\nreveal a bimodal, spatiotemporally-complex stress response. During sunrise,\nstresses occur in the boulders' interiors that are associated with large-scale\ntemperature gradients developed due to overnight cooling. During sunset,\nstresses occur at the boulders' exteriors due to the cooling and contraction of\nthe surface. Both kinds of stresses are on the order of 10 MPa in 1 m boulders\nand decrease for smaller diameters, suggesting that larger boulders break down\nmore quickly. Boulders <30 cm exhibit a weak response to thermal forcing,\nsuggesting a threshold below which crack propagation may not occur. Boulders of\nany size buried by regolith are shielded from thermal breakdown. As boulders\nincrease in size (>1 m), stresses increase to several 10s of MPa as the\nbehavior of their surfaces approaches that of an infinite halfspace. As the\nthermal wave loses contact with the boulder interior, stresses become limited\nto the near-surface. This suggests that the survival time of a boulder is not\nonly controlled by the amplitude of induced stress, but also by its diameter as\ncompared to the diurnal skin depth. While stresses on the order of 10 MPa are\nenough to drive crack propagation in terrestrial environments, crack\npropagation rates in vacuum are not well constrained. We explore the\nrelationship between boulder size, stress, and the direction of crack\npropagation, and discuss the implications for the relative breakdown rates and\nestimated lifetimes of boulders on airless body surfaces.\n", "title": "Thermally induced stresses in boulders on airless body surfaces, and implications for rock breakdown" }
id: 20936 | annotation: null | status: Default

{ "abstract": " We analyze the definitions of generalized quantifiers of imperfect\ninformation that have been proposed by F.Engström. We argue that these\ndefinitions are just embeddings of the first-order generalized quantifiers into\nteam semantics, and fail to capture an adequate notion of team-theoretical\ngeneralized quantifier, save for the special cases in which the quantifiers are\napplied to flat formulas. We also criticize the meaningfulness of the\nmonotone/nonmonotone distinction in this context. We make some proposals for a\nmore adequate definition of generalized quantifiers of imperfect information.\n", "title": "Some observations about generalized quantifiers in logics of imperfect information" }
id: 20937 | annotation: null | status: Default

{ "abstract": " In this paper, we consider the problem of predicting demographics of\ngeographic units given geotagged Tweets that are composed within these units.\nTraditional survey methods that offer demographics estimates are usually\nlimited in terms of geographic resolution, geographic boundaries, and time\nintervals. Thus, it would be highly useful to develop computational methods\nthat can complement traditional survey methods by offering demographics\nestimates at finer geographic resolutions, with flexible geographic boundaries\n(i.e. not confined to administrative boundaries), and at different time\nintervals. While prior work has focused on predicting demographics and health\nstatistics at relatively coarse geographic resolutions such as the county-level\nor state-level, we introduce an approach to predict demographics at finer\ngeographic resolutions such as the blockgroup-level. For the task of predicting\ngender and race/ethnicity counts at the blockgroup-level, an approach adapted\nfrom prior work to our problem achieves an average correlation of 0.389\n(gender) and 0.569 (race) on a held-out test dataset. Our approach outperforms\nthis prior approach with an average correlation of 0.671 (gender) and 0.692\n(race).\n", "title": "Predicting Demographics of High-Resolution Geographies with Geotagged Tweets" }
id: 20938 | annotation: null | status: Default

{ "abstract": " Can textual data be compressed intelligently without losing accuracy in\nevaluating sentiment? In this study, we propose a novel evolutionary\ncompression algorithm, PARSEC (PARts-of-Speech for sEntiment Compression),\nwhich makes use of Parts-of-Speech tags to compress text in a way that\nsacrifices minimal classification accuracy when used in conjunction with\nsentiment analysis algorithms. An analysis of PARSEC with eight commercial and\nnon-commercial sentiment analysis algorithms on twelve English sentiment data\nsets reveals that accurate compression is possible with (0%, 1.3%, 3.3%) loss\nin sentiment classification accuracy for (20%, 50%, 75%) data compression with\nPARSEC using LingPipe, the most accurate of the sentiment algorithms. Other\nsentiment analysis algorithms are more severely affected by compression. We\nconclude that significant compression of text data is possible for sentiment\nanalysis depending on the accuracy demands of the specific application and the\nspecific sentiment analysis algorithm used.\n", "title": "Text Compression for Sentiment Analysis via Evolutionary Algorithms" }
id: 20939 | annotation: null | status: Default

{ "abstract": " Detection of protein-protein interactions (PPIs) plays a vital role in\nmolecular biology. Particularly, infections are caused by the interactions of\nhost and pathogen proteins. It is important to identify host-pathogen\ninteractions (HPIs) to discover new drugs to counter infectious diseases.\nConventional wet lab PPI prediction techniques have limitations in terms of\nlarge scale application and budget. Hence, computational approaches are\ndeveloped to predict PPIs. This study aims to develop large margin machine\nlearning models to predict interspecies PPIs with a special interest in\nhost-pathogen protein interactions (HPIs). Especially, we focus on seeking\nanswers to three queries that arise while developing an HPI predictor. 1) How\nshould we select negative samples? 2) What should be the size of negative\nsamples as compared to the positive samples? 3) What type of margin violation\npenalty should be used to train the predictor? We compare two available methods\nfor negative sampling. Moreover, we propose a new method of assigning weights\nto each training example in weighted SVM depending on the distance of the\nnegative examples from the positive examples. We have also developed a web\nserver for our HPI predictor called HoPItor (Host Pathogen Interaction\npredicTOR) that can predict interactions between human and viral proteins. This\nwebserver can be accessed at the URL:\nthis http URL.\n", "title": "Training large margin host-pathogen protein-protein interaction predictors" }
id: 20940 | annotation: null | status: Default

{ "abstract": " The Coupon Collector's Problem is one of the few mathematical problems that\nmake news headlines regularly. The reasons for this are on one hand the immense\npopularity of soccer albums (called Paninimania) and on the other hand that no\nsolution is known that is able to take into account all effects such as\nreplacement (limited purchasing of missing stickers) or swapping. In previous\npapers we have proven that the classical assumptions are not fulfilled in\npractice. Therefore we define new assumptions that match reality. Based on\nthese assumptions we are able to derive formulae for the mean number of\nstickers needed (and the associated standard deviation) that are able to take\ninto account all effects that occur in practical collecting. Thus collectors\ncan estimate the average cost of completion of an album and its standard\ndeviation just based on elementary calculations. From a practical point of view\nwe consider the Coupon Collector's problem as solved.\n-----\nDas Sammelbilderproblem ist eines der wenigen mathematischen Probleme, die\nregelmä{\\ss}ig in den Schlagzeilen der Nachrichten vorkommen. Dies liegt\neinerseits an der gro{\\ss}en Popularität von Fu{\\ss}ball-Sammelbildern\n(Paninimania genannt) und andererseits daran, dass es bisher keine Lösung\ngibt, die alle relevanten Effekte wie Nachkaufen oder Tauschen\nberücksichtigt. Wir haben bereits nachgewiesen, dass die klassischen Annahmen\nnicht der Realität entsprechen. Deshalb stellen wir neue Annahmen auf, die\ndie Praxis besser abbilden. Darauf aufbauend können wir Formeln für die\nmittlere Anzahl benötigter Bilder (sowie deren Standardabweichung) ableiten,\ndie alle in der Praxis relevanten Effekte berücksichtigen. Damit können\nSammler die mittleren Kosten eines Albums sowie deren Standardabweichung nur\nmit Hilfe von elementaren Rechnungen bestimmen. Für praktische Zwecke ist das\nSammelbilderproblem damit gelöst.\n", "title": "A Useful Solution of the Coupon Collector's Problem" }
id: 20941 | annotation: [ "Mathematics" ] | status: Validated

{ "abstract": " We study accretion driven turbulence for different inflow velocities in star\nforming filaments using the code ramses. Filaments are rarely isolated objects\nand their gravitational potential will lead to radially dominated accretion. In\nthe non-gravitational case, accretion by itself can already provoke\nnon-isotropic, radially dominated turbulent motions responsible for the complex\nstructure and non-thermal line widths observed in filaments. We find that there\nis a direct linear relation between the absolute value of the total density\nweighted velocity dispersion and the infall velocity. The turbulent velocity\ndispersion in the filaments is independent of sound speed or any net flow along\nthe filament. We show that the density weighted velocity dispersion acts as an\nadditional pressure term supporting the filament in hydrostatic equilibrium.\nComparing to observations, we find that the projected non-thermal line width\nvariation is generally subsonic independent of inflow velocity.\n", "title": "Accretion driven turbulence in filaments I: Non-gravitational accretion" }
id: 20942 | annotation: [ "Physics" ] | status: Validated

{ "abstract": " Outlier detection plays an essential role in many data-driven applications to\nidentify isolated instances that are different from the majority. While many\nstatistical learning and data mining techniques have been used for developing\nmore effective outlier detection algorithms, the interpretation of detected\noutliers does not receive much attention. Interpretation is becoming\nincreasingly important to help people trust and evaluate the developed models\nthrough providing intrinsic reasons why the certain outliers are chosen. It is\ndifficult, if not impossible, to simply apply feature selection for explaining\noutliers due to the distinct characteristics of various detection models,\ncomplicated structures of data in certain applications, and imbalanced\ndistribution of outliers and normal instances. In addition, the role of\ncontrastive contexts where outliers locate, as well as the relation between\noutliers and contexts, are usually overlooked in interpretation. To tackle the\nissues above, in this paper, we propose a novel Contextual Outlier\nINterpretation (COIN) method to explain the abnormality of existing outliers\nspotted by detectors. The interpretability for an outlier is achieved from\nthree aspects: outlierness score, attributes that contribute to the\nabnormality, and contextual description of its neighborhoods. Experimental\nresults on various types of datasets demonstrate the flexibility and\neffectiveness of the proposed framework compared with existing interpretation\napproaches.\n", "title": "Contextual Outlier Interpretation" }
id: 20943 | annotation: null | status: Default

{ "abstract": " A city's critical infrastructure such as gas, water, and power systems, are\nlargely interdependent since they share energy, computing, and communication\nresources. This, in turn, makes it challenging to endow them with fool-proof\nsecurity solutions. In this paper, a unified model for interdependent\ngas-power-water infrastructure is presented and the security of this model is\nstudied using a novel game-theoretic framework. In particular, a zero-sum\nnoncooperative game is formulated between a malicious attacker who seeks to\nsimultaneously alter the states of the gas-power-water critical infrastructure\nto increase the power generation cost and a defender who allocates\ncommunication resources over its attack detection filters in local areas to\nmonitor the infrastructure. At the mixed strategy Nash equilibrium of this\ngame, numerical results show that the expected power generation cost deviation\nis 35\\% lower than the one resulting from an equal allocation of resources over\nthe local filters. The results also show that, at equilibrium, the\ninterdependence of the power system on the natural gas and water systems can\nmotivate the attacker to target the states of the water and natural gas systems\nto change the operational states of the power grid. Conversely, the defender\nallocates a portion of its resources to the water and natural gas states of the\ninterdependent system to protect the grid from state deviations.\n", "title": "Game Theory for Secure Critical Interdependent Gas-Power-Water Infrastructure" }
id: 20944 | annotation: null | status: Default

{ "abstract": " Over the years, Twitter has become one of the largest communication platforms\nproviding key data to various applications such as brand monitoring, trend\ndetection, among others. Entity linking is one of the major tasks in natural\nlanguage understanding from tweets and it associates entity mentions in text to\ncorresponding entries in knowledge bases in order to provide unambiguous\ninterpretation and additional con- text. State-of-the-art techniques have\nfocused on linking explicitly mentioned entities in tweets with reasonable\nsuccess. However, we argue that in addition to explicit mentions i.e. The movie\nGravity was more ex- pensive than the mars orbiter mission entities (movie\nGravity) can also be mentioned implicitly i.e. This new space movie is crazy.\nyou must watch it!. This paper introduces the problem of implicit entity\nlinking in tweets. We propose an approach that models the entities by\nexploiting their factual and contextual knowledge. We demonstrate how to use\nthese models to perform implicit entity linking on a ground truth dataset with\n397 tweets from two domains, namely, Movie and Book. Specifically, we show: 1)\nthe importance of linking implicit entities and its value addition to the\nstandard entity linking task, and 2) the importance of exploiting contextual\nknowledge associated with an entity for linking their implicit mentions. We\nalso make the ground truth dataset publicly available to foster the research in\nthis new research area.\n", "title": "Implicit Entity Linking in Tweets" }
null
null
[ "Computer Science" ]
null
true
null
20945
null
Validated
null
null
null
{ "abstract": " A finitely presented 1-ended group $G$ has {\\it semistable fundamental group\nat infinity} if $G$ acts geometrically on a simply connected and locally\ncompact ANR $Y$ having the property that any two proper rays in $Y$ are\nproperly homotopic. This property of $Y$ captures a notion of connectivity at\ninfinity stronger than \"1-ended\", and is in fact a feature of $G$, being\nindependent of choices. It is a fundamental property in the homotopical study\nof finitely presented groups. While many important classes of groups have been\nshown to have semistable fundamental group at infinity, the question of whether\nevery $G$ has this property has been a recognized open question for nearly\nforty years. In this paper we attack the problem by considering a proper {\\it\nbut non-cocompact} action of a group $J$ on such an $Y$. This $J$ would\ntypically be a subgroup of infinite index in the geometrically acting\nover-group $G$; for example $J$ might be infinite cyclic or some other subgroup\nwhose semistability properties are known. We divide the semistability property\nof $G$ into a $J$-part and a \"perpendicular to $J$\" part, and we analyze how\nthese two parts fit together. Among other things, this analysis leads to a\nproof (in a companion paper) that a class of groups previously considered to be\nlikely counter examples do in fact have the semistability property.\n", "title": "Non-cocompact Group Actions and $π_1$-Semistability at Infinity" }
null
null
null
null
true
null
20946
null
Default
null
null
null
{ "abstract": " This paper addresses the problem of output voltage regulation for multiple\nDC/DC converters connected to a microgrid, and prescribes a scheme for sharing\npower among different sources. This architecture is structured in such a way\nthat it admits quantifiable analysis of the closed-loop performance of the\nnetwork of converters; the analysis simplifies to studying closed-loop\nperformance of an equivalent {\\em single-converter} system. The proposed\narchitecture allows for the proportion in which the sources provide power to\nvary with time; thus overcoming limitations of our previous designs.\nAdditionally, the proposed control framework is suitable to both centralized\nand decentralized implementations, i.e., the same control architecture can be\nemployed for voltage regulation irrespective of the availability of common\nload-current (or power) measurement, without the need to modify controller\nparameters. The performance becomes quantifiably better with better\ncommunication of the demanded load to all the controllers at all the converters\n(in the centralized case); however guarantees viability when such communication\nis absent. Case studies comprising of battery, PV and generic sources are\npresented and demonstrate the enhanced performance of prescribed optimal\ncontrollers for voltage regulation and power sharing.\n", "title": "Robust Distributed Control of DC Microgrids with Time-Varying Power Sharing" }
null
null
null
null
true
null
20947
null
Default
null
null
null
{ "abstract": " In this paper, we study quantum query complexity of the following rather\nnatural tripartite generalisations (in the spirit of the 3-sum problem) of the\nhidden shift and the set equality problems, which we call the 3-shift-sum and\nthe 3-matching-sum problems.\nThe 3-shift-sum problem is as follows: given a table of $3\\times n$ elements,\nis it possible to circularly shift its rows so that the sum of the elements in\neach column becomes zero? It is promised that, if this is not the case, then no\n3 elements in the table sum up to zero. The 3-matching-sum problem is defined\nsimilarly, but it is allowed to arbitrarily permute elements within each row.\nFor these problems, we prove lower bounds of $\\Omega(n^{1/3})$ and\n$\\Omega(\\sqrt n)$, respectively. The second lower bound is tight.\nThe lower bounds are proven by a novel application of the dual learning graph\nframework and by using representation-theoretic tools.\n", "title": "Quantum Lower Bounds for Tripartite Versions of the Hidden Shift and the Set Equality Problems" }
null
null
null
null
true
null
20948
null
Default
null
null
null
{ "abstract": " Overset methods are commonly employed to enable the effective simulation of\nproblems involving complex geometries and moving objects such as rotorcraft.\nThis paper presents a novel overset domain connectivity algorithm based upon\nthe direct cut approach suitable for use with GPU-accelerated solvers on\nhigh-order curved grids. In contrast to previous methods it is capable of\nexploiting the highly data-parallel nature of modern accelerators. Further, the\napproach is also substantially more efficient at handling the curved grids\nwhich arise within the context of high-order methods. An implementation of this\nnew algorithm is presented and combined with a high-order fluid dynamics code.\nThe algorithm is validated against several benchmark problems, including flow\nover a spinning golf ball at a Reynolds number of 150,000.\n", "title": "A Parallel Direct Cut Algorithm for High-Order Overset Methods with Application to a Spinning Golf Ball" }
null
null
null
null
true
null
20949
null
Default
null
null
null
{ "abstract": " In this paper we survey the various implementations of a new data\nassimilation (downscaling) algorithm based on spatial coarse mesh measurements.\nAs a paradigm, we demonstrate the application of this algorithm to the 3D\nLeray-$\\alpha$ subgrid scale turbulence model. Most importantly, we use this\nparadigm to show that it is not always necessary that one has to collect coarse\nmesh measurements of all the state variables, that are involved in the\nunderlying evolutionary system, in order to recover the corresponding exact\nreference solution. Specifically, we show that in the case of the 3D\nLeray$-\\alpha$ model of turbulence the solutions of the algorithm, constructed\nusing only coarse mesh observations of any two components of the\nthree-dimensional velocity field, and without any information of the third\ncomponent, converge, at an exponential rate in time, to the corresponding exact\nreference solution of the 3D Leray$-\\alpha$ model. This study serves as an\naddendum to our recent work on abridged continuous data assimilation for the 2D\nNavier-Stokes equations. Notably, similar results have also been recently\nestablished for the 3D viscous Planetary Geostrophic circulation model in which\nwe show that coarse mesh measurements of the temperature alone are sufficient\nfor recovering, through our data assimilation algorithm, the full solution;\nviz. the three components of velocity vector field and the temperature.\nConsequently, this proves the Charney conjecture for the 3D Planetary\nGeostrophic model; namely, that the history of the large spatial scales of\ntemperature is sufficient for determining all the other quantities (state\nvariables) of the model.\n", "title": "A data assimilation algorithm: the paradigm of the 3D Leray-alpha model of turbulence" }
null
null
null
null
true
null
20950
null
Default
null
null
null
{ "abstract": " Playing a Parrondo's game with a qutrit is the subject of this paper. We show\nthat a true quantum Parrondo's game can be played with a 3 state coin(qutrit)\nin a 1D quantum walk in contrast to the fact that playing a true Parrondo's\ngame with a 2 state coin(qubit) in 1D quantum walk fails in the asymptotic\nlimits.\n", "title": "Playing a true Parrondo's game with a three state coin on a quantum walk" }
null
null
null
null
true
null
20951
null
Default
null
null
null
{ "abstract": " We analyse multimodal time-series data corresponding to weight, sleep and\nsteps measurements. We focus on predicting whether a user will successfully\nachieve his/her weight objective. For this, we design several deep long\nshort-term memory (LSTM) architectures, including a novel cross-modal LSTM\n(X-LSTM), and demonstrate their superiority over baseline approaches. The\nX-LSTM improves parameter efficiency by processing each modality separately and\nallowing for information flow between them by way of recurrent\ncross-connections. We present a general hyperparameter optimisation technique\nfor X-LSTMs, which allows us to significantly improve on the LSTM and a prior\nstate-of-the-art cross-modal approach, using a comparable number of parameters.\nFinally, we visualise the model's predictions, revealing implications about\nlatent variables in this task.\n", "title": "Cross-modal Recurrent Models for Weight Objective Prediction from Multimodal Time-series Data" }
null
null
null
null
true
null
20952
null
Default
null
null
null
{ "abstract": " The key idea of variational auto-encoders (VAEs) resembles that of\ntraditional auto-encoder models in which spatial information is supposed to be\nexplicitly encoded in the latent space. However, the latent variables in VAEs\nare vectors, which can be interpreted as multiple feature maps of size 1x1.\nSuch representations can only convey spatial information implicitly when\ncoupled with powerful decoders. In this work, we propose spatial VAEs that use\nfeature maps of larger size as latent variables to explicitly capture spatial\ninformation. This is achieved by allowing the latent variables to be sampled\nfrom matrix-variate normal (MVN) distributions whose parameters are computed\nfrom the encoder network. To increase dependencies among locations on latent\nfeature maps and reduce the number of parameters, we further propose spatial\nVAEs via low-rank MVN distributions. Experimental results show that the\nproposed spatial VAEs outperform original VAEs in capturing rich structural and\nspatial information.\n", "title": "Spatial Variational Auto-Encoding via Matrix-Variate Normal Distributions" }
null
null
null
null
true
null
20953
null
Default
null
null
null
{ "abstract": " In 1996, Jackson and Martin proved that a strong ideal ramp scheme is\nequivalent to an orthogonal array. However, there was no good characterization\nof ideal ramp schemes that are not strong. Here we show the equivalence of\nideal ramp schemes to a new variant of orthogonal arrays that we term augmented\northogonal arrays. We give some constructions for these new kinds of arrays,\nand, as a consequence, we also provide parameter situations where ideal ramp\nschemes exist but strong ideal ramp schemes do not exist.\n", "title": "Optimal Ramp Schemes and Related Combinatorial Objects" }
null
null
null
null
true
null
20954
null
Default
null
null
null
{ "abstract": " The performance of deep learning in natural language processing has been\nspectacular, but the reasons for this success remain unclear because of the\ninherent complexity of deep learning. This paper provides empirical evidence of\nits effectiveness and of a limitation of neural networks for language\nengineering. Precisely, we demonstrate that a neural language model based on\nlong short-term memory (LSTM) effectively reproduces Zipf's law and Heaps' law,\ntwo representative statistical properties underlying natural language. We\ndiscuss the quality of reproducibility and the emergence of Zipf's law and\nHeaps' law as training progresses. We also point out that the neural language\nmodel has a limitation in reproducing long-range correlation, another\nstatistical property of natural language. This understanding could provide a\ndirection for improving the architectures of neural networks.\n", "title": "Do Neural Nets Learn Statistical Laws behind Natural Language?" }
null
null
null
null
true
null
20955
null
Default
null
null
null
{ "abstract": " This article presents the novel breakthrough general purpose algorithm for\nlarge scale optimization problems. The novel algorithm is capable of achieving\nbreakthrough speeds for very large-scale optimization on general purpose\nlaptops and embedded systems. Application of the algorithm to the Griewank\nfunction was possible in up to 1 billion decision variables in double precision\ntook only 64485 seconds (~18 hours) to solve, while consuming 7,630 MB (7.6 GB)\nor RAM on a single threaded laptop CPU. It shows that the algorithm is\ncomputationally and memory (space) linearly efficient, and can find the optimal\nor near-optimal solution in a fraction of the time and memory that many\nconventional algorithms require. It is envisaged that this will open up new\npossibilities of real-time large-scale problems on personal laptops and\nembedded systems.\n", "title": "Super-speeds with Zero-RAM: Next Generation Large-Scale Optimization in Your Laptop!" }
null
null
null
null
true
null
20956
null
Default
null
null
null
{ "abstract": " Ambiguities in the definition of stored energy within distributed or\nradiating electromagnetic systems motivate the discussion of the well-defined\nconcept of recoverable energy. This concept is commonly overlooked by the\ncommunity and the purpose of this communication is to recall its existence and\nto discuss its relationship to fractional bandwidth. Using a rational function\napproximation of a system's input impedance, the recoverable energy of lumped\nand radiating systems is calculated in closed form and is related to stored\nenergy and fractional bandwidth. Lumped circuits are also used to demonstrate\nthe relationship between recoverable energy and the energy stored within\nequivalent circuits produced by the minimum phase-shift Darlington's synthesis\nprocedure.\n", "title": "Recoverable Energy of Dissipative Electromagnetic Systems" }
null
null
null
null
true
null
20957
null
Default
null
null
null
{ "abstract": " We construct Hall algebra of elliptic curve over $\\mathbb{F}_1$ using the\ntheory of monoidal scheme due to Deitmar and the theory of Hall algebra for\nmonoidal representations due to Szczesny. The resulting algebra is shown to be\na specialization of elliptic Hall algebra studied by Burban and Schiffmann.\nThus our algebra is isomorphic to the skein algebra for torus by the recent\nwork of Morton and Samuelson.\n", "title": "Elliptic Hall algebra on $\\mathbb{F}_1$" }
null
null
null
null
true
null
20958
null
Default
null
null
null
{ "abstract": " Queueing networks are systems of theoretical interest that give rise to\ncomplex families of stochastic processes, and find widespread use in the\nperformance evaluation of interconnected resources. Yet, despite their\nimportance within applications, and in comparison to their counterpart\nstochastic models in genetics or mathematical biology, there exist few relevant\napproaches for transient inference and uncertainty quantification tasks in\nthese systems. This is a consequence of strong computational impediments and\ndistinctive properties of the Markov jump processes induced by queueing\nnetworks. In this paper, we offer a comprehensive overview of the inferential\nchallenge and its comparison to analogue tasks within related mathematical\ndomains. We then discuss a model augmentation over an approximating network\nsystem, and present a flexible and scalable variational Bayesian framework,\nwhich is targeted at general-form open and closed queueing systems, with varied\nservice disciplines and priorities. The inferential procedure is finally\nvalidated in a couple of uncertainty quantification tasks for network service\nrates.\n", "title": "Approximate Bayesian inference with queueing networks and coupled jump processes" }
null
null
null
null
true
null
20959
null
Default
null
null
null
{ "abstract": " Using a large-scale Deep Learning approach applied to a high-frequency\ndatabase containing billions of electronic market quotes and transactions for\nUS equities, we uncover nonparametric evidence for the existence of a universal\nand stationary price formation mechanism relating the dynamics of supply and\ndemand for a stock, as revealed through the order book, to subsequent\nvariations in its market price. We assess the model by testing its\nout-of-sample predictions for the direction of price moves given the history of\nprice and order flow, across a wide range of stocks and time periods. The\nuniversal price formation model is shown to exhibit a remarkably stable\nout-of-sample prediction accuracy across time, for a wide range of stocks from\ndifferent sectors. Interestingly, these results also hold for stocks which are\nnot part of the training sample, showing that the relations captured by the\nmodel are universal and not asset-specific.\nThe universal model --- trained on data from all stocks --- outperforms, in\nterms of out-of-sample prediction accuracy, asset-specific linear and nonlinear\nmodels trained on time series of any given stock, showing that the universal\nnature of price formation weighs in favour of pooling together financial data\nfrom various stocks, rather than designing asset- or sector-specific models as\ncommonly done. Standard data normalizations based on volatility, price level or\naverage spread, or partitioning the training data into sectors or categories\nsuch as large/small tick stocks, do not improve training results. On the other\nhand, inclusion of price and order flow history over many past observations is\nshown to improve forecasting performance, showing evidence of path-dependence\nin price dynamics.\n", "title": "Universal features of price formation in financial markets: perspectives from Deep Learning" }
null
null
[ "Statistics", "Quantitative Finance" ]
null
true
null
20960
null
Validated
null
null
null
{ "abstract": " In this paper, we propose a new algorithm based on radial symmetry center\nmethod to track colloidal particles close to contact, where the optical images\nof the particles start to overlap in digital video microscopy. This overlapping\neffect is important to observe the pair interaction potential in colloidal\nstudies and it appears as additional interaction in the measurement of the\ninteraction with conventional tracking analysis. The proposed algorithm in this\nwork is simple, fast and applicable for not only two particles but also three\nand more particles without any modification. The algorithm uses gradient\nvectors of the particle intensity distribution, which allows us to use a part\nof the symmetric intensity distribution in the calculation of the actual\nparticle position. In this study, simulations are performed to see the\nperformance of the proposed algorithm for two and three particles, where the\nsimulation images are generated by using fitted curve to experimental particle\nimage for different sized particles. As a result, the algorithm yields the\nmaximum error smaller than 2 nm for 5.53 {\\mu}m silica particles in contact\ncondition.\n", "title": "A New Tracking Algorithm for Multiple Colloidal Particles Close to Contact" }
null
null
null
null
true
null
20961
null
Default
null
null
null
{ "abstract": " We present numerical evidence that most two-dimensional surface states of a\nbulk topological superconductor (TSC) sit at an integer quantum Hall plateau\ntransition. We study TSC surface states in class CI with quenched disorder.\nLow-energy (finite-energy) surface states were expected to be critically\ndelocalized (Anderson localized). We confirm the low-energy picture, but find\ninstead that finite-energy states are also delocalized, with universal\nstatistics that are independent of the TSC winding number, and consistent with\nthe spin quantum Hall plateau transition (percolation).\n", "title": "Critical Percolation Without Fine Tuning on the Surface of a Topological Superconductor" }
null
null
[ "Physics" ]
null
true
null
20962
null
Validated
null
null
null
{ "abstract": " Features and applications of quasi-spherical settling accretion onto rotating\nmagnetized neutron stars in high-mass X-ray binaries are discussed. The\nsettling accretion occurs in wind-fed HMXBs when the plasma cooling time is\nlonger than the free-fall time from the gravitational capture radius, which can\ntake place in low-luminosity HMXBs with $L_x\\lesssim 4\\times 10^{36}$ erg/s. We\nbriefly review the implications of the settling accretion, focusing on the SFXT\nphenomenon, which can be related to instability of the quasi-spherical\nconvective shell above the neutron star magnetosphere due to magnetic\nreconnection from fast temporarily magnetized winds from OB-supergiant. If a\nyoung neutron star in a wind-fed HMXB is rapidly rotating, the propeller regime\nin a quasi-spherical hot shell occurs. We show that X-ray spectral and temporal\nproperties of enigmatic $\\gamma$ Cas Be-stars are consistent with failed\nsettling accretion regime onto a propelling neutron star. The subsequent\nevolutionary stage of $\\gamma$ Cas and its analogs should be the X Per-type\nbinaries comprising low-luminosity slowly rotating X-ray pulsars.\n", "title": "Low-luminosity stellar wind accretion onto neutron stars in HMXBs" }
null
null
[ "Physics" ]
null
true
null
20963
null
Validated
null
null
null
{ "abstract": " Inference amortization methods share information across multiple\nposterior-inference problems, allowing each to be carried out more efficiently.\nGenerally, they require the inversion of the dependency structure in the\ngenerative model, as the modeller must learn a mapping from observations to\ndistributions approximating the posterior. Previous approaches have involved\ninverting the dependency structure in a heuristic way that fails to capture\nthese dependencies correctly, thereby limiting the achievable accuracy of the\nresulting approximations. We introduce an algorithm for faithfully, and\nminimally, inverting the graphical model structure of any generative model.\nSuch inverses have two crucial properties: (a) they do not encode any\nindependence assertions that are absent from the model and; (b) they are local\nmaxima for the number of true independencies encoded. We prove the correctness\nof our approach and empirically show that the resulting minimally faithful\ninverses lead to better inference amortization than existing heuristic\napproaches.\n", "title": "Faithful Inversion of Generative Models for Effective Amortized Inference" }
null
null
null
null
true
null
20964
null
Default
null
null
null
{ "abstract": " We study the U.S. Operations Research/Industrial-Systems Engineering (ORIE)\nfaculty hiring network, consisting of 1,179 faculty origin and destination data\ntogether with attribute data from 83 ORIE departments. A social network\nanalysis of faculty hires can reveal important patterns in an academic field,\nsuch as the existence of a hierarchy or sociological aspects such as the\npresence of communities of departments. We first statistically test for the\nexistence of a linear hierarchy in the network and for its steepness. We find a\nnear linear hierarchical order of the departments, proposing a new index for\nhiring networks, which we contrast with other indicators of hierarchy,\nincluding published rankings. A single index is not capable to capture the full\nstructure of a complex network, however, so we next fit a latent exponential\nrandom graph model (ERGM) to the network, which is able to reproduce its main\nobserved characteristics: high incidence of self-hiring, skewed out-degree\ndistribution, low density and clustering. Finally, we use the latent variables\nin the ERGM to simplify the network to one where faculty hires take place among\nthree groups of departments. We contrast our findings with those reported for\nother related disciplines, Computer Science and Business.\n", "title": "A social Network Analysis of the Operations Research/Industrial Engineering Faculty Hiring Network" }
null
null
null
null
true
null
20965
null
Default
null
null
null
{ "abstract": " An aggregate data meta-analysis is a statistical method that pools the\nsummary statistics of several selected studies to estimate the outcome of\ninterest. When considering a continuous outcome, typically each study must\nreport the same measure of the outcome variable and its spread (e.g., the\nsample mean and its standard error). However, some studies may instead report\nthe median along with various measures of spread. Recently, the task of\nincorporating medians in meta-analysis has been achieved by estimating the\nsample mean and its standard error from each study that reports a median in\norder to meta-analyze the means. In this paper, we propose two alternative\napproaches to meta-analyze data that instead rely on medians. We systematically\ncompare these approaches via simulation study to each other and to methods that\ntransform the study-specific medians and spread into sample means and their\nstandard errors. We demonstrate that the proposed median-based approaches\nperform better than the transformation-based approaches, especially when\napplied to skewed data and data with high inter-study variance. In addition,\nwhen meta-analyzing data that consists of medians, we show that the\nmedian-based approaches perform considerably better than or comparably to the\nbest-case scenario for a transformation approach: conducting a meta-analysis\nusing the actual sample mean and standard error of the mean of each study.\nFinally, we illustrate these approaches in a meta-analysis of patient delay in\ntuberculosis diagnosis.\n", "title": "One-sample aggregate data meta-analysis of medians" }
null
null
null
null
true
null
20966
null
Default
null
null
null
{ "abstract": " Large inter-datacenter transfers are crucial for cloud service efficiency and\nare increasingly used by organizations that have dedicated wide area networks\nbetween datacenters. A recent work uses multicast forwarding trees to reduce\nthe bandwidth needs and improve completion times of point-to-multipoint\ntransfers. Using a single forwarding tree per transfer, however, leads to poor\nperformance because the slowest receiver dictates the completion time for all\nreceivers. Using multiple forwarding trees per transfer alleviates this\nconcern--the average receiver could finish early; however, if done naively,\nbandwidth usage would also increase and it is apriori unclear how best to\npartition receivers, how to construct the multiple trees and how to determine\nthe rate and schedule of flows on these trees. This paper presents QuickCast, a\nfirst solution to these problems. Using simulations on real-world network\ntopologies, we see that QuickCast can speed up the average receiver's\ncompletion time by as much as $10\\times$ while only using $1.04\\times$ more\nbandwidth; further, the completion time for all receivers also improves by as\nmuch as $1.6\\times$ faster at high loads.\n", "title": "QuickCast: Fast and Efficient Inter-Datacenter Transfers using Forwarding Tree Cohorts" }
null
null
[ "Computer Science" ]
null
true
null
20967
null
Validated
null
null
null
{ "abstract": " Machine learning is finding increasingly broad application in the physical\nsciences. This most often involves building a model relationship between a\ndependent, measurable output and an associated set of controllable, but\ncomplicated, independent inputs. We present a tutorial on current techniques in\nmachine learning -- a jumping-off point for interested researchers to advance\ntheir work. We focus on deep neural networks with an emphasis on demystifying\ndeep learning. We begin with background ideas in machine learning and some\nexample applications from current research in plasma physics. We discuss\nsupervised learning techniques for modeling complicated functions, beginning\nwith familiar regression schemes, then advancing to more sophisticated deep\nlearning methods. We also address unsupervised learning and techniques for\nreducing the dimensionality of input spaces. Along the way, we describe methods\nfor practitioners to help ensure that their models generalize from their\ntraining data to as-yet-unseen test data. We describe classes of tasks --\npredicting scalars, handling images, fitting time-series -- and prepare the\nreader to choose an appropriate technique. We finally point out some\nlimitations to modern machine learning and speculate on some ways that\npractitioners from the physical sciences may be particularly suited to help.\n", "title": "Contemporary machine learning: a guide for practitioners in the physical sciences" }
null
null
null
null
true
null
20968
null
Default
null
null
null
{ "abstract": " Polycrystalline diamond coatings have been grown on cemented carbide\nsubstrates with different aspect ratios by a microwave plasma CVD in\nmethane-hydrogen gas mixtures. To protect the edges of the substrates from\nnon-uniform heating due to the plasma edge effect, a special plateholder with\npockets for group growth has been used. The difference in heights of the\nsubstrates and plateholder, and its influence on the diamond film mean grain\nsize, growth rate, phase composition and stress was investigated. The substrate\ntemperature range, within which uniform diamond films are produced with good\nadhesion, is determined. The diamond-coated cutting inserts produced at\noptimized process exhibited a reduction of cutting force and wear resistance by\na factor of two, and cutting efficiency increase by 4.3 times upon turning A390\nAl-Si alloy as compared to performance of uncoated tools.\n", "title": "Uniform diamond coatings on WC-Co hard alloy cutting inserts deposited by a microwave plasma CVD" }
null
null
null
null
true
null
20969
null
Default
null
null
null
{ "abstract": " We present a new approach for identifying situations and behaviours, which we\ncall \"moves\", from soccer games in the 2D simulation league. Being able to\nidentify key situations and behaviours are useful capabilities for analysing\nsoccer matches, anticipating opponent behaviours to aid selection of\nappropriate tactics, and also as a prerequisite for automatic learning of\nbehaviours and policies. To support a wide set of strategies, our goal is to\nidentify situations from data, in an unsupervised way without making use of\npre-defined soccer specific concepts such as \"pass\" or \"dribble\". The recurrent\nneural networks we use in our approach act as a high-dimensional projection of\nthe recent history of a situation on the field. Similar situations, i.e., with\nsimilar histories, are found by clustering of network states. The same networks\nare also used to learn so-called conceptors, that are lower-dimensional\nmanifolds that describe trajectories through a high-dimensional state space\nthat enable situation-specific predictions from the same neural network. With\nthe proposed approach, we can segment games into sequences of situations that\nare learnt in an unsupervised way, and learn conceptors that are useful for the\nprediction of the near future of the respective situation.\n", "title": "Analysing Soccer Games with Clustering and Conceptors" }
null
null
[ "Computer Science" ]
null
true
null
20970
null
Validated
null
null
null
{ "abstract": " The sum of Log-normal variates is encountered in many challenging\napplications such as in performance analysis of wireless communication systems\nand in financial engineering. Several approximation methods have been developed\nin the literature, the accuracy of which is not ensured in the tail regions.\nThese regions are of primordial interest wherein small probability values have\nto be evaluated with high precision. Variance reduction techniques are known to\nyield accurate, yet efficient, estimates of small probability values. Most of\nthe existing approaches, however, have considered the problem of estimating the\nright-tail of the sum of Log-normal random variables (RVS). In the present\nwork, we consider instead the estimation of the left-tail of the sum of\ncorrelated Log-normal variates with Gaussian copula under a mild assumption on\nthe covariance matrix. We propose an estimator combining an existing\nmean-shifting importance sampling approach with a control variate technique.\nThe main result is that the proposed estimator has an asymptotically vanishing\nrelative error which represents a major finding in the context of the left-tail\nsimulation of the sum of Log-normal RVs. Finally, we assess by various\nsimulation results the performances of the proposed estimator compared to\nexisting estimators.\n", "title": "On the Efficient Simulation of the Left-Tail of the Sum of Correlated Log-normal Variates" }
null
null
null
null
true
null
20971
null
Default
null
null
null
{ "abstract": " Recently, optional stopping has been a subject of debate in the Bayesian\npsychology community. Rouder (2014) argues that optional stopping is no problem\nfor Bayesians, and even recommends the use of optional stopping in practice, as\ndo Wagenmakers et al. (2012). This article addresses the question whether\noptional stopping is problematic for Bayesian methods, and specifies under\nwhich circumstances and in which sense it is and is not. By slightly varying\nand extending Rouder's (2014) experiment, we illustrate that, as soon as the\nparameters of interest are equipped with default or pragmatic priors - which\nmeans, in most practical applications of Bayes Factor hypothesis testing -\nresilience to optional stopping can break down. We distinguish between four\ntypes of default priors, each having their own specific issues with optional\nstopping, ranging from no-problem-at-all (Type 0 priors) to quite severe (Type\nII and III priors).\n", "title": "Why optional stopping is a problem for Bayesians" }
null
null
null
null
true
null
20972
null
Default
null
null