Schema of the rows below (field: type / values):

text: null
inputs: dict with keys "abstract" and "title"
prediction: null
prediction_agent: null
annotation: list of category labels, or null
annotation_agent: null
multi_label: bool (true for every row)
explanation: null
id: string, 1-5 characters
metadata: null
status: string, 2 classes ("Default", "Validated")
event_timestamp: null
metrics: null
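Read back into code, the schema above maps onto a small normalizer. The following is a minimal sketch (the two sample rows are abridged from records 3101 and 3105 below, with abstracts elided; `parse_record` is a hypothetical helper, not part of the dataset):

```python
def parse_record(row):
    """Flatten one raw row of the dump into its varying fields only."""
    return {
        "id": row["id"],
        "title": row["inputs"]["title"],
        "annotation": row.get("annotation"),  # list of labels, or None
        "multi_label": row["multi_label"],    # true for every row shown
        "status": row["status"],              # "Default" or "Validated"
    }

# Two abridged rows mirroring records 3101 and 3105; abstracts elided.
rows = [
    {"id": "3101",
     "inputs": {"title": "High-Resolution Breast Cancer Screening with "
                         "Multi-View Deep Convolutional Neural Networks",
                "abstract": "..."},
     "annotation": None, "multi_label": True, "status": "Default"},
    {"id": "3105",
     "inputs": {"title": "Computability of semicomputable manifolds in "
                         "computable topological spaces",
                "abstract": "..."},
     "annotation": ["Computer Science", "Mathematics"],
     "multi_label": True, "status": "Validated"},
]

records = [parse_record(r) for r in rows]
validated = [r["id"] for r in records if r["status"] == "Validated"]
print(validated)  # → ['3105']
```

Only the fields that actually vary in this chunk (id, title, annotation, status) are kept; the always-null fields are dropped by the normalizer.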
{ "abstract": " Advances in deep learning for natural images have prompted a surge of\ninterest in applying similar techniques to medical images. The majority of the\ninitial attempts focused on replacing the input of a deep convolutional neural\nnetwork with a medical image, which does not take into consideration the\nfundamental differences between these two types of images. Specifically, fine\ndetails are necessary for detection in medical images, unlike in natural images\nwhere coarse structures matter most. This difference makes it inadequate to use\nthe existing network architectures developed for natural images, because they\nwork on heavily downscaled images to reduce the memory requirements. This hides\ndetails necessary to make accurate predictions. Additionally, a single exam in\nmedical imaging often comes with a set of views which must be fused in order to\nreach a correct conclusion. In our work, we propose to use a multi-view deep\nconvolutional neural network that handles a set of high-resolution medical\nimages. We evaluate it on large-scale mammography-based breast cancer screening\n(BI-RADS prediction) using 886,000 images. We focus on investigating the impact\nof the training set size and image size on the prediction accuracy. Our results\nhighlight that performance increases with the size of training set, and that\nthe best performance can only be achieved using the original resolution. In the\nreader study, performed on a random subset of the test set, we confirmed the\nefficacy of our model, which achieved performance comparable to a committee of\nradiologists when presented with the same data.\n", "title": "High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks" }
id: "3101" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " We investigate the frequentist properties of Bayesian procedures for\nestimation based on the horseshoe prior in the sparse multivariate normal means\nmodel. Previous theoretical results assumed that the sparsity level, that is,\nthe number of signals, was known. We drop this assumption and characterize the\nbehavior of the maximum marginal likelihood estimator (MMLE) of a key parameter\nof the horseshoe prior. We prove that the MMLE is an effective estimator of the\nsparsity level, in the sense that it leads to (near) minimax optimal estimation\nof the underlying mean vector generating the data. Besides this empirical Bayes\nprocedure, we consider the hierarchical Bayes method of putting a prior on the\nunknown sparsity level as well. We show that both Bayesian techniques lead to\nrate-adaptive optimal posterior contraction, which implies that the horseshoe\nposterior is a good candidate for generating rate-adaptive credible sets.\n", "title": "Adaptive posterior contraction rates for the horseshoe" }
id: "3102" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " A convex optimization-based method is proposed to numerically solve dynamic\nprograms in continuous state and action spaces. This approach using a\ndiscretization of the state space has the following salient features. First, by\nintroducing an auxiliary optimization variable that assigns the contribution of\neach grid point, it does not require an interpolation in solving an associated\nBellman equation and constructing a control policy. Second, the proposed method\nallows us to solve the Bellman equation with a desired level of precision via\nconvex programming in the case of linear systems and convex costs. We can also\nconstruct a control policy of which performance converges to the optimum as the\ngrid resolution becomes finer in this case. Third, when a nonlinear\ncontrol-affine system is considered, the convex optimization approach provides\nan approximate control policy with a provable suboptimality bound. Fourth, for\ngeneral cases, the proposed convex formulation of dynamic programming operators\ncan be simply modified as a nonconvex bi-level program, in which the inner\nproblem is a linear program, without losing convergence properties. From our\nconvex methods and analyses, we observe that convexity in dynamic programming\ndeserves attention as it can play a critical role in obtaining a tractable and\nconvergent numerical solution.\n", "title": "A Convex Optimization Approach to Dynamic Programming in Continuous State and Action Spaces" }
id: "3103" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " The enactive approach to cognition is typically proposed as a viable\nalternative to traditional cognitive science. Enactive cognition displaces the\nexplanatory focus from the internal representations of the agent to the direct\nsensorimotor interaction with its environment. In this paper, we investigate\nenactive learning through means of artificial agent simulations. We compare the\nperformances of the enactive agent to an agent operating on classical\nreinforcement learning in foraging tasks within maze environments. The\ncharacteristics of the agents are analysed in terms of the accessibility of the\nenvironmental states, goals, and exploration/exploitation tradeoffs. We confirm\nthat the enactive agent can successfully interact with its environment and\nlearn to avoid unfavourable interactions using intrinsically defined goals. The\nperformance of the enactive agent is shown to be limited by the number of\naffordable actions.\n", "title": "Investigating Enactive Learning for Autonomous Intelligent Agents" }
id: "3104" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " We study computable topological spaces and semicomputable and computable sets\nin these spaces. In particular, we investigate conditions under which\nsemicomputable sets are computable. We prove that a semicomputable compact\nmanifold $M$ is computable if its boundary $\\partial M$ is computable. We also\nshow how this result combined with certain construction which compactifies a\nsemicomputable set leads to the conclusion that some noncompact semicomputable\nmanifolds in computable metric spaces are computable.\n", "title": "Computability of semicomputable manifolds in computable topological spaces" }
id: "3105" | annotation: [ "Computer Science", "Mathematics" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " We present a theoretical and experimental study of boundary-driven acoustic\nstreaming in an inhomogeneous fluid with variations in density and\ncompressibility. In a homogeneous fluid this streaming results from dissipation\nin the boundary layers (Rayleigh streaming). We show that in an inhomogeneous\nfluid, an additional non-dissipative force density acts on the fluid to\nstabilize particular inhomogeneity configurations, which markedly alters and\neven suppresses the streaming flows. Our theoretical and numerical analysis of\nthe phenomenon is supported by ultrasound experiments performed with\ninhomogeneous aqueous iodixanol solutions in a glass-silicon microchip.\n", "title": "Acoustic streaming and its suppression in inhomogeneous fluids" }
id: "3106" | annotation: [ "Physics" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " The dynamic and secondary spectra of many pulsars show evidence for\nlong-lived, aligned images of the pulsar that are stationary on a thin\nscattering sheet. One explanation for this phenomenon considers the effects of\nwave crests along sheets in the ionized interstellar medium, such as those due\nto Alfvén waves propagating along current sheets. If these sheets are closely\naligned to our line-of-sight to the pulsar, high bending angles arise at the\nwave crests and a selection effect causes alignment of images produced at\ndifferent crests, similar to grazing reflection off of a lake. Using geometric\noptics, we develop a simple parameterized model of these corrugated sheets that\ncan be constrained with a single observation and that makes observable\npredictions for variations in the scintillation of the pulsar over time and\nfrequency. This model reveals qualitative differences between lensing from\noverdense and underdense corrugated sheets: Only if the sheet is overdense\ncompared to the surrounding interstellar medium can the lensed images be\nbrighter than the line-of-sight image to the pulsar, and the faint lensed\nimages are closer to the pulsar at higher frequencies if the sheet is\nunderdense, but at lower frequencies if the sheet is overdense.\n", "title": "Predicting Pulsar Scintillation from Refractive Plasma Sheets" }
id: "3107" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " In the presented study Parent/Teacher Disruptive Behavior Disorder (DBD)\nrating scale based on the Diagnostic and Statistical Manual of Mental Disorders\n(DSM-IV-TR [APA, 2000]) which was developed by Pelham and his colleagues\n(Pelham et al., 1992) was translated and adopted for assessment of childhood\nbehavioral abnormalities, especially ADHD, ODD and CD in Georgian children and\nadolescents. The DBD rating scale was translated into Georgian language using\nback translation technique by English language philologists and checked and\ncorrected by qualified psychologists and psychiatrist of Georgia. Children and\nadolescents in the age range of 6 to 16 years (N 290; Mean Age 10.50, SD=2.88)\nincluding 153 males (Mean Age 10.42, SD= 2.62) and 141 females (Mean Age 10.60,\nSD=3.14) were recruited from different public schools of Tbilisi and the\nNeurology Department of the Pediatric Clinic of the Tbilisi State Medical\nUniversity. Participants objectively were assessed via interviewing\nparents/teachers and qualified psychologists in three different settings\nincluding school, home and clinic. In terms of DBD total scores revealed\nstatistically significant differences between healthy controls (M=27.71,\nSD=17.26) and children and adolescents with ADHD (M=61.51, SD= 22.79).\nStatistically significant differences were found for inattentive subtype\nbetween control (M=8.68, SD=5.68) and ADHD (M=18.15, SD=6.57) groups. In\ngeneral it was shown that children and adolescents with ADHD had high score on\nDBD in comparison to typically developed persons. In the study also was\ndetermined gender wise prevalence in children and adolescents with ADHD, ODD\nand CD. The research revealed prevalence of males in comparison with females in\nall investigated categories.\n", "title": "Disruptive Behavior Disorder (DBD) Rating Scale for Georgian Population" }
id: "3108" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " The Poisson-Fermi model is an extension of the classical Poisson-Boltzmann\nmodel to include the steric and correlation effects of ions and water treated\nas nonuniform spheres in aqueous solutions. Poisson-Boltzmann electrostatic\ncalculations are essential but computationally very demanding for molecular\ndynamics or continuum simulations of complex systems in molecular biophysics\nand electrochemistry. The graphic processing unit (GPU) with enormous\narithmetic capability and streaming memory bandwidth is now a powerful engine\nfor scientific as well as industrial computing. We propose two parallel GPU\nalgorithms, one for linear solver and the other for nonlinear solver, for\nsolving the Poisson-Fermi equation approximated by the standard finite\ndifference method in 3D to study biological ion channels with crystallized\nstructures from the Protein Data Bank, for example. Numerical methods for both\nlinear and nonlinear solvers in the parallel algorithms are given in detail to\nillustrate the salient features of the CUDA (compute unified device\narchitecture) software platform of GPU in implementation. It is shown that the\nparallel algorithms on GPU over the sequential algorithms on CPU (central\nprocessing unit) can achieve 22.8x and 16.9x speedups for the linear solver\ntime and total runtime, respectively.\n", "title": "A GPU Poisson-Fermi Solver for Ion Channel Simulations" }
id: "3109" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " We study several aspects of the recently introduced fixed-phase spin-orbit\ndiffusion Monte Carlo (FPSODMC) method, in particular, its relation to the\nfixed-node method and its potential use as a general approach for electronic\nstructure calculations. We illustrate constructions of spinor-based wave\nfunctions with the full space-spin symmetry without assigning up or down spin\nlabels to particular electrons, effectively \"complexifying\" even ordinary\nreal-valued wave functions. Interestingly, with proper choice of the simulation\nparameters and spin variables, such fixed-phase calculations enable one to\nreach also the fixed-node limit. The fixed-phase solution provides a\nstraightforward interpretation as the lowest bosonic state in a given effective\npotential generated by the many-body approximate phase. In addition, the\ndivergences present at real wave function nodes are smoothed out to lower\ndimensionality, decreasing thus the variation of sampled quantities and making\nthe sampling also more straightforward. We illustrate some of these properties\non calculations of selected first-row systems that recover the fixed-node\nresults with quantitatively similar levels of the corresponding biases. At the\nsame time, the fixed-phase approach opens new possibilities for more general\ntrial wave functions with further opportunities for increasing accuracy in\npractical calculations.\n", "title": "Quantum Monte Carlo with variable spins: fixed-phase and fixed-node approximations" }
id: "3110" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " We introduce coroICA, confounding-robust independent component analysis, a\nnovel ICA algorithm which decomposes linearly mixed multivariate observations\ninto independent components that are corrupted (and rendered dependent) by\nhidden group-wise stationary confounding. It extends the ordinary ICA model in\na theoretically sound and explicit way to incorporate group-wise (or\nenvironment-wise) confounding. We show that our general noise model allows to\nperform ICA in settings where other noisy ICA procedures fail. Additionally, it\ncan be used for applications with grouped data by adjusting for different\nstationary noise within each group. We show that the noise model has a natural\nrelation to causality and explain how it can be applied in the context of\ncausal inference. In addition to our theoretical framework, we provide an\nefficient estimation procedure and prove identifiability of the unmixing matrix\nunder mild assumptions. Finally, we illustrate the performance and robustness\nof our method on simulated data, provide audible and visual examples, and\ndemonstrate the applicability to real-world scenarios by experiments on\npublicly available Antarctic ice core data as well as two EEG data sets. We\nprovide a scikit-learn compatible pip-installable Python package coroICA as\nwell as R and Matlab implementations accompanied by a documentation at\nthis https URL.\n", "title": "Robustifying Independent Component Analysis by Adjusting for Group-Wise Stationary Noise" }
id: "3111" | annotation: [ "Statistics", "Quantitative Biology" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " Diversification-Based Learning (DBL) derives from a collection of principles\nand methods introduced in the field of metaheuristics that have broad\napplications in computing and optimization. We show that the DBL framework goes\nsignificantly beyond that of the more recent Opposition-based learning (OBL)\nframework introduced in Tizhoosh (2005), which has become the focus of numerous\nresearch initiatives in machine learning and metaheuristic optimization. We\nunify and extend earlier proposals in metaheuristic search (Glover, 1997,\nGlover and Laguna, 1997) to give a collection of approaches that are more\nflexible and comprehensive than OBL for creating intensification and\ndiversification strategies in metaheuristic search. We also describe potential\napplications of DBL to various subfields of machine learning and optimization.\n", "title": "Diversification-Based Learning in Computing and Optimization" }
id: "3112" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Canonical correlation analysis is a family of multivariate statistical\nmethods for the analysis of paired sets of variables. Since its proposition,\ncanonical correlation analysis has for instance been extended to extract\nrelations between two sets of variables when the sample size is insufficient in\nrelation to the data dimensionality, when the relations have been considered to\nbe non-linear, and when the dimensionality is too large for human\ninterpretation. This tutorial explains the theory of canonical correlation\nanalysis including its regularised, kernel, and sparse variants. Additionally,\nthe deep and Bayesian CCA extensions are briefly reviewed. Together with the\nnumerical examples, this overview provides a coherent compendium on the\napplicability of the variants of canonical correlation analysis. By bringing\ntogether techniques for solving the optimisation problems, evaluating the\nstatistical significance and generalisability of the canonical correlation\nmodel, and interpreting the relations, we hope that this article can serve as a\nhands-on tool for applying canonical correlation methods in data analysis.\n", "title": "A Tutorial on Canonical Correlation Methods" }
id: "3113" | annotation: [ "Computer Science", "Statistics" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " The field of brain-computer interfaces is poised to advance from the\ntraditional goal of controlling prosthetic devices using brain signals to\ncombining neural decoding and encoding within a single neuroprosthetic device.\nSuch a device acts as a \"co-processor\" for the brain, with applications ranging\nfrom inducing Hebbian plasticity for rehabilitation after brain injury to\nreanimating paralyzed limbs and enhancing memory. We review recent progress in\nsimultaneous decoding and encoding for closed-loop control and plasticity\ninduction. To address the challenge of multi-channel decoding and encoding, we\nintroduce a unifying framework for developing brain co-processors based on\nartificial neural networks and deep learning. These \"neural co-processors\" can\nbe used to jointly optimize cost functions with the nervous system to achieve\ndesired behaviors ranging from targeted neuro-rehabilitation to augmentation of\nbrain function.\n", "title": "Towards Neural Co-Processors for the Brain: Combining Decoding and Encoding in Brain-Computer Interfaces" }
id: "3114" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Many phenomena in collisionless plasma physics require a kinetic description.\nThe evolution of the phase space density can be modeled by means of the Vlasov\nequation, which has to be solved numerically in most of the relevant cases. One\nof the problems that often arise in such simulations is the violation of\nimportant physical conservation laws. Numerical diffusion in phase space\ntranslates into unphysical heating, which can increase the overall energy\nsignificantly, depending on the time scale and the plasma regime. In this\npaper, a general and straightforward way of improving conservation properties\nof Vlasov schemes is presented that can potentially be applied to a variety of\ndifferent codes. The basic idea is to use fluid models with good conservation\nproperties for correcting kinetic models. The higher moments that are missing\nin the fluid models are provided by the kinetic codes, so that both kinetic and\nfluid codes compensate the weaknesses of each other in a closed feedback loop.\n", "title": "Enhanced conservation properties of Vlasov codes through coupling with conservative fluid models" }
id: "3115" | annotation: [ "Physics" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " In this paper we have generalized the notion of $\\lambda$-radial contraction\nin complete Riemannian manifold and developed the concept of $p^\\lambda$-convex\nfunction. We have also given a counter example proving the fact that in general\n$\\lambda$-radial contraction of a geodesic is not necessarily a geodesic. We\nhave also deduced some relations between geodesic convex sets and\n$p^\\lambda$-convex sets and showed that under certain conditions they are\nequivalent.\n", "title": "A note on $p^λ$-convex set in a complete Riemannian manifold" }
id: "3116" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Using a probabilistic argument we show that the second bounded cohomology of\nan acylindrically hyperbolic group $G$ (e.g., a non-elementary hyperbolic or\nrelatively hyperbolic group, non-exceptional mapping class group, ${\\rm\nOut}(F_n)$, \\dots) embeds via the natural restriction maps into the inverse\nlimit of the second bounded cohomologies of its virtually free subgroups, and\nin fact even into the inverse limit of the second bounded cohomologies of its\nhyperbolically embedded virtually free subgroups. This result is new and\nnon-trivial even in the case where $G$ is a (non-free) hyperbolic group. The\ncorresponding statement fails in general for the third bounded cohomology, even\nfor surface groups.\n", "title": "Bounded cohomology and virtually free hyperbolically embedded subgroups" }
id: "3117" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Programming Computable Functions (PCF) is a simplified programming language\nwhich provides the theoretical basis of modern functional programming\nlanguages. Answer set programming (ASP) is a programming paradigm focused on\nsolving search problems. In this paper we provide a translation from PCF to\nASP. Using this translation it becomes possible to specify search problems\nusing PCF.\n", "title": "Transpiling Programmable Computable Functions to Answer Set Programs" }
id: "3118" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Analysis of solar magnetic fields using observations as well as theoretical\ninterpretations of the scattering polarization is commonly designated as a high\npriority area of the solar research. The interpretation of the observed\npolarization raises a serious theoretical challenge to the researchers involved\nin this field. In fact, realistic interpretations need detailed investigations\nof the depolarizing role of isotropic collisions with neutral hydrogen. The\ngoal of this paper is to determine new relationships which allow the\ncalculation of any collisional rates of the d-levels of ions by simply\ndetermining the value of n^* and $E_p$ without the need of determining the\ninteraction potentials and treating the dynamics of collisions. The\ndetermination of n^* and E_p is easy and based on atomic data usually available\nonline. Accurate collisional rates allow a reliable diagnostics of solar\nmagnetic fields. In this work we applied our collisional FORTRAN code to a\nlarge number of cases involving complex and simple ions. After that, the\nresults are utilized and injected in a genetic programming code developed with\nC-language in order to infer original relationships which will be of great help\nto solar applications. We discussed the accuracy of our collisional rates in\nthe cases of polarized complex atoms and atoms with hyperfine structure. The\nrelationships are expressed on the tensorial basis and we explain how to\ninclude their contributions in the master equation giving the variation of the\ndensity matrix elements. As a test, we compared the results obtained through\nthe general relationships provided in this work with the results obtained\ndirectly by running our code of collisions. These comparisons show a percentage\nof error of about 10% in the average value.\n", "title": "Scattering polarization of the $d$-states of ions and solar magnetic field: Effects of isotropic collisions" }
id: "3119" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Topological data analysis, such as persistent homology has shown beneficial\nproperties for machine learning in many tasks. Topological representations,\nsuch as the persistence diagram (PD), however, have a complex structure\n(multiset of intervals) which makes it difficult to combine with typical\nmachine learning workflows. We present novel compact fixed-size vectorial\nrepresentations of PDs based on clustering and bag of words encodings that cope\nwell with the inherent sparsity of PDs. Our novel representations outperform\nstate-of-the-art approaches from topological data analysis and are\ncomputationally more efficient.\n", "title": "Persistence Codebooks for Topological Data Analysis" }
id: "3120" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " We study the problem of computing the \\textsc{Maxima} of a set of $n$\n$d$-dimensional points. For dimensions 2 and 3, there are algorithms to solve\nthe problem with order-oblivious instance-optimal running time. However, in\nhigher dimensions there is still room for improvements. We present an algorithm\nsensitive to the structural entropy of the input set, which improves the\nrunning time, for large classes of instances, on the best solution for\n\\textsc{Maxima} to date for $d \\ge 4$.\n", "title": "Multivariate Analysis for Computing Maxima in High Dimensions" }
id: "3121" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " It has been pointed out that non-singular cosmological solutions in\nsecond-order scalar-tensor theories generically suffer from gradient\ninstabilities. We extend this no-go result to second-order gravitational\ntheories with an arbitrary number of interacting scalar fields. Our proof\nfollows directly from the action of generalized multi-Galileons, and thus is\ndifferent from and complementary to that based on the effective field theory\napproach. Several new terms for generalized multi-Galileons on a flat\nbackground were proposed recently. We find a covariant completion of them and\nconfirm that they do not participate in the no-go argument.\n", "title": "Generalized multi-Galileons, covariantized new terms, and the no-go theorem for non-singular cosmologies" }
id: "3122" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " NMDA receptors (NMDA-R) typically contribute to excitatory synaptic\ntransmission in the central nervous system. While calcium influx through NMDA-R\nplays a critical role in synaptic plasticity, indirect experimental evidence\nalso exists demonstrating actions of NMDAR-mediated calcium influx on neuronal\nexcitability through the activation of calcium-activated potassium channels.\nBut, so far, this mechanism has not been studied theoretically. Our theoretical\nmodel provide a simple description of neuronal electrical activity including\nthe tonic activity of NMDA receptors and a cytosolic calcium compartment. We\nshow that calcium influx through NMDA-R can directly be coupled to activation\nof calcium-activated potassium channels providing an overall inhibitory effect\non neuronal excitability. Furthermore, the presence of tonic NMDA-R activity\npromotes bistability in electrical activity by dramatically increasing the\nstimulus interval where both a stable steady state and repetitive firing can\nexist. This results could provide an intrinsic mechanism for the constitution\nof memory traces in neuronal circuits. They also shed light on the way by which\nbeta-amyloids can decrease neuronal activity when interfering with NMDA-R in\nAlzheimer's disease.\n", "title": "Tonic activation of extrasynaptic NMDA receptors decreases intrinsic excitability and promotes bistability in a model of neuronal activity" }
id: "3123" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Aligning sequencing reads on graph representations of genomes is an important\ningredient of pan-genomics. Such approaches typically find a set of local\nanchors that indicate plausible matches between substrings of a read to\nsubpaths of the graph. These anchor matches are then combined to form a\n(semi-local) alignment of the complete read on a subpath. Co-linear chaining is\nan algorithmically rigorous approach to combine the anchors. It is a well-known\napproach for the case of two sequences as inputs. Here we extend the approach\nso that one of the inputs can be a directed acyclic graph (DAGs), e.g. a\nsplicing graph in transcriptomics or a variant graph in pan-genomics.\nThis extension to DAGs turns out to have a tight connection to the minimum\npath cover problem, asking for a minimum-cardinality set of paths that cover\nall the nodes of a DAG. We study the case when the size $k$ of a minimum path\ncover is small, which is often the case in practice. First, we propose an\nalgorithm for finding a minimum path cover of a DAG $(V,E)$ in $O(k|E|\log|V|)$\ntime, improving all known time-bounds when $k$ is small and the DAG is not too\ndense. Second, we introduce a general technique for extending dynamic\nprogramming (DP) algorithms from sequences to DAGs. This is enabled by our\nminimum path cover algorithm, and works by mimicking the DP algorithm for\nsequences on each path of the minimum path cover. This technique generally\nproduces algorithms that are slower than their counterparts on sequences only\nby a factor $k$. Our technique can be applied, for example, to the classical\nlongest increasing subsequence and longest common subsequence problems,\nextended to labeled DAGs. Finally, we apply this technique to the co-linear\nchaining problem. We also implemented the new co-linear chaining approach.\nExperiments on splicing graphs show that the new method is efficient also in\npractice.\n", "title": "Using Minimum Path Cover to Boost Dynamic Programming on DAGs: Co-Linear Chaining Extended" }
id: "3124" | annotation: null | multi_label: true | status: "Default" | all other fields: null
{ "abstract": " Ensembles of classifier models typically deliver superior performance and can\noutperform single classifier models given a dataset and classification task at\nhand. However, the gain in performance comes together with the lack in\ncomprehensibility, posing a challenge to understand how each model affects the\nclassification outputs and where the errors come from. We propose a tight\nvisual integration of the data and the model space for exploring and combining\nclassifier models. We introduce a workflow that builds upon the visual\nintegration and enables the effective exploration of classification outputs and\nmodels. We then present a use case in which we start with an ensemble\nautomatically selected by a standard ensemble selection algorithm, and show how\nwe can manipulate models and alternative combinations.\n", "title": "Visual Integration of Data and Model Space in Ensemble Learning" }
id: "3125" | annotation: [ "Computer Science", "Statistics" ] | multi_label: true | status: "Validated" | all other fields: null
{ "abstract": " A rotating continuum of particles attracted to each other by gravity may be\nmodeled by the Euler-Poisson system. The existence of solutions is a very\nclassical problem. Here it is proven that a curve of solutions exists,\nparametrized by the rotation speed, with a fixed mass independent of the speed.\nThe rotation is allowed to vary with the distance to the axis. A special case\nis when the equation of state is $p=\\rho^\\gamma,\\ 6/5<\\gamma<2,\\ \\gamma\\ne4/3$,\nin contrast to previous variational methods which have required $4/3 < \\gamma$.\nThe continuum of particles may alternatively be modeled microscopically by\nthe Vlasov-Poisson system. The kinetic density is a prescribed function. We\nprove an analogous theorem asserting the existence of a curve of solutions with\nconstant mass. In this model the whole range $(6/5,2)$ is allowed, including\n$\\gamma=4/3$.\n", "title": "Steady States of Rotating Stars and Galaxies" }
null
null
null
null
true
null
3126
null
Default
null
null
null
{ "abstract": " We have studied the critical properties of the contact process on a square\nlattice with quenched site dilution by Monte Carlo simulations. This was\nachieved by generating in advance the percolating cluster, through the use of\nan appropriate epidemic model, and then by the simulation of the contact\nprocess on the top of the percolating cluster. The dynamic critical exponents\nwere calculated by assuming an activated scaling relation and the static\nexponents by the usual power law behavior. Our results are in agreement with\nthe prediction that the quenched diluted contact process belongs to the\nuniversality class of the random transverse-field Ising model. We have also\nanalyzed the model and determined the phase diagram by the use of a mean-field\ntheory that takes into account the correlation between neighboring sites.\n", "title": "Critical properties of the contact process with quenched dilution" }
null
null
null
null
true
null
3127
null
Default
null
null
null
{ "abstract": " A new Strouhal-Reynolds number relationship, $St=1/(A+B/Re)$, has been\nrecently proposed based on observations of laminar vortex shedding from\ncircular cylinders in a flowing soap film. Since the new $St$-$Re$ relation was\nderived from a general physical consideration, it raises the possibility that\nit may be applicable to vortex shedding from bodies other than circular ones.\nThe work presented herein provides experimental evidence that this is the case.\nOur measurements also show that in the asymptotic limit\n($Re\\rightarrow\\infty$), $St_{\\infty}=1/A\\simeq0.21$ is constant independent of\nrod shapes, leaving $B$ the only parameter that is shape dependent.\n", "title": "A Unified Strouhal-Reynolds Number Relationship for Laminar Vortex Streets Generated by Different Shaped Obstacles" }
null
null
[ "Physics" ]
null
true
null
3128
null
Validated
null
null
null
{ "abstract": " The randomized-feature approach has been successfully employed in large-scale\nkernel approximation and supervised learning. The distribution from which the\nrandom features are drawn impacts the number of features required to\nefficiently perform a learning task. Recently, it has been shown that employing\ndata-dependent randomization improves the performance in terms of the required\nnumber of random features. In this paper, we are concerned with the\nrandomized-feature approach in supervised learning for good generalizability.\nWe propose the Energy-based Exploration of Random Features (EERF) algorithm\nbased on a data-dependent score function that explores the set of possible\nfeatures and exploits the promising regions. We prove that the proposed score\nfunction with high probability recovers the spectrum of the best fit within the\nmodel class. Our empirical results on several benchmark datasets further verify\nthat our method requires smaller number of random features to achieve a certain\ngeneralization error compared to the state-of-the-art while introducing\nnegligible pre-processing overhead. EERF can be implemented in a few lines of\ncode and requires no additional tuning parameters.\n", "title": "On Data-Dependent Random Features for Improved Generalization in Supervised Learning" }
null
null
null
null
true
null
3129
null
Default
null
null
null
{ "abstract": " The recent discovery of a direct link between the sharp peak in the electron\nquasiparticle scattering rate of cuprate superconductors and the well-known\npeak-dip-hump structure in the electron quasiparticle excitation spectrum is\ncalling for an explanation. Within the framework of the kinetic-energy driven\nsuperconducting mechanism, the complicated line-shape in the electron\nquasiparticle excitation spectrum of cuprate superconductors is investigated.\nIt is shown that the interaction between electrons by the exchange of spin\nexcitations generates a notable peak structure in the electron quasiparticle\nscattering rate around the antinodal and nodal regions. However, this peak\nstructure disappears at the hot spots, which leads to that the striking\npeak-dip-hump structure is developed around the antinodal and nodal regions,\nand vanishes at the hot spots. The theory also confirms that the sharp peak\nobserved in the electron quasiparticle scattering rate is directly responsible\nfor the remarkable peak-dip-hump structure in the electron quasiparticle\nexcitation spectrum of cuprate superconductors.\n", "title": "Anomalous electron spectrum and its relation to peak structure of electron scattering rate in cuprate superconductors" }
null
null
null
null
true
null
3130
null
Default
null
null
null
{ "abstract": " This paper proposes new specification tests for conditional models with\ndiscrete responses, which are key to apply efficient maximum likelihood\nmethods, to obtain consistent estimates of partial effects and to get\nappropriate predictions of the probability of future events. In particular, we\ntest the static and dynamic ordered choice model specifications and can cover\ninfinite support distributions for e.g. count data. The traditional approach\nfor specification testing of discrete response models is based on probability\nintegral transforms of a jittered discrete data which leads to continuous\nuniform iid series under the true conditional distribution. Then, standard\nspecification testing techniques for continuous variables could be applied to\nthe transformed series, but the extra randomness from jitters affects the power\nproperties of these methods. We investigate in this paper an alternative\ntransformation based only on original discrete data that avoids any\nrandomization. We analyze the asymptotic properties of goodness-of-fit tests\nbased on this new transformation and explore the properties in finite samples\nof a bootstrap algorithm to approximate the critical values of test statistics\nwhich are model and parameter dependent. We show analytically and in\nsimulations that our approach dominates the methods based on randomization in\nterms of power. We apply the new tests to models of the monetary policy\nconducted by the Federal Reserve.\n", "title": "New goodness-of-fit diagnostics for conditional discrete response models" }
null
null
null
null
true
null
3131
null
Default
null
null
null
{ "abstract": " Integrating different semiconductor materials into an epitaxial device\nstructure offers additional degrees of freedom to select for optimal material\nproperties in each layer. However, interface between materials with different\nvalences (i.e. III-V, II-VI and IV semiconductors) can be difficult to form\nwith high quality. Using ZnSe/GaAs as a model system, we explore the use of UV\nillumination during heterovalent interface growth by molecular beam epitaxy as\na way to modify the interface properties. We find that UV illumination alters\nthe mixture of chemical bonds at the interface, permitting the formation of\nGa-Se bonds that help to passivate the underlying GaAs layer. Illumination also\nhelps to reduce defects in the ZnSe epilayer. These results suggest that\nmoderate UV illumination during growth may be used as a way to improve the\noptical properties of both the GaAs and ZnSe layers on either side of the\ninterface.\n", "title": "Tailoring Heterovalent Interface Formation with Light" }
null
null
null
null
true
null
3132
null
Default
null
null
null
{ "abstract": " We survey the main ideas in the early history of the subjects on which\nRiemann worked and that led to some of his most important discoveries. The\nsubjects discussed include the theory of functions of a complex variable,\nelliptic and Abelian integrals, the hypergeometric series, the zeta function,\ntopology, differential geometry, integration, and the notion of space. We shall\nsee that among Riemann's predecessors in all these fields, one name occupies a\nprominent place, this is Leonhard Euler. The final version of this paper will\nappear in the book \\emph{From Riemann to differential geometry and relativity}\n(L. Ji, A. Papadopoulos and S. Yamada, ed.) Berlin: Springer, 2017.\n", "title": "Looking backward: From Euler to Riemann" }
null
null
null
null
true
null
3133
null
Default
null
null
null
{ "abstract": " In this paper we investigate an emerging application, 3D scene understanding,\nlikely to be significant in the mobile space in the near future. The goal of\nthis exploration is to reduce execution time while meeting our quality of\nresult objectives. In previous work we showed for the first time that it is\npossible to map this application to power constrained embedded systems,\nhighlighting that decision choices made at the algorithmic design-level have\nthe most impact.\nAs the algorithmic design space is too large to be exhaustively evaluated, we\nuse a previously introduced multi-objective Random Forest Active Learning\nprediction framework dubbed HyperMapper, to find good algorithmic designs. We\nshow that HyperMapper generalizes on a recent cutting edge 3D scene\nunderstanding algorithm and on a modern GPU-based computer architecture.\nHyperMapper is able to beat an expert human hand-tuning the algorithmic\nparameters of the class of Computer Vision applications taken under\nconsideration in this paper automatically. In addition, we use crowd-sourcing\nusing a 3D scene understanding Android app to show that the Pareto front\nobtained on an embedded system can be used to accelerate the same application\non all the 83 smart-phones and tablets crowd-sourced with speedups ranging from\n2 to over 12.\n", "title": "Algorithmic Performance-Accuracy Trade-off in 3D Vision Applications Using HyperMapper" }
null
null
null
null
true
null
3134
null
Default
null
null
null
{ "abstract": " We prove that for a finitely generated field over an infinite perfect field\nk, and for any integer n, the (n,n)-th MW-motivic cohomology group identifies\nwith the n-th Milnor-Witt K-theory group of that field.\n", "title": "A comparison theorem for MW-motivic cohomology" }
null
null
null
null
true
null
3135
null
Default
null
null
null
{ "abstract": " This paper considers a version of the Wiener filtering problem for\nequalization of passive quantum linear quantum systems. We demonstrate that\ntaking into consideration the quantum nature of the signals involved leads to\nfeatures typically not encountered in classical equalization problems. Most\nsignificantly, finding a mean-square optimal quantum equalizing filter amounts\nto solving a nonconvex constrained optimization problem. We discuss two\napproaches to solving this problem, both involving a relaxation of the\nconstraint. In both cases, unlike classical equalization, there is a threshold\non the variance of the noise below which an improvement of the mean-square\nerror cannot be guaranteed.\n", "title": "Wiener Filtering for Passive Linear Quantum Systems" }
null
null
null
null
true
null
3136
null
Default
null
null
null
{ "abstract": " Robust navigation in urban environments has received a considerable amount of\nboth academic and commercial interest over recent years. This is primarily due\nto large commercial organizations such as Google and Uber stepping into the\nautonomous navigation market. Most of this research has shied away from Global\nNavigation Satellite System (GNSS) based navigation. The aversion to utilizing\nGNSS data is due to the degraded nature of the data in urban environment (e.g.,\nmultipath, poor satellite visibility). The degradation of the GNSS data in\nurban environments makes it such that traditional (GNSS) positioning methods\n(e.g., extended Kalman filter, particle filters) perform poorly. However,\nrecent advances in robust graph theoretic based sensor fusion methods,\nprimarily applied to Simultaneous Localization and Mapping (SLAM) based robotic\napplications, can also be applied to GNSS data processing. This paper will\nutilize one such method known as the factor graph in conjunction several robust\noptimization techniques to evaluate their applicability to robust GNSS data\nprocessing. The goals of this study are two-fold. First, for GNSS applications,\nwe will experimentally evaluate the effectiveness of robust optimization\ntechniques within a graph-theoretic estimation framework. Second, by releasing\nthe software developed and data sets used for this study, we will introduce a\nnew open-source front-end to the Georgia Tech Smoothing and Mapping (GTSAM)\nlibrary for the purpose of integrating GNSS pseudorange observations.\n", "title": "Robust Navigation In GNSS Degraded Environment Using Graph Optimization" }
null
null
null
null
true
null
3137
null
Default
null
null
null
{ "abstract": " This report has several purposes. First, our report is written to investigate\nthe reproducibility of the submitted paper On the regularization of Wasserstein\nGANs (2018). Second, among the experiments performed in the submitted paper,\nfive aspects were emphasized and reproduced: learning speed, stability,\nrobustness against hyperparameter, estimating the Wasserstein distance, and\nvarious sampling method. Finally, we identify which parts of the contribution\ncan be reproduced, and at what cost in terms of resources. All source code for\nreproduction is open to the public.\n", "title": "On reproduction of On the regularization of Wasserstein GANs" }
null
null
null
null
true
null
3138
null
Default
null
null
null
{ "abstract": " This paper studies density estimation under pointwise loss in the setting of\ncontamination model. The goal is to estimate $f(x_0)$ at some\n$x_0\in\mathbb{R}$ with i.i.d. observations, $$ X_1,\dots,X_n\sim\n(1-\epsilon)f+\epsilon g, $$ where $g$ stands for a contamination distribution.\nIn the context of multiple testing, this can be interpreted as estimating the\nnull density at a point. We carefully study the effect of contamination on\nestimation through the following model indices: contamination proportion\n$\epsilon$, smoothness of target density $\beta_0$, smoothness of contamination\ndensity $\beta_1$, and level of contamination $m$ at the point to be estimated,\ni.e. $g(x_0)\leq m$. It is shown that the minimax rate with respect to the\nsquared error loss is of order $$\n[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee[\epsilon^2(1\wedge\nm)^2]\vee[n^{-\frac{2\beta_1}{2\beta_1+1}}\epsilon^{\frac{2}{2\beta_1+1}}], $$\nwhich characterizes the exact influence of contamination on the difficulty of\nthe problem. We then establish the minimal cost of adaptation to contamination\nproportion, to smoothness and to both of the numbers. It is shown that some\nsmall price needs to be paid for adaptation in any of the three cases.\nVariations of Lepski's method are considered to achieve optimal adaptation.\nThe problem is also studied when there is no smoothness assumption on the\ncontamination distribution. This setting that allows for an arbitrary\ncontamination distribution is recognized as Huber's $\epsilon$-contamination\nmodel. The minimax rate is shown to be $$\n[n^{-\frac{2\beta_0}{2\beta_0+1}}]\vee [\epsilon^{\frac{2\beta_0}{\beta_0+1}}].\n$$ The adaptation theory is also different from the smooth contamination case.\nWhile adaptation to either contamination proportion or smoothness only costs a\nlogarithmic factor, adaptation to both numbers is proved to be impossible.\n", "title": "Density Estimation with Contaminated Data: Minimax Rates and Theory of Adaptation" }
null
null
null
null
true
null
3139
null
Default
null
null
null
{ "abstract": " Let U be the complement of a smooth anticanonical divisor in a del Pezzo\nsurface of degree at most 7 over a number field k. We show that there is an\neffective uniform bound for the size of the Brauer group of U in terms of the\ndegree of k.\n", "title": "A uniform bound on the Brauer groups of certain log K3 surfaces" }
null
null
null
null
true
null
3140
null
Default
null
null
null
{ "abstract": " Model-free deep reinforcement learning has been shown to exhibit good\nperformance in domains ranging from video games to simulated robotic\nmanipulation and locomotion. However, model-free methods are known to perform\npoorly when the interaction time with the environment is limited, as is the\ncase for most real-world robotic tasks. In this paper, we study how maximum\nentropy policies trained using soft Q-learning can be applied to real-world\nrobotic manipulation. The application of this method to real-world manipulation\nis facilitated by two important features of soft Q-learning. First, soft\nQ-learning can learn multimodal exploration strategies by learning policies\nrepresented by expressive energy-based models. Second, we show that policies\nlearned with soft Q-learning can be composed to create new policies, and that\nthe optimality of the resulting policy can be bounded in terms of the\ndivergence between the composed policies. This compositionality provides an\nespecially valuable tool for real-world manipulation, where constructing new\npolicies by composing existing skills can provide a large gain in efficiency\nover training from scratch. Our experimental evaluation demonstrates that soft\nQ-learning is substantially more sample efficient than prior model-free deep\nreinforcement learning methods, and that compositionality can be performed for\nboth simulated and real-world tasks.\n", "title": "Composable Deep Reinforcement Learning for Robotic Manipulation" }
null
null
null
null
true
null
3141
null
Default
null
null
null
{ "abstract": " Temporal cavity solitons (CS) are optical pulses that can persist in passive\nresonators, and they play a key role in the generation of coherent\nmicroresonator frequency combs. In resonators made of amorphous materials, such\nas fused silica, they can exhibit a spectral red-shift due to stimulated Raman\nscattering. Here we show that this Raman-induced self-frequency-shift imposes a\nfundamental limit on the duration and bandwidth of temporal CSs. Specifically,\nwe theoretically predict that stimulated Raman scattering introduces a\npreviously unidentified Hopf bifurcation that leads to destabilization of CSs\nat large pump-cavity detunings, limiting the range of detunings over which they\ncan exist. We have confirmed our theoretical predictions by performing\nextensive experiments in several different synchronously-driven fiber ring\nresonators, obtaining results in excellent agreement with numerical\nsimulations. Our results could have significant implications for the future\ndesign of Kerr frequency comb systems based on amorphous microresonators.\n", "title": "Stimulated Raman Scattering Imposes Fundamental Limits to the Duration and Bandwidth of Temporal Cavity Solitons" }
null
null
[ "Physics" ]
null
true
null
3142
null
Validated
null
null
null
{ "abstract": " New system for i-vector speaker recognition based on variational autoencoder\n(VAE) is investigated. VAE is a promising approach for developing accurate deep\nnonlinear generative models of complex data. Experiments show that VAE provides\nspeaker embedding and can be effectively trained in an unsupervised manner. LLR\nestimate for VAE is developed. Experiments on NIST SRE 2010 data demonstrate\nits correctness. Additionally, we show that the performance of VAE-based system\nin the i-vectors space is close to that of the diagonal PLDA. Several\ninteresting results are also observed in the experiments with $\\beta$-VAE. In\nparticular, we found that for $\\beta\\ll 1$, VAE can be trained to capture the\nfeatures of complex input data distributions in an effective way, which is hard\nto obtain in the standard VAE ($\\beta=1$).\n", "title": "Investigation of Using VAE for i-Vector Speaker Verification" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3143
null
Validated
null
null
null
{ "abstract": " Many machine learning models are reformulated as optimization problems. Thus,\nit is important to solve a large-scale optimization problem in big data\napplications. Recently, subsampled Newton methods have emerged to attract much\nattention for optimization due to their efficiency at each iteration, rectified\na weakness in the ordinary Newton method of suffering a high cost in each\niteration while commanding a high convergence rate. Other efficient stochastic\nsecond order methods are also proposed. However, the convergence properties of\nthese methods are still not well understood. There are also several important\ngaps between the current convergence theory and the performance in real\napplications. In this paper, we aim to fill these gaps. We propose a unifying\nframework to analyze local convergence properties of second order methods.\nBased on this framework, our theoretical analysis matches the performance in\nreal applications.\n", "title": "A Unifying Framework for Convergence Analysis of Approximate Newton Methods" }
null
null
null
null
true
null
3144
null
Default
null
null
null
{ "abstract": " We define a Newman property for BLD-mappings and study its connections to the\nporosity of the branch set in the setting of generalized manifolds equipped\nwith complete path metrics.\n", "title": "A Newman property for BLD-mappings" }
null
null
null
null
true
null
3145
null
Default
null
null
null
{ "abstract": " In this paper, we study a variant of the framework of online learning using\nexpert advice with limited/bandit feedback. We consider each expert as a\nlearning entity, seeking to more accurately reflecting certain real-world\napplications. In our setting, the feedback at any time $t$ is limited in a\nsense that it is only available to the expert $i^t$ that has been selected by\nthe central algorithm (forecaster), \\emph{i.e.}, only the expert $i^t$ receives\nfeedback from the environment and gets to learn at time $t$. We consider a\ngeneric black-box approach whereby the forecaster does not control or know the\nlearning dynamics of the experts apart from knowing the following no-regret\nlearning property: the average regret of any expert $j$ vanishes at a rate of\nat least $O(t_j^{\\regretRate-1})$ with $t_j$ learning steps where $\\regretRate\n\\in [0, 1]$ is a parameter.\nIn the spirit of competing against the best action in hindsight in\nmulti-armed bandits problem, our goal here is to be competitive w.r.t. the\ncumulative losses the algorithm could receive by following the policy of always\nselecting one expert. We prove the following hardness result: without any\ncoordination between the forecaster and the experts, it is impossible to design\na forecaster achieving no-regret guarantees. In order to circumvent this\nhardness result, we consider a practical assumption allowing the forecaster to\n\"guide\" the learning process of the experts by filtering/blocking some of the\nfeedbacks observed by them from the environment, \\emph{i.e.}, not allowing the\nselected expert $i^t$ to learn at time $t$ for some time steps. Then, we design\na novel no-regret learning algorithm \\algo for this problem setting by\ncarefully guiding the feedbacks observed by experts. We prove that \\algo\nachieves the worst-case expected cumulative regret of $O(\\Time^\\frac{1}{2 -\n\\regretRate})$ after $\\Time$ time steps.\n", "title": "Learning to Use Learners' Advice" }
null
null
null
null
true
null
3146
null
Default
null
null
null
{ "abstract": " Time series (TS) occur in many scientific and commercial applications,\nranging from earth surveillance to industry automation to the smart grids. An\nimportant type of TS analysis is classification, which can, for instance,\nimprove energy load forecasting in smart grids by detecting the types of\nelectronic devices based on their energy consumption profiles recorded by\nautomatic sensors. Such sensor-driven applications are very often characterized\nby (a) very long TS and (b) very large TS datasets needing classification.\nHowever, current methods to time series classification (TSC) cannot cope with\nsuch data volumes at acceptable accuracy; they are either scalable but offer\nonly inferior classification quality, or they achieve state-of-the-art\nclassification quality but cannot scale to large data volumes.\nIn this paper, we present WEASEL (Word ExtrAction for time SEries\ncLassification), a novel TSC method which is both scalable and accurate. Like\nother state-of-the-art TSC methods, WEASEL transforms time series into feature\nvectors, using a sliding-window approach, which are then analyzed through a\nmachine learning classifier. The novelty of WEASEL lies in its specific method\nfor deriving features, resulting in a much smaller yet much more discriminative\nfeature set. On the popular UCR benchmark of 85 TS datasets, WEASEL is more\naccurate than the best current non-ensemble algorithms at orders-of-magnitude\nlower classification and training times, and it is almost as accurate as\nensemble classifiers, whose computational complexity makes them inapplicable\neven for mid-size datasets. The outstanding robustness of WEASEL is also\nconfirmed by experiments on two real smart grid datasets, where it\nout-of-the-box achieves almost the same accuracy as highly tuned,\ndomain-specific methods.\n", "title": "Fast and Accurate Time Series Classification with WEASEL" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3147
null
Validated
null
null
null
{ "abstract": " In this work, we investigate the value of employing deep learning for the\ntask of wireless signal modulation recognition. Recently in [1], a framework\nhas been introduced by generating a dataset using GNU radio that mimics the\nimperfections in a real wireless channel, and uses 10 different modulation\ntypes. Further, a convolutional neural network (CNN) architecture was developed\nand shown to deliver performance that exceeds that of expert-based approaches.\nHere, we follow the framework of [1] and find deep neural network architectures\nthat deliver higher accuracy than the state of the art. We tested the\narchitecture of [1] and found it to achieve an accuracy of approximately 75% of\ncorrectly recognizing the modulation type. We first tune the CNN architecture\nof [1] and find a design with four convolutional layers and two dense layers\nthat gives an accuracy of approximately 83.8% at high SNR. We then develop\narchitectures based on the recently introduced ideas of Residual Networks\n(ResNet [2]) and Densely Connected Networks (DenseNet [3]) to achieve high SNR\naccuracies of approximately 83.5% and 86.6%, respectively. Finally, we\nintroduce a Convolutional Long Short-term Deep Neural Network (CLDNN [4]) to\nachieve an accuracy of approximately 88.5% at high SNR.\n", "title": "Deep Neural Network Architectures for Modulation Classification" }
null
null
null
null
true
null
3148
null
Default
null
null
null
{ "abstract": " Online programming discussion platforms such as Stack Overflow serve as a\nrich source of information for software developers. Available information\ninclude vibrant discussions and oftentimes ready-to-use code snippets.\nAnecdotes report that software developers copy and paste code snippets from\nthose information sources for convenience reasons. Such behavior results in a\nconstant flow of community-provided code snippets into production software. To\ndate, the impact of this behaviour on code security is unknown. We answer this\nhighly important question by quantifying the proliferation of security-related\ncode snippets from Stack Overflow in Android applications available on Google\nPlay. Access to the rich source of information available on Stack Overflow\nincluding ready-to-use code snippets provides huge benefits for software\ndevelopers. However, when it comes to code security there are some caveats to\nbear in mind: Due to the complex nature of code security, it is very difficult\nto provide ready-to-use and secure solutions for every problem. Hence,\nintegrating a security-related code snippet from Stack Overflow into production\nsoftware requires caution and expertise. Unsurprisingly, we observed insecure\ncode snippets being copied into Android applications millions of users install\nfrom Google Play every day. To quantitatively evaluate the extent of this\nobservation, we scanned Stack Overflow for code snippets and evaluated their\nsecurity score using a stochastic gradient descent classifier. In order to\nidentify code reuse in Android applications, we applied state-of-the-art static\nanalysis. Our results are alarming: 15.4% of the 1.3 million Android\napplications we analyzed, contained security-related code snippets from Stack\nOverflow. Out of these 97.9% contain at least one insecure code snippet.\n", "title": "Stack Overflow Considered Harmful? The Impact of Copy&Paste on Android Application Security" }
null
null
null
null
true
null
3149
null
Default
null
null
null
{ "abstract": " Prior works, such as the Tallinn manual on the international law applicable\nto cyber warfare, focus on the circumstances of cyber warfare. Many\norganizations are considering how to conduct cyber warfare, but few have\ndiscussed methods to reduce, or even prevent, cyber conflict. A recent series\nof publications started developing the framework of Cyber Peacekeeping (CPK)\nand its legal requirements. These works assessed the current state of\norganizations such as ITU IMPACT, NATO CCDCOE and Shanghai Cooperation\nOrganization, and found that they did not satisfy requirements to effectively\nhost CPK activities. An assessment of organizations currently working in the\nareas related to CPK found that the United Nations (UN) has mandates and\norganizational structures that appear to somewhat overlap the needs of CPK.\nHowever, the UN's current approach to Peacekeeping cannot be directly mapped to\ncyberspace. In this research we analyze the development of traditional\nPeacekeeping in the United Nations, and current initiatives in cyberspace.\nSpecifically, we will compare the proposed CPK framework with the recent\ninitiative of the United Nations named the 'Digital Blue Helmets' as well as\nwith other projects in the UN which helps to predict and mitigate conflicts.\nOur goal is to find practical recommendations for the implementation of the CPK\nframework in the United Nations, and to examine how responsibilities defined in\nthe CPK framework overlap with those of the 'Digital Blue Helmets' and the\nGlobal Pulse program.\n", "title": "United Nations Digital Blue Helmets as a Starting Point for Cyber Peacekeeping" }
null
null
null
null
true
null
3150
null
Default
null
null
null
{ "abstract": " We observe that derived equivalent K3 surfaces have isomorphic Chow motives.\n", "title": "Motives of derived equivalent K3 surfaces" }
null
null
null
null
true
null
3151
null
Default
null
null
null
{ "abstract": " Motivated by recent advance of machine learning using Deep Reinforcement\nLearning this paper proposes a modified architecture that produces more robust\nagents and speeds up the training process. Our architecture is based on\nAsynchronous Advantage Actor-Critic (A3C) algorithm where the total input\ndimensionality is halved by dividing the input into two independent streams. We\nuse ViZDoom, 3D world software that is based on the classical first person\nshooter video game, Doom, as a test case. The experiments show that in\ncomparison to single input agents, the proposed architecture succeeds to have\nthe same playing performance and shows more robust behavior, achieving\nsignificant reduction in the number of training parameters of almost 30%.\n", "title": "Robust Dual View Deep Agent" }
null
null
null
null
true
null
3152
null
Default
null
null
null
{ "abstract": " Microplasma generation using microwaves in an electromagnetically induced\ntransparency (EIT)-like metasurface composed of two types of radiatively\ncoupled cut-wire resonators with slightly different resonance frequencies is\ninvestigated. Microplasma is generated in either of the gaps of the cut-wire\nresonators as a result of strong enhancement of the local electric field\nassociated with resonance and slow microwave effect. The threshold microwave\npower for plasma ignition is found to reach a minimum at the EIT-like\ntransmission peak frequency, where the group index is maximized. A pump-probe\nmeasurement of the metasurface reveals that the transmission properties can be\nsignificantly varied by varying the properties of the generated microplasma\nnear the EIT-like transmission peak frequency and the resonance frequency. The\nelectron density of the microplasma is roughly estimated to be of order\n$1\\times 10^{10}\\,\\mathrm{cm}^{-3}$ for a pump power of $15.8\\,\\mathrm{W}$ by\ncomparing the measured transmission spectrum for the probe wave with the\nnumerically calculated spectrum. In the calculation, we assumed that the plasma\nis uniformly generated in the resonator gap, that the electron temperature is\n$2\\,\\mathrm{eV}$, and that the elastic scattering cross section is $20 \\times\n10^{-16}\\,\\mathrm{cm}^2$.\n", "title": "Microplasma generation by slow microwave in an electromagnetically induced transparency-like metasurface" }
null
null
null
null
true
null
3153
null
Default
null
null
null
{ "abstract": " We investigate Banach algebras of convolution operators on the $L^p$ spaces\nof a locally compact group, and their K-theory. We show that for a discrete\ngroup, the corresponding K-theory groups depend continuously on $p$ in an\ninductive sense. Via a Banach version of property RD, we show that for a large\nclass of groups, the K-theory groups of the Banach algebras are independent of\n$p$.\n", "title": "K-theory of group Banach algebras and Banach property RD" }
null
null
null
null
true
null
3154
null
Default
null
null
null
{ "abstract": " Spiking neural networks (SNNs) possess energy-efficient potential due to\nevent-based computation. However, supervised training of SNNs remains a\nchallenge, as spike activities are non-differentiable. Previous SNN training\nmethods can broadly be categorized into two classes: backpropagation-like\ntraining methods and plasticity-based learning methods. The former are\ndependent on energy-inefficient real-valued computation and non-local\ntransmission, as also required in artificial neural networks (ANNs), while the\nlatter are either considered biologically implausible or exhibit poor\nperformance. Hence, biologically plausible (bio-plausible) high-performance\nsupervised learning (SL) methods for SNNs remain lacking. In this paper, we\npropose a novel bio-plausible SNN model for SL based on the symmetric\nspike-timing dependent plasticity (sym-STDP) rule found in neuroscience. By\ncombining the sym-STDP rule with bio-plausible synaptic scaling and intrinsic\nplasticity of the dynamic threshold, our SNN model implements SL well and\nachieves good performance on the benchmark recognition task (MNIST). To reveal\nthe underlying mechanism of our SL model, we visualized both layer-based\nactivities and synaptic weights using the t-distributed stochastic neighbor\nembedding (t-SNE) method after training and found that they were well\nclustered, demonstrating excellent classification ability. As the learning\nrules are bio-plausible and based purely on local spike events, our model could\nbe easily applied to neuromorphic hardware for online training and may be\nhelpful for understanding SL information processing at the synaptic level in\nbiological neural systems.\n", "title": "A Biologically Plausible Supervised Learning Method for Spiking Neural Networks Using the Symmetric STDP Rule" }
null
null
null
null
true
null
3155
null
Default
null
null
null
{ "abstract": " Suicide is an important but often misunderstood problem, one that researchers\nare now seeking to better understand through social media. Due in large part to\nthe fuzzy nature of what constitutes suicidal risks, most supervised approaches\nfor learning to automatically detect suicide-related activity in social media\nrequire a great deal of human labor to train. However, humans themselves have\ndiverse or conflicting views on what constitutes suicidal thoughts. So how to\nobtain reliable gold standard labels is fundamentally challenging and, we\nhypothesize, depends largely on what is asked of the annotators and what slice\nof the data they label. We conducted multiple rounds of data labeling and\ncollected annotations from crowdsourcing workers and domain experts. We\naggregated the resulting labels in various ways to train a series of supervised\nmodels. Our preliminary evaluations show that using unanimously agreed labels\nfrom multiple annotators is helpful to achieve robust machine models.\n", "title": "Learning from various labeling strategies for suicide-related messages on social media: An experimental study" }
null
null
[ "Computer Science" ]
null
true
null
3156
null
Validated
null
null
null
{ "abstract": " This paper proposes a principled information theoretic analysis of\nclassification for deep neural network structures, e.g. convolutional neural\nnetworks (CNN). The output of convolutional filters is modeled as a random\nvariable Y conditioned on the object class C and network filter bank F. The\nconditional entropy (CENT) H(Y |C,F) is shown in theory and experiments to be a\nhighly compact and class-informative code, that can be computed from the filter\noutputs throughout an existing CNN and used to obtain higher classification\nresults than the original CNN itself. Experiments demonstrate the effectiveness\nof CENT feature analysis in two separate CNN classification contexts. 1) In the\nclassification of neurodegeneration due to Alzheimer's disease (AD) and natural\naging from 3D magnetic resonance image (MRI) volumes, 3 CENT features result in\nan AUC=94.6% for whole-brain AD classification, the highest reported accuracy\non the public OASIS dataset used and 12% higher than the softmax output of the\noriginal CNN trained for the task. 2) In the context of visual object\nclassification from 2D photographs, transfer learning based on a small set of\nCENT features identified throughout an existing CNN leads to AUC values\ncomparable to the 1000-feature softmax output of the original network when\nclassifying previously unseen object categories. The general information\ntheoretical analysis explains various recent CNN design successes, e.g. densely\nconnected CNN architectures, and provides insights for future research\ndirections in deep learning.\n", "title": "Modeling Information Flow Through Deep Neural Networks" }
null
null
null
null
true
null
3157
null
Default
null
null
null
{ "abstract": " Twin Support Vector Machines (TWSVMs) have emerged as an efficient\nalternative to Support Vector Machines (SVMs) for learning from imbalanced\ndatasets. The TWSVM learns two non-parallel classifying hyperplanes by solving\na pair of smaller-sized problems. However, it is unsuitable for large datasets,\nas it involves matrix operations. In this paper, we discuss a Twin Neural\nNetwork (Twin NN) architecture for learning from large unbalanced datasets. The\nTwin NN also learns an optimal feature map, allowing for better discrimination\nbetween classes. We also present an extension of this network architecture for\nmulticlass datasets. Results presented in the paper demonstrate that the Twin\nNN generalizes well and scales well on large unbalanced datasets.\n", "title": "Scalable Twin Neural Networks for Classification of Unbalanced Data" }
null
null
null
null
true
null
3158
null
Default
null
null
null
{ "abstract": " Recurrent Neural Networks (RNNs) are powerful sequence modeling tools.\nHowever, when dealing with high dimensional inputs, the training of RNNs\nbecomes computational expensive due to the large number of model parameters.\nThis hinders RNNs from solving many important computer vision tasks, such as\nAction Recognition in Videos and Image Captioning. To overcome this problem, we\npropose a compact and flexible structure, namely Block-Term tensor\ndecomposition, which greatly reduces the parameters of RNNs and improves their\ntraining efficiency. Compared with alternative low-rank approximations, such as\ntensor-train RNN (TT-RNN), our method, Block-Term RNN (BT-RNN), is not only\nmore concise (when using the same rank), but also able to attain a better\napproximation to the original RNNs with much fewer parameters. On three\nchallenging tasks, including Action Recognition in Videos, Image Captioning and\nImage Generation, BT-RNN outperforms TT-RNN and the standard RNN in terms of\nboth prediction accuracy and convergence rate. Specifically, BT-LSTM utilizes\n17,388 times fewer parameters than the standard LSTM to achieve an accuracy\nimprovement over 15.6\\% in the Action Recognition task on the UCF11 dataset.\n", "title": "Learning Compact Recurrent Neural Networks with Block-Term Tensor Decomposition" }
null
null
null
null
true
null
3159
null
Default
null
null
null
{ "abstract": " In this paper, we propose deep learning architectures (FNN, CNN and LSTM) to\nforecast a regression model for time-dependent data. These algorithms are\ndesigned to handle historic Floating Car Data (FCD) speeds to predict road\ntraffic data. For this, we aggregate the speeds into the network inputs in an\ninnovative way. We compare the RMSE thus obtained with the results of a simpler\nphysical model, and show that the latter achieves better RMSE accuracy. We also\npropose a new indicator, which evaluates the algorithms' improvement when\ncompared to a benchmark prediction. We conclude by questioning the interest of\nusing deep learning methods for this specific regression task.\n", "title": "Deep Learning applied to Road Traffic Speed forecasting" }
null
null
null
null
true
null
3160
null
Default
null
null
null
{ "abstract": " Self-admitted technical debt refers to situations where a software developer\nknows that their current implementation is not optimal and indicates this using\na source code comment. In this work, we hypothesize that it is possible to\ndevelop automated techniques to understand a subset of these comments in more\ndetail, and to propose tool support that can help developers manage\nself-admitted technical debt more effectively. Based on a qualitative study of\n335 comments indicating self-admitted technical debt, we first identify one\nparticular class of debt amenable to automated management: \"on-hold\"\nself-admitted technical debt, i.e., debt which contains a condition to indicate\nthat a developer is waiting for a certain event or an updated functionality\nhaving been implemented elsewhere. We then design and evaluate an automated\nclassifier which can automatically identify these \"on-hold\" instances with a\nprecision of 0.81 as well as detect the specific conditions that developers are\nwaiting for. Our work presents a first step towards automated tool support that\nis able to indicate when certain instances of self-admitted technical debt are\nready to be addressed.\n", "title": "Wait For It: Identifying \"On-Hold\" Self-Admitted Technical Debt" }
null
null
null
null
true
null
3161
null
Default
null
null
null
{ "abstract": " A better understanding and anticipation of natural processes such as\nlandsliding or seismic fault activity requires detailed theoretical and\nexperimental analysis of rock mechanics and geomaterial dynamics. In recent\ndecades, considerable progress has been made towards understanding deformation\nand fracture processes in laboratory experiments on granular rock materials,\nsuch as the well-known shear banding experiment. One of the reasons for this\nprogress is the continuous improvement in the instrumental techniques of\nobservation. But the lack of real-time methods does not allow the detection of\nindicators of the upcoming fracture process and thus the anticipation of the\nphenomenon. Here, we have performed uniaxial compression experiments to analyse\nthe response of a granular rock material sample to different shocks. We use a\nnovel interferometric laser sensor based on the nonlinear self-mixing\ninterferometry technique to observe in real time the deformations of the sample\nand assess its usefulness as a diagnostic tool for the analysis of geomaterial\ndynamics. Due to the high spatial and temporal resolution of this approach, we\nobserve both vibration processes in response to dynamic loading and the onset\nof failure. The latter is preceded by a continuous variation of the vibration\nperiod of the material. After several shocks, the material response is no\nlonger reversible and we detect a progressive accumulation of irreversible\ndeformation leading to the fracture process. We demonstrate that material\nfailure is anticipated by the critical slowing down of the surface vibrational\nmotion, which may therefore be envisioned as an early warning signal or\npredictor of the macroscopic failure of the sample. The nonlinear self-mixing\ninterferometry technique is readily extensible to fault propagation\nmeasurements. As such, it opens a new window of observation for the study of\ngeomaterial deformation and failure.\n", "title": "Real time observation of granular rock analogue material deformation and failure using nonlinear laser interferometry" }
null
null
null
null
true
null
3162
null
Default
null
null
null
{ "abstract": " For a given $(X,S,\\beta)$, where $S,\\beta\\colon X\\times X\\to X\\times X$ are\nset theoretical solutions of Yang-Baxter equation with a compatibility\ncondition, we define an invariant for virtual (or classical) knots/links using\nnon commutative 2-cocycles pairs $(f,g)$ that generalizes the one defined in\n[FG2]. We also define, a group $U_{nc}^{fg}=U_{nc}^{fg}(X,S,\\beta)$ and\nfunctions $\\pi_f, \\pi_g\\colon X\\times X\\to U_{nc}^{fg}(X)$ governing all\n2-cocycles in $X$. We exhibit examples of computations achieved using GAP.\n", "title": "Virtual link and knot invariants from non-abelian Yang-Baxter 2-cocycle pairs" }
null
null
null
null
true
null
3163
null
Default
null
null
null
{ "abstract": " We present a new extensible and divisible taxonomy for open set sound scene\nanalysis. This new model allows complex scene analysis with tangible\ndescriptors and perception labels. Its novel structure is a cluster graph such\nthat each cluster (or subset) can stand alone for targeted analyses such as\noffice sound event detection, whilst maintaining integrity over the whole graph\n(superset) of labels. The key design benefit is its extensibility as new labels\nare needed during new data capture. Furthermore, datasets which use the same\ntaxonomy are easily augmented, saving future data collection effort. We balance\nthe details needed for complex scene analysis with avoiding 'the taxonomy of\neverything' with our framework to ensure no duplicity in the superset of labels\nand demonstrate this with DCASE challenge classifications.\n", "title": "An extensible cluster-graph taxonomy for open set sound scene analysis" }
null
null
null
null
true
null
3164
null
Default
null
null
null
{ "abstract": " The CREATE database is composed of 14 hours of multimodal recordings from a\nmobile robotic platform based on the iRobot Create. The various sensors cover\nvision, audition, motors and proprioception. The dataset has been designed in\nthe context of a mobile robot that can learn multimodal representations of its\nenvironment, thanks to its ability to navigate the environment. This ability\ncan also be used to learn the dependencies and relationships between the\ndifferent modalities of the robot (e.g. vision, audition), as they reflect both\nthe external environment and the internal state of the robot. The provided\nmultimodal dataset is expected to have multiple usages, such as multimodal\nunsupervised object learning, multimodal prediction and egomotion/causality\ndetection.\n", "title": "CREATE: Multimodal Dataset for Unsupervised Learning, Generative Modeling and Prediction of Sensory Data from a Mobile Robot in Indoor Environments" }
null
null
[ "Computer Science" ]
null
true
null
3165
null
Validated
null
null
null
{ "abstract": " Mobile-phones have facilitated the creation of field-portable, cost-effective\nimaging and sensing technologies that approach laboratory-grade instrument\nperformance. However, the optical imaging interfaces of mobile-phones are not\ndesigned for microscopy and produce spatial and spectral distortions in imaging\nmicroscopic specimens. Here, we report on the use of deep learning to correct\nsuch distortions introduced by mobile-phone-based microscopes, facilitating the\nproduction of high-resolution, denoised and colour-corrected images, matching\nthe performance of benchtop microscopes with high-end objective lenses, also\nextending their limited depth-of-field. After training a convolutional neural\nnetwork, we successfully imaged various samples, including blood smears,\nhistopathology tissue sections, and parasites, where the recorded images were\nhighly compressed to ease storage and transmission for telemedicine\napplications. This method is applicable to other low-cost, aberrated imaging\nsystems, and could offer alternatives for costly and bulky microscopes, while\nalso providing a framework for standardization of optical images for clinical\nand biomedical applications.\n", "title": "Deep learning enhanced mobile-phone microscopy" }
null
null
null
null
true
null
3166
null
Default
null
null
null
{ "abstract": " In this paper, we consider an estimation problem concerning the matrix of\ncorrelation coefficients in the context of high-dimensional data settings. In\nparticular, we revisit some results in Li and Rolsalsky [Li, D. and Rolsalsky,\nA. (2006). Some strong limit theorems for the largest entries of sample\ncorrelation matrices, The Annals of Applied Probability, 16, 1, 423-447]. Four\nof the main theorems of Li and Rolsalsky (2006) are established in their full\ngenerality, and we substantially simplify some proofs of the quoted paper.\nFurther, we generalize a theorem which is useful in deriving the existence of\nthe pth moment as well as in studying the convergence rates in laws of large\nnumbers.\n", "title": "On convergence of the sample correlation matrices in high-dimensional data" }
null
null
null
null
true
null
3167
null
Default
null
null
null
{ "abstract": " Given a conjunctive Boolean network (CBN) with $n$ state-variables, we\nconsider the problem of finding a minimal set of state-variables to directly\naffect with an input so that the resulting conjunctive Boolean control network\n(CBCN) is controllable. We give a necessary and sufficient condition for\ncontrollability of a CBCN; an $O(n^2)$-time algorithm for testing\ncontrollability; and prove that nonetheless the minimal controllability problem\nfor CBNs is NP-hard.\n", "title": "Minimal Controllability of Conjunctive Boolean Networks is NP-Complete" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
3168
null
Validated
null
null
null
{ "abstract": " Starting from a summary of detection statistics of our recent X-shooter\ncampaign, we review the major surveys, both space and ground based, for\nemission counterparts of high-redshift damped Ly$\\alpha$ absorbers (DLAs)\ncarried out since the first detection 25 years ago. We show that the detection\nrates of all surveys are precisely reproduced by a simple model in which the\nmetallicity and luminosity of the galaxy associated to the DLA follow a\nrelation of the form, ${\\rm M_{UV}} = -5 \\times \\left(\\,[{\\rm M/H}] + 0.3\\,\n\\right) - 20.8$, and the DLA cross-section follows a relation of the form\n$\\sigma_{DLA} \\propto L^{0.8}$. Specifically, our spectroscopic campaign\nconsists of 11 DLAs preselected based on their equivalent width of SiII\n$\\lambda1526$ to have a metallicity higher than [Si/H] > -1. The targets have\nbeen observed with the X-shooter spectrograph at the Very Large Telescope to\nsearch for emission lines around the quasars. We observe a high detection rate\nof 64% (7/11), significantly higher than the typical $\\sim$10% for random,\nHI-selected DLA samples. We use the aforementioned model, to simulate the\nresults of our survey together with a range of previous surveys: spectral\nstacking, direct imaging (using the `double DLA' technique), long-slit\nspectroscopy, and integral field spectroscopy. Based on our model results, we\nare able to reconcile all results. Some tension is observed between model and\ndata when looking at predictions of Ly$\\alpha$ emission for individual targets.\nHowever, the object to object variations are most likely a result of the\nsignificant scatter in the underlying scaling relations as well as\nuncertainties in the amount of dust which affects the emission.\n", "title": "Consensus report on 25 years of searches for damped Ly$α$ galaxies in emission: Confirming their metallicity-luminosity relation at $z \\gtrsim 2$" }
null
null
[ "Physics" ]
null
true
null
3169
null
Validated
null
null
null
{ "abstract": " Complex Ornstein-Uhlenbeck (OU) processes have various applications in\nstatistical modelling. They play a role, e.g., in the description of the motion\nof a charged test particle in a constant magnetic field or in the study of\nrotating waves in time-dependent reaction-diffusion systems, whereas Kolmogorov\nused such a process to model the so-called Chandler wobble, a small deviation\nin the Earth's axis of rotation. In these applications, parameter estimation\nand model fitting are based on discrete observations of the underlying\nstochastic process; however, the accuracy of the results strongly depends on\nthe observation points.\nThis paper studies the properties of D-optimal designs for estimating the\nparameters of a complex OU process with a trend. We show that, in contrast with\nthe case of the classical real OU process, a D-optimal design exists not only\nfor the trend parameter, but also for joint estimation of the covariance\nparameters; moreover, these optimal designs are equidistant.\n", "title": "D-optimal designs for complex Ornstein-Uhlenbeck processes" }
null
null
null
null
true
null
3170
null
Default
null
null
null
{ "abstract": " We consider estimating the parametric components of semi-parametric multiple\nindex models in a high-dimensional and non-Gaussian setting. Such models form a\nrich class of non-linear models with applications to signal processing, machine\nlearning and statistics. Our estimators leverage the score function based first\nand second-order Stein's identities and do not require the covariates to\nsatisfy Gaussian or elliptical symmetry assumptions common in the literature.\nMoreover, to handle score functions and responses that are heavy-tailed, our\nestimators are constructed via carefully thresholding their empirical\ncounterparts. We show that our estimator achieves near-optimal statistical rate\nof convergence in several settings. We supplement our theoretical results via\nsimulation experiments that confirm the theory.\n", "title": "On Stein's Identity and Near-Optimal Estimation in High-dimensional Index Models" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
3171
null
Validated
null
null
null
{ "abstract": " We define Dirac operators on $\\mathbb{S}^3$ (and $\\mathbb{R}^3$) with\nmagnetic fields supported on smooth, oriented links and prove self-adjointness\nof certain (natural) extensions. We then analyze their spectral properties and\nshow, among other things, that these operators have discrete spectrum. Certain\nexamples, such as circles in $\\mathbb{S}^3$, are investigated in detail and we\ncompute the dimension of the zero-energy eigenspace.\n", "title": "Self-adjointness and spectral properties of Dirac operators with magnetic links" }
null
null
null
null
true
null
3172
null
Default
null
null
null
{ "abstract": " Latent tree models are graphical models defined on trees, in which only a\nsubset of variables is observed. They were first discussed by Judea Pearl as\ntree-decomposable distributions to generalise star-decomposable distributions\nsuch as the latent class model. Latent tree models, or their submodels, are\nwidely used in: phylogenetic analysis, network tomography, computer vision,\ncausal modeling, and data clustering. They also contain other well-known\nclasses of models like hidden Markov models, Brownian motion tree model, the\nIsing model on a tree, and many popular models used in phylogenetics. This\narticle offers a concise introduction to the theory of latent tree models. We\nemphasise the role of tree metrics in the structural description of this model\nclass, in designing learning algorithms, and in understanding fundamental\nlimits of what and when can be learned.\n", "title": "Latent tree models" }
null
null
null
null
true
null
3173
null
Default
null
null
null
{ "abstract": " Program synthesis is the process of automatically translating a specification\ninto computer code. Traditional synthesis settings require a formal, precise\nspecification. Motivated by computer education applications where a student\nlearns to code simple turtle-style drawing programs, we study a novel synthesis\nsetting where only a noisy user-intention drawing is specified. This allows\nstudents to sketch their intended output, optionally together with their own\nincomplete program, to automatically produce a completed program. We formulate\nthis synthesis problem as search in the space of programs, with the score of a\nstate being the Hausdorff distance between the program output and the user\ndrawing. We compare several search algorithms on a corpus consisting of real\nuser drawings and the corresponding programs, and demonstrate that our\nalgorithms can synthesize programs optimally satisfying the specification.\n", "title": "Program Synthesis from Visual Specification" }
null
null
null
null
true
null
3174
null
Default
null
null
null
{ "abstract": " Applications which use human speech as an input require a speech interface\nwith high recognition accuracy. The words or phrases in the recognised text are\nannotated with a machine-understandable meaning and linked to knowledge graphs\nfor further processing by the target application. These semantic annotations of\nrecognised words can be represented as subject-predicate-object triples which\ncollectively form a graph often referred to as a knowledge graph. This type of\nknowledge representation makes it possible to use speech interfaces with any\nspoken-input application, since the information is represented in a logical,\nsemantic form, and retrieval and storage can be performed using standard web\nquery languages. In this work, we develop a methodology for linking speech\ninput to knowledge graphs and study the impact of recognition errors on the\noverall process. We show that for a corpus with lower WER, the annotation and\nlinking of entities to the DBpedia knowledge graph is considerable. DBpedia\nSpotlight, a tool to interlink text documents with linked open data, is used to\nlink the speech recognition output to the DBpedia knowledge graph. Such a\nknowledge-based speech recognition interface is useful for applications such as\nquestion answering or spoken dialog systems.\n", "title": "Towards a Knowledge Graph based Speech Interface" }
null
null
null
null
true
null
3175
null
Default
null
null
null
{ "abstract": " Let $G$ be a sofic group, and let $\\Sigma = (\\sigma_n)_{n\\geq 1}$ be a sofic\napproximation to it. For a probability-preserving $G$-system, a variant of the\nsofic entropy relative to $\\Sigma$ has recently been defined in terms of\nsequences of measures on its model spaces that `converge' to the system in a\ncertain sense. Here we prove that, in order to study this notion, one may\nrestrict attention to those sequences that have the asymptotic equipartition\nproperty. This may be seen as a relative of the Shannon--McMillan theorem in\nthe sofic setting.\nWe also give some first applications of this result, including a new formula\nfor the sofic entropy of a $(G\\times H)$-system obtained by co-induction from a\n$G$-system, where $H$ is any other infinite sofic group.\n", "title": "An asymptotic equipartition property for measures on model spaces" }
null
null
null
null
true
null
3176
null
Default
null
null
null
{ "abstract": " Visual localization and mapping is a crucial capability to address many\nchallenges in mobile robotics. It constitutes a robust, accurate and\ncost-effective approach for local and global pose estimation within prior maps.\nYet, in highly dynamic environments, like crowded city streets, problems arise\nas major parts of the image can be covered by dynamic objects. Consequently,\nvisual odometry pipelines often diverge and the localization systems\nmalfunction as detected features are not consistent with the precomputed 3D\nmodel. In this work, we present an approach to automatically detect dynamic\nobject instances to improve the robustness of vision-based localization and\nmapping in crowded environments. By training a convolutional neural network\nmodel with a combination of synthetic and real-world data, dynamic object\ninstance masks are learned in a semi-supervised way. The real-world data can be\ncollected with a standard camera and requires minimal further post-processing.\nOur experiments show that a wide range of dynamic objects can be reliably\ndetected using the presented method. Promising performance is demonstrated on\nour own and also publicly available datasets, which also shows the\ngeneralization capabilities of this approach.\n", "title": "Dynamic Objects Segmentation for Visual Localization in Urban Environments" }
null
null
null
null
true
null
3177
null
Default
null
null
null
{ "abstract": " A key problem in research on adversarial examples is that vulnerability to\nadversarial examples is usually measured by running attack algorithms. Because\nthe attack algorithms are not optimal, the attack algorithms are prone to\noverestimating the size of perturbation needed to fool the target model. In\nother words, the attack-based methodology provides an upper-bound on the size\nof a perturbation that will fool the model, but security guarantees require a\nlower bound. CLEVER is a proposed scoring method to estimate a lower bound.\nUnfortunately, an estimate of a bound is not a bound. In this report, we show\nthat gradient masking, a common problem that causes attack methodologies to\nprovide only a very loose upper bound, causes CLEVER to overestimate the size\nof perturbation needed to fool the model. In other words, CLEVER does not\nresolve the key problem with the attack-based methodology, because it fails to\nprovide a lower bound.\n", "title": "Gradient Masking Causes CLEVER to Overestimate Adversarial Perturbation Size" }
null
null
null
null
true
null
3178
null
Default
null
null
null
{ "abstract": " While the analysis of airborne laser scanning (ALS) data often provides\nreliable estimates for certain forest stand attributes -- such as total volume\nor basal area -- there is still room for improvement, especially in estimating\nspecies-specific attributes. Moreover, while information on the estimate\nuncertainty would be useful in various economic and environmental analyses on\nforests, a computationally feasible framework for uncertainty quantifying in\nALS is still missing. In this article, the species-specific stand attribute\nestimation and uncertainty quantification (UQ) is approached using Gaussian\nprocess regression (GPR), which is a nonlinear and nonparametric machine\nlearning method. Multiple species-specific stand attributes are estimated\nsimultaneously: tree height, stem diameter, stem number, basal area, and stem\nvolume. The cross-validation results show that GPR yields on average an\nimprovement of 4.6\\% in estimate RMSE over a state-of-the-art k-nearest\nneighbors (kNN) implementation, negligible bias and well performing UQ\n(credible intervals), while being computationally fast. The performance\nadvantage over kNN and the feasibility of credible intervals persists even when\nsmaller training sets are used.\n", "title": "Gaussian process regression for forest attribute estimation from airborne laser scanning data" }
null
null
null
null
true
null
3179
null
Default
null
null
null
{ "abstract": " We study the ground-state properties of a double layer graphene system with\nthe Coulomb interlayer electron-electron interaction modeled within the random\nphase approximation. We first obtain an expression of the quantum capacitance\nof a two layer system. In addition, we calculate the many-body\nexchange-correlation energy and quantum capacitance of the hybrid double layer\ngraphene system at zero-temperature. We show an enhancement of the majority\ndensity layer thermodynamic density-of-states owing to an increasing interlayer\ninteraction between two layers near the Dirac point. The quantum capacitance\nnear the neutrality point behaves like square root of the total density,\n$\\alpha \\sqrt{n}$, where the coefficient $\\alpha$ decreases by increasing the\ncharge density imbalance between two layers. Furthermore, we show that the\nquantum capacitance changes linearly by the gate voltage. Our results can be\nverified by current experiments.\n", "title": "Quantum capacitance of double-layer graphene" }
null
null
null
null
true
null
3180
null
Default
null
null
null
{ "abstract": " We propose a novel approach for loss reserving based on deep neural networks.\nThe approach allows for jointly modeling of paid losses and claims outstanding,\nand incorporation of heterogenous inputs. We validate the models on loss\nreserving data across lines of business, and show that they attain or exceed\nthe predictive accuracy of existing stochastic methods. The models require\nminimal feature engineering and expert input, and can be automated to produce\nforecasts at a high frequency.\n", "title": "DeepTriangle: A Deep Learning Approach to Loss Reserving" }
null
null
null
null
true
null
3181
null
Default
null
null
null
{ "abstract": " In this paper a new distributed asynchronous algorithm is proposed for time\nsynchronization in networks with random communication delays, measurement noise\nand communication dropouts. Three different types of the drift correction\nalgorithm are introduced, based on different kinds of local time increments.\nUnder nonrestrictive conditions concerning network properties, it is proved\nthat all the algorithm types provide convergence in the mean square sense and\nwith probability one (w.p.1) of the corrected drifts of all the nodes to the\nsame value (consensus). An estimate of the convergence rate of these algorithms\nis derived. For offset correction, a new algorithm is proposed containing a\ncompensation parameter coping with the influence of random delays and special\nterms taking care of the influence of both linearly increasing time and drift\ncorrection. It is proved that the corrected offsets of all the nodes converge\nin the mean square sense and w.p.1. An efficient offset correction algorithm\nbased on consensus on local compensation parameters is also proposed. It is\nshown that the overall time synchronization algorithm can also be implemented\nas a flooding algorithm with one reference node. It is proved that it is\npossible to achieve bounded error between local corrected clocks in the mean\nsquare sense and w.p.1. Simulation results provide an additional practical\ninsight into the algorithm properties and show its advantage over the existing\nmethods.\n", "title": "Distributed Time Synchronization for Networks with Random Delays and Measurement Noise" }
null
null
null
null
true
null
3182
null
Default
null
null
null
{ "abstract": " For developers concerned with a performance drop or improvement in their\nsoftware, a profiler allows a developer to quickly search and identify\nbottlenecks and leaks that consume much execution time. Non real-time profilers\nanalyze the history of already executed stack traces, while a real-time\nprofiler outputs the results concurrently with the execution of software, so\nusers can know the results instantaneously. However, a real-time profiler risks\nproviding overly large and complex outputs, which is difficult for developers\nto quickly analyze. In this paper, we visualize the performance data from a\nreal-time profiler. We visualize program execution as a three-dimensional (3D)\ncity, representing the structure of the program as artifacts in a city (i.e.,\nclasses and packages expressed as buildings and districts) and their program\nexecutions expressed as the fluctuating height of artifacts. Through two case\nstudies and using a prototype of our proposed visualization, we demonstrate how\nour visualization can easily identify performance issues such as a memory leak\nand compare performance changes between versions of a program. A demonstration\nof the interactive features of our prototype is available at\nthis https URL.\n", "title": "Using High-Rising Cities to Visualize Performance in Real-Time" }
null
null
null
null
true
null
3183
null
Default
null
null
null
{ "abstract": " We present a novel controller synthesis approach for discrete-time hybrid\npolynomial systems, a class of systems that can model a wide variety of\ninteractions between robots and their environment. The approach is rooted in\nrecently developed techniques that use occupation measures to formulate the\ncontroller synthesis problem as an infinite-dimensional linear program. The\nrelaxation of the linear program as a finite-dimensional semidefinite program\ncan be solved to generate a control law. The approach has several advantages\nincluding that the formulation is convex, that the formulation and the\nextracted controllers are simple, and that the computational complexity is\npolynomial in the state and control input dimensions. We illustrate our\napproach on some robotics examples.\n", "title": "Controller Synthesis for Discrete-time Hybrid Polynomial Systems via Occupation Measures" }
null
null
[ "Computer Science" ]
null
true
null
3184
null
Validated
null
null
null
{ "abstract": " Recently, research on accelerated stochastic gradient descent methods (e.g.,\nSVRG) has made exciting progress (e.g., linear convergence for strongly convex\nproblems). However, the best-known methods (e.g., Katyusha) requires at least\ntwo auxiliary variables and two momentum parameters. In this paper, we propose\na fast stochastic variance reduction gradient (FSVRG) method, in which we\ndesign a novel update rule with the Nesterov's momentum and incorporate the\ntechnique of growing epoch size. FSVRG has only one auxiliary variable and one\nmomentum weight, and thus it is much simpler and has much lower per-iteration\ncomplexity. We prove that FSVRG achieves linear convergence for strongly convex\nproblems and the optimal $\\mathcal{O}(1/T^2)$ convergence rate for non-strongly\nconvex problems, where $T$ is the number of outer-iterations. We also extend\nFSVRG to directly solve the problems with non-smooth component functions, such\nas SVM. Finally, we empirically study the performance of FSVRG for solving\nvarious machine learning problems such as logistic regression, ridge\nregression, Lasso and SVM. Our results show that FSVRG outperforms the\nstate-of-the-art stochastic methods, including Katyusha.\n", "title": "Fast Stochastic Variance Reduced Gradient Method with Momentum Acceleration for Machine Learning" }
null
null
null
null
true
null
3185
null
Default
null
null
null
{ "abstract": " Gradient reconstruction is a key process for the spatial accuracy and\nrobustness of finite volume method, especially in industrial aerodynamic\napplications in which grid quality affects reconstruction methods\nsignificantly. A novel gradient reconstruction method for cell-centered finite\nvolume scheme is introduced. This method is composed of two successive steps.\nFirst, a vertex-based weighted-least-squares procedure is implemented to\ncalculate vertex gradients, and then the cell-centered gradients are calculated\nby an arithmetic averaging procedure. By using these two procedures, extended\nstencils are implemented in the calculations, and the accuracy of gradient\nreconstruction is improved by the weighting procedure. In the given test cases,\nthe proposed method is showing improvement on both the accuracy and\nconvergence. Furthermore, the method could be extended to the calculation of\nviscous fluxes.\n", "title": "A vertex-weighted-Least-Squares gradient reconstruction" }
null
null
null
null
true
null
3186
null
Default
null
null
null
{ "abstract": " The Jordan decomposition theorem states that every function $f \\colon [0,1]\n\\to \\mathbb{R}$ of bounded variation can be written as the difference of two\nnon-decreasing functions. Combining this fact with a result of Lebesgue, every\nfunction of bounded variation is differentiable almost everywhere in the sense\nof Lebesgue measure. We analyze the strength of these theorems in the setting\nof reverse mathematics. Over $\\mathsf{RCA}_0$, a stronger version of Jordan's\nresult where all functions are continuous is equivalent to $\\mathsf{ACA}_0$,\nwhile the version stated is equivalent to $\\mathsf{WKL}_0$. The result that\nevery function on $[0,1]$ of bounded variation is almost everywhere\ndifferentiable is equivalent to $\\mathsf{WWKL}_0$. To state this equivalence in\na meaningful way, we develop a theory of Martin-Löf randomness over\n$\\mathsf{RCA}_0$.\n", "title": "The reverse mathematics of theorems of Jordan and Lebesgue" }
null
null
null
null
true
null
3187
null
Default
null
null
null
{ "abstract": " We compute the semiflat positive cone $K_0^{+SF}(A_\\theta^\\sigma)$ of the\n$K_0$-group of the irrational rotation orbifold $A_\\theta^\\sigma$ under the\nnoncommutative Fourier transform $\\sigma$ and show that it is determined by\nclasses of positive trace and the vanishing of two topological invariants. The\nsemiflat orbifold projections are 3-dimensional and come in three basic\ntopological genera: $(2,0,0)$, $(1,1,2)$, $(0,0,2)$. (A projection is called\nsemiflat when it has the form $h + \\sigma(h)$ where $h$ is a flip-invariant\nprojection such that $h\\sigma(h)=0$.) Among other things, we also show that\nevery number in $(0,1) \\cap (2\\mathbb Z + 2\\mathbb Z\\theta)$ is the trace of a\nsemiflat projection in $A_\\theta$. The noncommutative Fourier transform is the\norder 4 automorphism $\\sigma: V \\to U \\to V^{-1}$ (and the flip is $\\sigma^2$:\n$U \\to U^{-1},\\ V \\to V^{-1}$), where $U,V$ are the canonical unitary\ngenerators of the rotation algebra $A_\\theta$ satisfying $VU = e^{2\\pi i\\theta}\nUV$.\n", "title": "Semiflat Orbifold Projections" }
null
null
null
null
true
null
3188
null
Default
null
null
null
{ "abstract": " We have measured the magnetization of the organic compound BIP-BNO\n(3,5'-bis(N-tert-butylaminoxyl)-3',5-dibromobiphenyl) up to 76 T where the\nmagnetization is saturated. The S = 1/2 antiferromagnetic Heisenberg two-leg\nspin-ladder model accounts for the obtained experimental data regarding the\nmagnetization curve, which is clarified using the quantum Monte Carlo method.\nThe exchange constants on the rung and the side rail of the ladder are\nestimated to be J(rung)/kB = 65.7 K and J(leg)/kB = 14.1 K, respectively,\ndeeply in the strong coupling region: J(rung)/J(leg) > 1.\n", "title": "Magnetization process of the S = 1/2 two-leg organic spin-ladder compound BIP-BNO" }
null
null
[ "Physics" ]
null
true
null
3189
null
Validated
null
null
null
{ "abstract": " Class imbalance is a challenging issue in practical classification problems\nfor deep learning models as well as traditional models. Traditionally\nsuccessful countermeasures such as synthetic over-sampling have had limited\nsuccess with complex, structured data handled by deep learning models. In this\npaper, we propose Deep Over-sampling (DOS), a framework for extending the\nsynthetic over-sampling method to exploit the deep feature space acquired by a\nconvolutional neural network (CNN). Its key feature is an explicit, supervised\nrepresentation learning, for which the training data presents each raw input\nsample with a synthetic embedding target in the deep feature space, which is\nsampled from the linear subspace of in-class neighbors. We implement an\niterative process of training the CNN and updating the targets, which induces\nsmaller in-class variance among the embeddings, to increase the discriminative\npower of the deep representation. We present an empirical study using public\nbenchmarks, which shows that the DOS framework not only counteracts class\nimbalance better than the existing method, but also improves the performance of\nthe CNN in the standard, balanced settings.\n", "title": "Deep Over-sampling Framework for Classifying Imbalanced Data" }
null
null
null
null
true
null
3190
null
Default
null
null
null
{ "abstract": " In order to handle undesirable failures of a multicopter which occur in\neither the pre-flight process or the in-flight process, a failsafe mechanism\ndesign method based on supervisory control theory is proposed for the\nsemi-autonomous control mode. Failsafe mechanism is a control logic that guides\nwhat subsequent actions the multicopter should take, by taking account of\nreal-time information from guidance, attitude control, diagnosis, and other\nlow-level subsystems. In order to design a failsafe mechanism for multicopters,\nsafety issues of multicopters are introduced. Then, user requirements including\nfunctional requirements and safety requirements are textually described, where\nfunction requirements determine a general multicopter plant, and safety\nrequirements cover the failsafe measures dealing with the presented safety\nissues. In order to model the user requirements by discrete-event systems,\nseveral multicopter modes and events are defined. On this basis, the\nmulticopter plant and control specifications are modeled by automata. Then, a\nsupervisor is synthesized by monolithic supervisory control theory. In\naddition, we present three examples to demonstrate the potential blocking\nphenomenon due to inappropriate design of control specifications. Also, we\ndiscuss the meaning of correctness and the properties of the obtained\nsupervisor. This makes the failsafe mechanism convincingly correct and\neffective. Finally, based on the obtained supervisory controller generated by\nTCT software, an implementation method suitable for multicopters is presented,\nin which the supervisory controller is transformed into decision-making codes.\n", "title": "Failsafe Mechanism Design of Multicopters Based on Supervisory Control Theory" }
null
null
null
null
true
null
3191
null
Default
null
null
null
{ "abstract": " A method for inverse design of horizontal axis wind turbines (HAWTs) is\npresented in this paper. The direct solver for aerodynamic analysis solves the\nReynolds Averaged Navier Stokes (RANS) equations, where the effect of the\nturbine rotor is modeled as momentum sources using the actuator disk model\n(ADM); this approach is referred to as RANS/ADM. The inverse problem is posed\nas follows: for a given selection of airfoils, the objective is to find the\nblade geometry (described as blade twist and chord distributions) which\nrealizes the desired turbine aerodynamic performance at the design point; the\ndesired performance is prescribed as angle of attack ($\\alpha$) and axial\ninduction factor ($a$) distributions along the blade. An iterative approach is\nused. An initial estimate of blade geometry is used with the direct solver\n(RANS/ADM) to obtain $\\alpha$ and $a$. The differences between the calculated\nand desired values of $\\alpha$ and $a$ are computed and a new estimate for the\nblade geometry (chord and twist) is obtained via nonlinear least squares\nregression using the Trust-Region-Reflective (TRF) method. This procedure is\ncontinued until the difference between the calculated and the desired values is\nwithin acceptable tolerance. The method is demonstrated for conventional,\nsingle-rotor HAWTs and then extended to multi-rotor, specifically dual-rotor\nwind turbines. The TRF method is also compared with the multi-dimensional\nNewton iteration method and found to provide better convergence when\nconstraints are imposed in blade design, although faster convergence is\nobtained with the Newton method for unconstrained optimization.\n", "title": "Inverse Design of Single- and Multi-Rotor Horizontal Axis Wind Turbine Blades using Computational Fluid Dynamics" }
null
null
null
null
true
null
3192
null
Default
null
null
null
{ "abstract": " We report Chandra X-ray observations and optical weak-lensing measurements\nfrom Subaru/Suprime-Cam images of the double galaxy cluster Abell 2465\n(z=0.245). The X-ray brightness data are fit to a beta-model to obtain the\nradial gas density profiles of the northeast (NE) and southwest (SW)\nsub-components, which are seen to differ in structure. We determine core radii,\ncentral temperatures, the gas masses within $r_{500c}$, and the total masses\nfor the broader NE and sharper SW components assuming hydrostatic equilibrium.\nThe central entropy of the NE clump is about two times higher than the SW.\nAlong with its structural properties, this suggests that it has undergone\nmerging on its own. The weak-lensing analysis gives virial masses for each\nsubstructure, which compare well with earlier dynamical results. The derived\nouter mass contours of the SW sub-component from weak lensing are more\nirregular and extended than those of the NE. Although there is a weak\nenhancement and small offsets between X-ray gas and mass centers from weak\nlensing, the lack of large amounts of gas between the two sub-clusters\nindicates that Abell 2465 is in a pre-merger state. A dynamical model that is\nconsistent with the observed cluster data, based on the FLASH program and the\nradial infall model, is constructed, where the subclusters currently separated\nby ~1.2Mpc are approaching each other at ~2000km/s and will meet in ~0.4Gyr.\n", "title": "The Double Galaxy Cluster Abell 2465 III. X-ray and Weak-lensing Observations" }
null
null
null
null
true
null
3193
null
Default
null
null
null
{ "abstract": " We introduce superdensity operators as a tool for analyzing quantum\ninformation in spacetime. Superdensity operators encode spacetime correlation\nfunctions in an operator framework, and support a natural generalization of\nHilbert space techniques and Dirac's transformation theory as traditionally\napplied to standard density operators. Superdensity operators can be measured\nexperimentally, but accessing their full content requires novel procedures. We\ndemonstrate these statements on several examples. The superdensity formalism\nsuggests useful definitions of spacetime entropies and spacetime quantum\nchannels. For example, we show that the von Neumann entropy of a superdensity\noperator is related to a quantum generalization of the Kolmogorov-Sinai\nentropy, and compute this for a many-body system. We also suggest experimental\nprotocols for measuring spacetime entropies.\n", "title": "Superdensity Operators for Spacetime Quantum Mechanics" }
null
null
null
null
true
null
3194
null
Default
null
null
null
{ "abstract": " We study properties of magnetic nanoparticles adsorbed on the halloysite\nsurface. For that a distinct magnetic Hamiltonian with random distribution of\nspins on a cylindrical surface was solved by using a nonequilibrium Monte Carlo\nmethod. The parameters for our simulations: anisotropy constant, nanoparticle\nsize distribution, saturated magnetization and geometrical parameters of the\nhalloysite template were taken from recent experiments. We calculate the\nhysteresis loops and temperature dependence of the zero field cooling (ZFC)\nsusceptibility, which maximum determines the blocking temperature. It is shown\nthat the dipole-dipole interaction between nanoparticles moderately increases\nthe blocking temperature and weakly increases the coercive force. The obtained\nhysteresis loops (e.g., the value of the coercive force) for Ni nanoparticles\nare in reasonable agreement with the experimental data. We also discuss the\nsensitivity of the hysteresis loops and ZFC susceptibilities to the change of\nanisotropy and dipole-dipole interaction, as well as the 3d-shell occupation of\nthe metallic nanoparticles; in particular we predict larger coercive force for\nFe, than for Ni nanoparticles.\n", "title": "Monte Carlo study of magnetic nanoparticles adsorbed on halloysite $Al_2Si_2O_5(OH)_4$ nanotubes" }
null
null
null
null
true
null
3195
null
Default
null
null
null
{ "abstract": " We introduce an asymptotically unbiased estimator for the full\nhigh-dimensional parameter vector in linear regression models where the number\nof variables exceeds the number of available observations. The estimator is\naccompanied by a closed-form expression for the covariance matrix of the\nestimates that is free of tuning parameters. This enables the construction of\nconfidence intervals that are valid uniformly over the parameter vector.\nEstimates are obtained by using a scaled Moore-Penrose pseudoinverse as an\napproximate inverse of the singular empirical covariance matrix of the\nregressors. The approximation induces a bias, which is then corrected for using\nthe lasso. Regularization of the pseudoinverse is shown to yield narrower\nconfidence intervals under a suitable choice of the regularization parameter.\nThe methods are illustrated in Monte Carlo experiments and in an empirical\nexample where gross domestic product is explained by a large number of\nmacroeconomic and financial indicators.\n", "title": "Inference in high-dimensional linear regression models" }
null
null
null
null
true
null
3196
null
Default
null
null
null
{ "abstract": " Counterfactual learning is a natural scenario to improve web-based machine\ntranslation services by offline learning from feedback logged during user\ninteractions. In order to avoid the risk of showing inferior translations to\nusers, in such scenarios mostly exploration-free deterministic logging policies\nare in place. We analyze possible degeneracies of inverse and reweighted\npropensity scoring estimators, in stochastic and deterministic settings, and\nrelate them to recently proposed techniques for counterfactual learning under\ndeterministic logging.\n", "title": "Counterfactual Learning for Machine Translation: Degeneracies and Solutions" }
null
null
null
null
true
null
3197
null
Default
null
null
null
{ "abstract": " The 32-bit Mersenne Twister generator MT19937 is a widely used random number\ngenerator. To generate numbers with more than 32 bits in bit length, and\nparticularly when converting into 53-bit double-precision floating-point\nnumbers in $[0,1)$ in the IEEE 754 format, the typical implementation\nconcatenates two successive 32-bit integers and divides them by a power of $2$.\nIn this case, the 32-bit MT19937 is optimized in terms of its equidistribution\nproperties (the so-called dimension of equidistribution with $v$-bit accuracy)\nunder the assumption that one will mainly be using 32-bit output values, and\nhence the concatenation sometimes degrades the dimension of equidistribution\ncompared with the simple use of 32-bit outputs. In this paper, we analyze such\nphenomena by investigating hidden $\\mathbb{F}_2$-linear relations among the\nbits of high-dimensional outputs. Accordingly, we report that MT19937 with a\nspecific lag set fails several statistical tests, such as the overlapping\ncollision test, matrix rank test, and Hamming independence test.\n", "title": "Conversion of Mersenne Twister to double-precision floating-point numbers" }
null
null
null
null
true
null
3198
null
Default
null
null
null
{ "abstract": " Numerical QCD is often extremely resource demanding and it is not rare to run\nhundreds of simulations at the same time. Each of these can last for days or\neven months and it typically requires a job-script file as well as an input\nfile with the physical parameters for the application to be run. Moreover, some\nmonitoring operations (i.e. copying, moving, deleting or modifying files,\nresume crashed jobs, etc.) are often required to guarantee that the final\nstatistics is correctly accumulated. Proceeding manually in handling\nsimulations is probably the most error-prone way and it is deadly uncomfortable\nand inefficient! BaHaMAS was developed and successfully used in the last years\nas a tool to automatically monitor and administrate simulations.\n", "title": "BaHaMAS: A Bash Handler to Monitor and Administrate Simulations" }
null
null
null
null
true
null
3199
null
Default
null
null
null
{ "abstract": " The electronic structure and energetic stability of A$_2$BX$_6$ halide\ncompounds with the cubic and tetragonal variants of the perovskite-derived\nK$_2$PtCl$_6$ prototype structure are investigated computationally within the\nframeworks of density-functional-theory (DFT) and hybrid (HSE06) functionals.\nThe HSE06 calculations are undertaken for seven known A$_2$BX$_6$ compounds\nwith A = K, Rb and Cs, and B = Sn, Pd, Pt, Te, and X = I. Trends in band gaps\nand energetic stability are identified, which are explored further employing\nDFT calculations over a larger range of chemistries, characterized by A = K,\nRb, Cs, B = Si, Ge, Sn, Pb, Ni, Pd, Pt, Se and Te and X = Cl, Br, I. For the\nsystems investigated in this work, the band gap increases from iodide to\nbromide to chloride. Further, variations in the A site cation influences the\nband gap as well as the preferred degree of tetragonal distortion. Smaller A\nsite cations such as K and Rb favor tetragonal structural distortions,\nresulting in a slightly larger band gap. For variations in the B site in the\n(Ni, Pd, Pt) group and the (Se, Te) group, the band gap increases with\nincreasing cation size. However, no observed chemical trend with respect to\ncation size for band gap was found for the (Si, Sn, Ge, Pb) group. The findings\nin this work provide guidelines for the design of halide A$_2$BX$_6$ compounds\nfor potential photovoltaic applications.\n", "title": "Computational Study of Halide Perovskite-Derived A$_2$BX$_6$ Inorganic Compounds: Chemical Trends in Electronic Structure and Structural Stability" }
null
null
[ "Physics" ]
null
true
null
3200
null
Validated
null
null