Dataset columns (name: type):

text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1-5)
metadata: null
status: string (2 values)
event_timestamp: null
metrics: null
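The column listing above describes a multi-label text-classification record: each row pairs a title/abstract dict in inputs with an optional list of subject labels in annotation, an id, and a status that is either Default or Validated in the rows shown. Below is a minimal sketch of how such rows could be handled, assuming they are already available as plain Python dicts; the Record dataclass and validated_labels helper are illustrative names, not part of the dataset.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Minimal sketch of one record, assuming the column layout listed above.
@dataclass
class Record:
    inputs: dict                            # {"abstract": "...", "title": "..."}
    id: str                                 # e.g. "101"
    status: str                             # "Default" or "Validated"
    multi_label: bool = True
    annotation: Optional[List[str]] = None  # e.g. ["Computer Science", "Statistics"]
    # The remaining columns (text, prediction, prediction_agent, annotation_agent,
    # explanation, metadata, event_timestamp, metrics) are null in the rows shown.

def validated_labels(rows: List[dict]) -> List[Tuple[str, str, List[str]]]:
    """Return (id, title, labels) for rows whose annotation has been validated."""
    out = []
    for row in rows:
        rec = Record(inputs=row["inputs"], id=str(row["id"]),
                     status=row["status"], annotation=row.get("annotation"))
        if rec.status == "Validated" and rec.annotation:
            out.append((rec.id, rec.inputs["title"], rec.annotation))
    return out

# Example using the first row of the preview (abstract shortened here):
rows = [{
    "inputs": {"title": "Memory Aware Synapses: Learning what (not) to forget",
               "abstract": "Humans can learn in a continuous manner. ..."},
    "id": "101",
    "annotation": ["Computer Science", "Statistics"],
    "status": "Validated",
}]
print(validated_labels(rows))
```
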
text: null
{ "abstract": " Humans can learn in a continuous manner. Old rarely utilized knowledge can be\noverwritten by new incoming information while important, frequently used\nknowledge is prevented from being erased. In artificial learning systems,\nlifelong learning so far has focused mainly on accumulating knowledge over\ntasks and overcoming catastrophic forgetting. In this paper, we argue that,\ngiven the limited model capacity and the unlimited new information to be\nlearned, knowledge has to be preserved or erased selectively. Inspired by\nneuroplasticity, we propose a novel approach for lifelong learning, coined\nMemory Aware Synapses (MAS). It computes the importance of the parameters of a\nneural network in an unsupervised and online manner. Given a new sample which\nis fed to the network, MAS accumulates an importance measure for each parameter\nof the network, based on how sensitive the predicted output function is to a\nchange in this parameter. When learning a new task, changes to important\nparameters can then be penalized, effectively preventing important knowledge\nrelated to previous tasks from being overwritten. Further, we show an\ninteresting connection between a local version of our method and Hebb's\nrule,which is a model for the learning process in the brain. We test our method\non a sequence of object recognition tasks and on the challenging problem of\nlearning an embedding for predicting $<$subject, predicate, object$>$ triplets.\nWe show state-of-the-art performance and, for the first time, the ability to\nadapt the importance of the parameters based on unlabeled data towards what the\nnetwork needs (not) to forget, which may vary depending on test conditions.\n", "title": "Memory Aware Synapses: Learning what (not) to forget" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science", "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 101 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " In this paper, we study the generalized polynomial chaos (gPC) based\nstochastic Galerkin method for the linear semiconductor Boltzmann equation\nunder diffusive scaling and with random inputs from an anisotropic collision\nkernel and the random initial condition. While the numerical scheme and the\nproof of uniform-in-Knudsen-number regularity of the distribution function in\nthe random space has been introduced in [Jin-Liu-16'], the main goal of this\npaper is to first obtain a sharper estimate on the regularity of the\nsolution-an exponential decay towards its local equilibrium, which then lead to\nthe uniform spectral convergence of the stochastic Galerkin method for the\nproblem under study.\n", "title": "Uniform Spectral Convergence of the Stochastic Galerkin Method for the Linear Semiconductor Boltzmann Equation with Random Inputs and Diffusive Scalings" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 102 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Over the last decade, wireless networks have experienced an impressive growth\nand now play a main role in many telecommunications systems. As a consequence,\nscarce radio resources, such as frequencies, became congested and the need for\neffective and efficient assignment methods arose. In this work, we present a\nGenetic Algorithm for solving large instances of the Power, Frequency and\nModulation Assignment Problem, arising in the design of wireless networks. To\nour best knowledge, this is the first Genetic Algorithm that is proposed for\nsuch problem. Compared to previous works, our approach allows a wider\nexploration of the set of power solutions, while eliminating sources of\nnumerical problems. The performance of the algorithm is assessed by tests over\na set of large realistic instances of a Fixed WiMAX Network.\n", "title": "On Improving the Capacity of Solving Large-scale Wireless Network Design Problems by Genetic Algorithms" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science", "Mathematics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 103 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " We report on a combined study of the de Haas-van Alphen effect and angle\nresolved photoemission spectroscopy on single crystals of the metallic\ndelafossite PdRhO$_2$ rounded off by \\textit{ab initio} band structure\ncalculations. A high sensitivity torque magnetometry setup with SQUID readout\nand synchrotron-based photoemission with a light spot size of\n$~50\\,\\mu\\mathrm{m}$ enabled high resolution data to be obtained from samples\nas small as $150\\times100\\times20\\,(\\mu\\mathrm{m})^3$. The Fermi surface shape\nis nearly cylindrical with a rounded hexagonal cross section enclosing a\nLuttinger volume of 1.00(1) electrons per formula unit.\n", "title": "Quasi two-dimensional Fermi surface topography of the delafossite PdRhO$_2$" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 104 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Atar, Chowdhary and Dupuis have recently exhibited a variational formula for\nexponential integrals of bounded measurable functions in terms of Rényi\ndivergences. We develop a variational characterization of the Rényi\ndivergences between two probability distributions on a measurable sace in terms\nof relative entropies. When combined with the elementary variational formula\nfor exponential integrals of bounded measurable functions in terms of relative\nentropy, this yields the variational formula of Atar, Chowdhary and Dupuis as a\ncorollary. We also develop an analogous variational characterization of the\nRényi divergence rates between two stationary finite state Markov chains in\nterms of relative entropy rates. When combined with Varadhan's variational\ncharacterization of the spectral radius of square matrices with nonnegative\nentries in terms of relative entropy, this yields an analog of the variational\nformula of Atar, Chowdary and Dupuis in the framework of finite state Markov\nchains.\n", "title": "A Variational Characterization of Rényi Divergences" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 105 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Bilayer van der Waals (vdW) heterostructures such as MoS2/WS2 and MoSe2/WSe2\nhave attracted much attention recently, particularly because of their type II\nband alignments and the formation of interlayer exciton as the lowest-energy\nexcitonic state. In this work, we calculate the electronic and optical\nproperties of such heterostructures with the first-principles GW+Bethe-Salpeter\nEquation (BSE) method and reveal the important role of interlayer coupling in\ndeciding the excited-state properties, including the band alignment and\nexcitonic properties. Our calculation shows that due to the interlayer\ncoupling, the low energy excitons can be widely tunable by a vertical gate\nfield. In particular, the dipole oscillator strength and radiative lifetime of\nthe lowest energy exciton in these bilayer heterostructures is varied by over\nan order of magnitude within a practical external gate field. We also build a\nsimple model that captures the essential physics behind this tunability and\nallows the extension of the ab initio results to a large range of electric\nfields. Our work clarifies the physical picture of interlayer excitons in\nbilayer vdW heterostructures and predicts a wide range of gate-tunable\nexcited-state properties of 2D optoelectronic devices.\n", "title": "Interlayer coupling and gate-tunable excitons in transition metal dichalcogenide heterostructures" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 106 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We construct the algebraic cobordism theory of bundles and divisors on\nvarieties. It has a simple basis (over Q) from projective spaces and its rank\nis equal to the number of Chern numbers. An application of this algebraic\ncobordism theory is the enumeration of singular subvarieties with give tangent\nconditions with a fixed smooth divisor, where the subvariety is the zero locus\nof a section of a vector bundle. We prove that the generating series of numbers\nof such subvarieties gives a homomorphism from the algebraic cobordism group to\nthe power series ring. This implies that the enumeration of singular\nsubvarieties with tangency conditions is governed by universal polynomials of\nChern numbers, when the vector bundle is sufficiently ample. This result\ncombines and generalizes the Caporaso-Harris recursive formula, Gottsche's\nconjecture, classical De Jonquiere's Formula and node polynomials from tropical\ngeometry.\n", "title": "Enumeration of singular varieties with tangency conditions" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 107 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " People with profound motor deficits could perform useful physical tasks for\nthemselves by controlling robots that are comparable to the human body. Whether\nthis is possible without invasive interfaces has been unclear, due to the\nrobot's complexity and the person's limitations. We developed a novel,\naugmented reality interface and conducted two studies to evaluate the extent to\nwhich it enabled people with profound motor deficits to control robotic body\nsurrogates. 15 novice users achieved meaningful improvements on a clinical\nmanipulation assessment when controlling the robot in Atlanta from locations\nacross the United States. Also, one expert user performed 59 distinct tasks in\nhis own home over seven days, including self-care tasks such as feeding. Our\nresults demonstrate that people with profound motor deficits can effectively\ncontrol robotic body surrogates without invasive interfaces.\n", "title": "In-home and remote use of robotic body surrogates by people with profound motor deficits" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 108 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Object detection in wide area motion imagery (WAMI) has drawn the attention\nof the computer vision research community for a number of years. WAMI proposes\na number of unique challenges including extremely small object sizes, both\nsparse and densely-packed objects, and extremely large search spaces (large\nvideo frames). Nearly all state-of-the-art methods in WAMI object detection\nreport that appearance-based classifiers fail in this challenging data and\ninstead rely almost entirely on motion information in the form of background\nsubtraction or frame-differencing. In this work, we experimentally verify the\nfailure of appearance-based classifiers in WAMI, such as Faster R-CNN and a\nheatmap-based fully convolutional neural network (CNN), and propose a novel\ntwo-stage spatio-temporal CNN which effectively and efficiently combines both\nappearance and motion information to significantly surpass the state-of-the-art\nin WAMI object detection. To reduce the large search space, the first stage\n(ClusterNet) takes in a set of extremely large video frames, combines the\nmotion and appearance information within the convolutional architecture, and\nproposes regions of objects of interest (ROOBI). These ROOBI can contain from\none to clusters of several hundred objects due to the large video frame size\nand varying object density in WAMI. The second stage (FoveaNet) then estimates\nthe centroid location of all objects in that given ROOBI simultaneously via\nheatmap estimation. The proposed method exceeds state-of-the-art results on the\nWPAFB 2009 dataset by 5-16% for moving objects and nearly 50% for stopped\nobjects, as well as being the first proposed method in wide area motion imagery\nto detect completely stationary objects.\n", "title": "ClusterNet: Detecting Small Objects in Large Scenes by Exploiting Spatio-Temporal Information" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 109 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Monte Carlo Tree Search (MCTS), most famously used in game-play artificial\nintelligence (e.g., the game of Go), is a well-known strategy for constructing\napproximate solutions to sequential decision problems. Its primary innovation\nis the use of a heuristic, known as a default policy, to obtain Monte Carlo\nestimates of downstream values for states in a decision tree. This information\nis used to iteratively expand the tree towards regions of states and actions\nthat an optimal policy might visit. However, to guarantee convergence to the\noptimal action, MCTS requires the entire tree to be expanded asymptotically. In\nthis paper, we propose a new technique called Primal-Dual MCTS that utilizes\nsampled information relaxation upper bounds on potential actions, creating the\npossibility of \"ignoring\" parts of the tree that stem from highly suboptimal\nchoices. This allows us to prove that despite converging to a partial decision\ntree in the limit, the recommended action from Primal-Dual MCTS is optimal. The\nnew approach shows significant promise when used to optimize the behavior of a\nsingle driver navigating a graph while operating on a ride-sharing platform.\nNumerical experiments on a real dataset of 7,000 trips in New Jersey suggest\nthat Primal-Dual MCTS improves upon standard MCTS by producing deeper decision\ntrees and exhibits a reduced sensitivity to the size of the action space.\n", "title": "Monte Carlo Tree Search with Sampled Information Relaxation Dual Bounds" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 110 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We study the Fermi-edge singularity, describing the response of a degenerate\nelectron system to optical excitation, in the framework of the functional\nrenormalization group (fRG). Results for the (interband) particle-hole\nsusceptibility from various implementations of fRG (one- and two-\nparticle-irreducible, multi-channel Hubbard-Stratonovich, flowing\nsusceptibility) are compared to the summation of all leading logarithmic (log)\ndiagrams, achieved by a (first-order) solution of the parquet equations. For\nthe (zero-dimensional) special case of the X-ray-edge singularity, we show that\nthe leading log formula can be analytically reproduced in a consistent way from\na truncated, one-loop fRG flow. However, reviewing the underlying diagrammatic\nstructure, we show that this derivation relies on fortuitous partial\ncancellations special to the form of and accuracy applied to the X-ray-edge\nsingularity and does not generalize.\n", "title": "Fermi-edge singularity and the functional renormalization group" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 111 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Retrosynthesis is a technique to plan the chemical synthesis of organic\nmolecules, for example drugs, agro- and fine chemicals. In retrosynthesis, a\nsearch tree is built by analysing molecules recursively and dissecting them\ninto simpler molecular building blocks until one obtains a set of known\nbuilding blocks. The search space is intractably large, and it is difficult to\ndetermine the value of retrosynthetic positions. Here, we propose to model\nretrosynthesis as a Markov Decision Process. In combination with a Deep Neural\nNetwork policy learned from essentially the complete published knowledge of\nchemistry, Monte Carlo Tree Search (MCTS) can be used to evaluate positions. In\nexploratory studies, we demonstrate that MCTS with neural network policies\noutperforms the traditionally used best-first search with hand-coded\nheuristics.\n", "title": "Towards \"AlphaChem\": Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 112 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " The class of stochastically self-similar sets contains many famous examples\nof random sets, e.g. Mandelbrot percolation and general fractal percolation.\nUnder the assumption of the uniform open set condition and some mild\nassumptions on the iterated function systems used, we show that the\nquasi-Assouad dimension of self-similar random recursive sets is almost surely\nequal to the almost sure Hausdorff dimension of the set. We further comment on\nrandom homogeneous and $V$-variable sets and the removal of overlap conditions.\n", "title": "The quasi-Assouad dimension for stochastically self-similar sets" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 113 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We report on the influence of spin-orbit coupling (SOC) in the Fe-based\nsuperconductors (FeSCs) via application of circularly-polarized spin and\nangle-resolved photoemission spectroscopy. We combine this technique in\nrepresentative members of both the Fe-pnictides and Fe-chalcogenides with ab\ninitio density functional theory and tight-binding calculations to establish an\nubiquitous modification of the electronic structure in these materials imbued\nby SOC. The influence of SOC is found to be concentrated on the hole pockets\nwhere the superconducting gap is generally found to be largest. This result\ncontests descriptions of superconductivity in these materials in terms of pure\nspin-singlet eigenstates, raising questions regarding the possible pairing\nmechanisms and role of SOC therein.\n", "title": "Influence of Spin Orbit Coupling in the Iron-Based Superconductors" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 114 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " In this work we examine how the updates addressing Meltdown and Spectre\nvulnerabilities impact the performance of HPC applications. To study this we\nuse the application kernel module of XDMoD to test the performance before and\nafter the application of the vulnerability patches. We tested the performance\ndifference for multiple application and benchmarks including: NWChem, NAMD,\nHPCC, IOR, MDTest and IMB. The results show that although some specific\nfunctions can have performance decreased by as much as 74%, the majority of\nindividual metrics indicates little to no decrease in performance. The\nreal-world applications show a 2-3% decrease in performance for single node\njobs and a 5-11% decrease for parallel multi node jobs.\n", "title": "Effect of Meltdown and Spectre Patches on the Performance of HPC Applications" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 115 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " Gene regulatory networks are powerful abstractions of biological systems.\nSince the advent of high-throughput measurement technologies in biology in the\nlate 90s, reconstructing the structure of such networks has been a central\ncomputational problem in systems biology. While the problem is certainly not\nsolved in its entirety, considerable progress has been made in the last two\ndecades, with mature tools now available. This chapter aims to provide an\nintroduction to the basic concepts underpinning network inference tools,\nattempting a categorisation which highlights commonalities and relative\nstrengths. While the chapter is meant to be self-contained, the material\npresented should provide a useful background to the later, more specialised\nchapters of this book.\n", "title": "Gene regulatory network inference: an introductory survey" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 116 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Glaucoma is the second leading cause of blindness all over the world, with\napproximately 60 million cases reported worldwide in 2010. If undiagnosed in\ntime, glaucoma causes irreversible damage to the optic nerve leading to\nblindness. The optic nerve head examination, which involves measurement of\ncup-to-disc ratio, is considered one of the most valuable methods of structural\ndiagnosis of the disease. Estimation of cup-to-disc ratio requires segmentation\nof optic disc and optic cup on eye fundus images and can be performed by modern\ncomputer vision algorithms. This work presents universal approach for automatic\noptic disc and cup segmentation, which is based on deep learning, namely,\nmodification of U-Net convolutional neural network. Our experiments include\ncomparison with the best known methods on publicly available databases\nDRIONS-DB, RIM-ONE v.3, DRISHTI-GS. For both optic disc and cup segmentation,\nour method achieves quality comparable to current state-of-the-art methods,\noutperforming them in terms of the prediction time.\n", "title": "Optic Disc and Cup Segmentation Methods for Glaucoma Detection with Modification of U-Net Convolutional Neural Network" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 117 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " The life of the modern world essentially depends on the work of the large\nartificial homogeneous networks, such as wired and wireless communication\nsystems, networks of roads and pipelines. The support of their effective\ncontinuous functioning requires automatic screening and permanent optimization\nwith processing of the huge amount of data by high-performance distributed\nsystems. We propose new meta-algorithm of large homogeneous network analysis,\nits decomposition into alternative sets of loosely connected subnets, and\nparallel optimization of the most independent elements. This algorithm is based\non a network-specific correlation function, Simulated Annealing technique, and\nis adapted to work in the computer cluster. On the example of large wireless\nnetwork, we show that proposed algorithm essentially increases speed of\nparallel optimization. The elaborated general approach can be used for analysis\nand optimization of the wide range of networks, including such specific types\nas artificial neural networks or organized in networks physiological systems of\nliving organisms.\n", "title": "Automatic Analysis, Decomposition and Parallel Optimization of Large Homogeneous Networks" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 118 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " This paper considers the actor-critic contextual bandit for the mobile health\n(mHealth) intervention. The state-of-the-art decision-making methods in mHealth\ngenerally assume that the noise in the dynamic system follows the Gaussian\ndistribution. Those methods use the least-square-based algorithm to estimate\nthe expected reward, which is prone to the existence of outliers. To deal with\nthe issue of outliers, we propose a novel robust actor-critic contextual bandit\nmethod for the mHealth intervention. In the critic updating, the\ncapped-$\\ell_{2}$ norm is used to measure the approximation error, which\nprevents outliers from dominating our objective. A set of weights could be\nachieved from the critic updating. Considering them gives a weighted objective\nfor the actor updating. It provides the badly noised sample in the critic\nupdating with zero weights for the actor updating. As a result, the robustness\nof both actor-critic updating is enhanced. There is a key parameter in the\ncapped-$\\ell_{2}$ norm. We provide a reliable method to properly set it by\nmaking use of one of the most fundamental definitions of outliers in\nstatistics. Extensive experiment results demonstrate that our method can\nachieve almost identical results compared with the state-of-the-art methods on\nthe dataset without outliers and dramatically outperform them on the datasets\nnoised by outliers.\n", "title": "Robust Contextual Bandit via the Capped-$\\ell_{2}$ norm" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science", "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 119 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " In 1933 Kolmogorov constructed a general theory that defines the modern\nconcept of conditional expectation. In 1955 Renyi fomulated a new axiomatic\ntheory for probability motivated by the need to include unbounded measures. We\nintroduce a general concept of conditional expectation in Renyi spaces. In this\ntheory improper priors are allowed, and the resulting posterior can also be\nimproper.\nIn 1965 Lindley published his classic text on Bayesian statistics using the\ntheory of Renyi, but retracted this idea in 1973 due to the appearance of\nmarginalization paradoxes presented by Dawid, Stone, and Zidek. The paradoxes\nare investigated, and the seemingly conflicting results are explained. The\ntheory of Renyi can hence be used as an axiomatic basis for statistics that\nallows use of unbounded priors.\nKeywords: Haldane's prior; Poisson intensity; Marginalization paradox;\nMeasure theory; conditional probability space; axioms for statistics;\nconditioning on a sigma field; improper prior\n", "title": "Improper posteriors are not improper" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 120 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Recently a new fault tolerant and simple mechanism was designed for solving\ncommit consensus problem. It is based on replicated validation of messages sent\nbetween transaction participants and a special dispatcher validator manager\nnode. This paper presents a correctness, safety proofs and performance analysis\nof this algorithm.\n", "title": "Fault Tolerant Consensus Agreement Algorithm" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 121 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " This work presents a new method to quantify connectivity in transportation\nnetworks. Inspired by the field of topological data analysis, we propose a\nnovel approach to explore the robustness of road network connectivity in the\npresence of congestion on the roadway. The robustness of the pattern is\nsummarized in a congestion barcode, which can be constructed directly from\ntraffic datasets commonly used for navigation. As an initial demonstration, we\nillustrate the main technique on a publicly available traffic dataset in a\nneighborhood in New York City.\n", "title": "Congestion Barcodes: Exploring the Topology of Urban Congestion Using Persistent Homology" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 122 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " The first transiting planetesimal orbiting a white dwarf was recently\ndetected in K2 data of WD1145+017 and has been followed up intensively. The\nmultiple, long, and variable transits suggest the transiting objects are dust\nclouds, probably produced by a disintegrating asteroid. In addition, the system\ncontains circumstellar gas, evident by broad absorption lines, mostly in the\nu'-band, and a dust disc, indicated by an infrared excess. Here we present the\nfirst detection of a change in colour of WD1145+017 during transits, using\nsimultaneous multi-band fast-photometry ULTRACAM measurements over the\nu'g'r'i'-bands. The observations reveal what appears to be 'bluing' during\ntransits; transits are deeper in the redder bands, with a u'-r' colour\ndifference of up to ~-0.05 mag. We explore various possible explanations for\nthe bluing. 'Spectral' photometry obtained by integrating over bandpasses in\nthe spectroscopic data in- and out-of-transit, compared to the photometric\ndata, shows that the observed colour difference is most likely the result of\nreduced circumstellar absorption in the spectrum during transits. This\nindicates that the transiting objects and the gas share the same line-of-sight,\nand that the gas covers the white dwarf only partially, as would be expected if\nthe gas, the transiting debris, and the dust emitting the infrared excess, are\npart of the same general disc structure (although possibly at different radii).\nIn addition, we present the results of a week-long monitoring campaign of the\nsystem.\n", "title": "Once in a blue moon: detection of 'bluing' during debris transits in the white dwarf WD1145+017" }
prediction: null | prediction_agent: null | annotation: [ "Physics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 123 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " In this review article, we discuss recent studies on drops and bubbles in\nHele-Shaw cells, focusing on how scaling laws exhibit crossovers from the\nthree-dimensional counterparts and focusing on topics in which viscosity plays\nan important role. By virtue of progresses in analytical theory and high-speed\nimaging, dynamics of drops and bubbles have actively been studied with the aid\nof scaling arguments. However, compared with three dimensional problems,\nstudies on the corresponding problems in Hele-Shaw cells are still limited.\nThis review demonstrates that the effect of confinement in the Hele-Shaw cell\nintroduces new physics allowing different scaling regimes to appear. For this\npurpose, we discuss various examples that are potentially important for\nindustrial applications handling drops and bubbles in confined spaces by\nshowing agreement between experiments and scaling theories. As a result, this\nreview provides a collection of problems in hydrodynamics that may be\nanalytically solved or that may be worth studying numerically in the near\nfuture.\n", "title": "Viscous dynamics of drops and bubbles in Hele-Shaw cells: drainage, drag friction, coalescence, and bursting" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 124 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Stacking-based deep neural network (S-DNN), in general, denotes a deep neural\nnetwork (DNN) resemblance in terms of its very deep, feedforward network\narchitecture. The typical S-DNN aggregates a variable number of individually\nlearnable modules in series to assemble a DNN-alike alternative to the targeted\nobject recognition tasks. This work likewise devises an S-DNN instantiation,\ndubbed deep analytic network (DAN), on top of the spectral histogram (SH)\nfeatures. The DAN learning principle relies on ridge regression, and some key\nDNN constituents, specifically, rectified linear unit, fine-tuning, and\nnormalization. The DAN aptitude is scrutinized on three repositories of varying\ndomains, including FERET (faces), MNIST (handwritten digits), and CIFAR10\n(natural objects). The empirical results unveil that DAN escalates the SH\nbaseline performance over a sufficiently deep layer.\n", "title": "Stacking-based Deep Neural Network: Deep Analytic Network on Convolutional Spectral Histogram Features" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 125 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " In spite of Anderson's theorem, disorder is known to affect superconductivity\nin conventional s-wave superconductors. In most superconductors, the degree of\ndisorder is fixed during sample preparation. Here we report measurements of the\nsuperconducting properties of the two-dimensional gas that forms at the\ninterface between LaAlO$_3$ (LAO) and SrTiO$_3$ (STO) in the (111) crystal\norientation, a system that permits \\emph{in situ} tuning of carrier density and\ndisorder by means of a back gate voltage $V_g$. Like the (001) oriented LAO/STO\ninterface, superconductivity at the (111) LAO/STO interface can be tuned by\n$V_g$. In contrast to the (001) interface, superconductivity in these (111)\nsamples is anisotropic, being different along different interface crystal\ndirections, consistent with the strong anisotropy already observed other\ntransport properties at the (111) LAO/STO interface. In addition, we find that\nthe (111) interface samples \"remember\" the backgate voltage $V_F$ at which they\nare cooled at temperatures near the superconducting transition temperature\n$T_c$, even if $V_g$ is subsequently changed at lower temperatures. The low\nenergy scale and other characteristics of this memory effect ($<1$ K)\ndistinguish it from charge-trapping effects previously observed in (001)\ninterface samples.\n", "title": "Superconductivity and Frozen Electronic States at the (111) LaAlO$_3$/SrTiO$_3$ Interface" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 126 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We investigate beam loading and emittance preservation for a high-charge\nelectron beam being accelerated in quasi-linear plasma wakefields driven by a\nshort proton beam. The structure of the studied wakefields are similar to those\nof a long, modulated proton beam, such as the AWAKE proton driver. We show that\nby properly choosing the electron beam parameters and exploiting two well known\neffects, beam loading of the wakefield and full blow out of plasma electrons by\nthe accelerated beam, the electron beam can gain large amounts of energy with a\nnarrow final energy spread (%-level) and without significant emittance growth.\n", "title": "Emittance preservation of an electron beam in a loaded quasi-linear plasma wakefield" }
prediction: null | prediction_agent: null | annotation: [ "Physics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 127 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " In this paper, we propose a practical receiver for multicarrier signals\nsubjected to a strong memoryless nonlinearity. The receiver design is based on\na generalized approximate message passing (GAMP) framework, and this allows\nreal-time algorithm implementation in software or hardware with moderate\ncomplexity. We demonstrate that the proposed receiver can provide more than a\n2dB gain compared with an ideal uncoded linear OFDM transmission at a BER range\n$10^{-4}\\div10^{-6}$ in the AWGN channel, when the OFDM signal is subjected to\nclipping nonlinearity and the crest-factor of the clipped waveform is only\n1.9dB. Simulation results also demonstrate that the proposed receiver provides\nsignificant performance gain in frequency-selective multipath channels\n", "title": "Detection of Nonlinearly Distorted OFDM Signals via Generalized Approximate Message Passing" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 128 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " According to astrophysical observations value of recession velocity in a\ncertain point is proportional to a distance to this point. The proportionality\ncoefficient is the Hubble constant measured with 5% accuracy. It is used in\nmany cosmological theories describing dark energy, dark matter, baryons, and\ntheir relation with the cosmological constant introduced by Einstein.\nIn the present work we have determined a limit value of the global Hubble\nconstant (in a big distance from a point of observations) theoretically without\nusing any empirical constants on the base of our own fractal model used for the\ndescription a relation between distance to an observed galaxy and coordinate of\nits center. The distance has been defined as a nonlinear fractal measure with\nscale of measurement corresponding to a deviation of the measure from its fixed\nvalue (zero-gravity radius). We have suggested a model of specific anisotropic\nfractal for simulation a radial Universe expansion. Our theoretical results\nhave shown existence of an inverse proportionality between accuracy of\ndetermination the Hubble constant and accuracy of calculation a coordinates of\ngalaxies leading to ambiguity results obtained at cosmological observations.\n", "title": "Nonlinear fractal meaning of the Hubble constant" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 129 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Dynamic languages often employ reflection primitives to turn dynamically\ngenerated text into executable code at run-time. These features make standard\nstatic analysis extremely hard if not impossible because its essential data\nstructures, i.e., the control-flow graph and the system of recursive equations\nassociated with the program to analyse, are themselves dynamically mutating\nobjects. We introduce SEA, an abstract interpreter for automatic sound string\nexecutability analysis of dynamic languages employing bounded (i.e, finitely\nnested) reflection and dynamic code generation. Strings are statically\napproximated in an abstract domain of finite state automata with basic\noperations implemented as symbolic transducers. SEA combines standard program\nanalysis together with string executability analysis. The analysis of a call to\nreflection determines a call to the same abstract interpreter over a code which\nis synthesised directly from the result of the static string executability\nanalysis at that program point. The use of regular languages for approximating\ndynamically generated code structures allows SEA to soundly approximate safety\nproperties of self modifying programs yet maintaining efficiency. Soundness\nhere means that the semantics of the code synthesised by the analyser to\nresolve reflection over-approximates the semantics of the code dynamically\nbuilt at run-rime by the program at that point.\n", "title": "SEA: String Executability Analysis by Abstract Interpretation" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 130 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Reductions for transition systems have been recently introduced as a uniform\nand principled method for comparing the expressiveness of system models with\nrespect to a range of properties, especially bisimulations. In this paper we\nstudy the expressiveness (w.r.t. bisimulations) of models for quantitative\ncomputations such as weighted labelled transition systems (WLTSs), uniform\nlabelled transition systems (ULTraSs), and state-to-function transition systems\n(FuTSs). We prove that there is a trade-off between labels and weights: at one\nextreme lays the class of (unlabelled) weighted transition systems where\ninformation is presented using weights only; at the other lays the class of\nlabelled transition systems (LTSs) where information is shifted on labels.\nThese categories of systems cannot be further reduced in any significant way\nand subsume all the aforementioned models.\n", "title": "On the trade-off between labels and weights in quantitative bisimulation" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 131 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Poynting's theorem is used to obtain an expression for the turbulent\npower-spectral density as function of frequency and wavenumber in low-frequency\nmagnetic turbulence. No reference is made to Elsasser variables as is usually\ndone in magnetohydrodynamic turbulence mixing mechanical and electromagnetic\nturbulence. We rather stay with an implicit form of the mechanical part of\nturbulence as suggested by electromagnetic theory in arbitrary media. All of\nmechanics and flows is included into a turbulent response function which by\nappropriate observations can be determined from knowledge of the turbulent\nfluctuation spectra. This approach is not guided by the wish of developing a\ncomplete theory of turbulence. It aims on the identification of the response\nfunction from observations as input into a theory which afterwards attempts its\ninterpretation. Combination of both the magnetic and electric power spectral\ndensities leads to a representation of the turbulent response function, i.e.\nthe turbulent conductivity spectrum $\\sigma_{\\omega k}$ as function of\nfrequency $\\omega$ and wavenumber $k$. {It is given as the ratio of magnetic to\nelectric power spectral densities in frequency space. This knowledge allows for\nformally writing down a turbulent dispersion relation. Power law inertial range\nspectra result in a power law turbulent conductivity spectrum. These can be\ncompared with observations in the solar wind. Keywords: MHD turbulence,\nturbulent dispersion relation, turbulent response function, solar wind\nturbulence\n", "title": "Poynting's theorem in magnetic turbulence" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 132 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Let M be a compact Riemannian manifold and let $\\mu$,d be the associated\nmeasure and distance on M. Robert McCann obtained, generalizing results for the\nEuclidean case by Yann Brenier, the polar factorization of Borel maps S : M ->\nM pushing forward $\\mu$ to a measure $\\nu$: each S factors uniquely a.e. into\nthe composition S = T \\circ U, where U : M -> M is volume preserving and T : M\n-> M is the optimal map transporting $\\mu$ to $\\nu$ with respect to the cost\nfunction d^2/2.\nIn this article we study the polar factorization of conformal and projective\nmaps of the sphere S^n. For conformal maps, which may be identified with\nelements of the identity component of O(1,n+1), we prove that the polar\nfactorization in the sense of optimal mass transport coincides with the\nalgebraic polar factorization (Cartan decomposition) of this Lie group. For the\nprojective case, where the group GL_+(n+1) is involved, we find necessary and\nsufficient conditions for these two factorizations to agree.\n", "title": "Polar factorization of conformal and projective maps of the sphere in the sense of optimal mass transport" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 133 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " We examine the representation of numbers as the sum of two squares in\n$\\mathbb{Z}_n$ for a general positive integer $n$. Using this information we\nmake some comments about the density of positive integers which can be\nrepresented as the sum of two squares and powers of $2$ in $\\mathbb{N}$.\n", "title": "Representing numbers as the sum of squares and powers in the ring $\\mathbb{Z}_n$" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 134 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " Regression for spatially dependent outcomes poses many challenges, for\ninference and for computation. Non-spatial models and traditional spatial\nmixed-effects models each have their advantages and disadvantages, making it\ndifficult for practitioners to determine how to carry out a spatial regression\nanalysis. We discuss the data-generating mechanisms implicitly assumed by\nvarious popular spatial regression models, and discuss the implications of\nthese assumptions. We propose Bayesian spatial filtering as an approximate\nmiddle way between non-spatial models and traditional spatial mixed models. We\nshow by simulation that our Bayesian spatial filtering model has several\ndesirable properties and hence may be a useful addition to a spatial\nstatistician's toolkit.\n", "title": "Spatial Regression and the Bayesian Filter" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 135 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " One of the most important parameters in ionospheric plasma research also\nhaving a wide practical application in wireless satellite telecommunications is\nthe total electron content (TEC) representing the columnal electron number\ndensity. The F region with high electron density provides the biggest\ncontribution to TEC while the relatively weakly ionized plasma of the D region\n(60 km - 90 km above Earths surface) is often considered as a negligible cause\nof satellite signal disturbances. However, sudden intensive ionization\nprocesses like those induced by solar X ray flares can cause relative increases\nof electron density that are significantly larger in the D-region than in\nregions at higher altitudes. Therefore, one cannot exclude a priori the D\nregion from investigations of ionospheric influences on propagation of\nelectromagnetic signals emitted by satellites. We discuss here this problem\nwhich has not been sufficiently treated in literature so far. The obtained\nresults are based on data collected from the D region monitoring by very low\nfrequency radio waves and on vertical TEC calculations from the Global\nNavigation Satellite System (GNSS) signal analyses, and they show noticeable\nvariations in the D region electron content (TECD) during activity of a solar X\nray flare (it rises by a factor of 136 in the considered case) when TECD\ncontribution to TEC can reach several percent and which cannot be neglected in\npractical applications like global positioning procedures by satellites.\n", "title": "Behaviour of electron content in the ionospheric D-region during solar X-ray flares" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 136 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " For the particles undergoing the anomalous diffusion with different waiting\ntime distributions for different internal states, we derive the Fokker-Planck\nand Feymann-Kac equations, respectively, describing positions of the particles\nand functional distributions of the trajectories of particles; in particular,\nthe equations governing the functional distribution of internal states are also\nobtained. The dynamics of the stochastic processes are analyzed and the\napplications, calculating the distribution of the first passage time and the\ndistribution of the fraction of the occupation time, of the equations are\ngiven.\n", "title": "Fractional compound Poisson processes with multiple internal states" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics", "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 137 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " Stabilizing the magnetic signal of single adatoms is a crucial step towards\ntheir successful usage in widespread technological applications such as\nhigh-density magnetic data storage devices. The quantum mechanical nature of\nthese tiny objects, however, introduces intrinsic zero-point spin-fluctuations\nthat tend to destabilize the local magnetic moment of interest by dwindling the\nmagnetic anisotropy potential barrier even at absolute zero temperature. Here,\nwe elucidate the origins and quantify the effect of the fundamental ingredients\ndetermining the magnitude of the fluctuations, namely the ($i$) local magnetic\nmoment, ($ii$) spin-orbit coupling and ($iii$) electron-hole Stoner\nexcitations. Based on a systematic first-principles study of 3d and 4d adatoms,\nwe demonstrate that the transverse contribution of the fluctuations is\ncomparable in size to the magnetic moment itself, leading to a remarkable\n$\\gtrsim$50$\\%$ reduction of the magnetic anisotropy energy. Our analysis gives\nrise to a comprehensible diagram relating the fluctuation magnitude to\ncharacteristic features of adatoms, providing practical guidelines for\ndesigning magnetically stable nanomagnets with minimal quantum fluctuations.\n", "title": "Zero-point spin-fluctuations of single adatoms" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 138 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We study a minimal model for the growth of a phenotypically heterogeneous\npopulation of cells subject to a fluctuating environment in which they can\nreplicate (by exploiting available resources) and modify their phenotype within\na given landscape (thereby exploring novel configurations). The model displays\nan exploration-exploitation trade-off whose specifics depend on the statistics\nof the environment. Most notably, the phenotypic distribution corresponding to\nmaximum population fitness (i.e. growth rate) requires a non-zero exploration\nrate when the magnitude of environmental fluctuations changes randomly over\ntime, while a purely exploitative strategy turns out to be optimal in two-state\nenvironments, independently of the statistics of switching times. We obtain\nanalytical insight into the limiting cases of very fast and very slow\nexploration rates by directly linking population growth to the features of the\nenvironment.\n", "title": "Exploration-exploitation tradeoffs dictate the optimal distributions of phenotypes for populations subject to fitness fluctuations" }
prediction: null | prediction_agent: null | annotation: [ "Quantitative Biology" ] | annotation_agent: null | multi_label: true | explanation: null | id: 139 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " Electronic Health Records (EHR) are data generated during routine clinical\ncare. EHR offer researchers unprecedented phenotypic breadth and depth and have\nthe potential to accelerate the pace of precision medicine at scale. A main EHR\nuse-case is creating phenotyping algorithms to define disease status, onset and\nseverity. Currently, no common machine-readable standard exists for defining\nphenotyping algorithms which often are stored in human-readable formats. As a\nresult, the translation of algorithms to implementation code is challenging and\nsharing across the scientific community is problematic. In this paper, we\nevaluate openEHR, a formal EHR data specification, for computable\nrepresentations of EHR phenotyping algorithms.\n", "title": "Evaluating openEHR for storing computable representations of electronic health record phenotyping algorithms" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 140 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null
{ "abstract": " Mission critical data dissemination in massive Internet of things (IoT)\nnetworks imposes constraints on the message transfer delay between devices. Due\nto low power and communication range of IoT devices, data is foreseen to be\nrelayed over multiple device-to-device (D2D) links before reaching the\ndestination. The coexistence of a massive number of IoT devices poses a\nchallenge in maximizing the successful transmission capacity of the overall\nnetwork alongside reducing the multi-hop transmission delay in order to support\nmission critical applications. There is a delicate interplay between the\ncarrier sensing threshold of the contention based medium access protocol and\nthe choice of packet forwarding strategy selected at each hop by the devices.\nThe fundamental problem in optimizing the performance of such networks is to\nbalance the tradeoff between conflicting performance objectives such as the\nspatial frequency reuse, transmission quality, and packet progress towards the\ndestination. In this paper, we use a stochastic geometry approach to quantify\nthe performance of multi-hop massive IoT networks in terms of the spatial\nfrequency reuse and the transmission quality under different packet forwarding\nschemes. We also develop a comprehensive performance metric that can be used to\noptimize the system to achieve the best performance. The results can be used to\nselect the best forwarding scheme and tune the carrier sensing threshold to\noptimize the performance of the network according to the delay constraints and\ntransmission quality requirements.\n", "title": "Optimizing Mission Critical Data Dissemination in Massive IoT Networks" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 141 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We develope a two-species exclusion process with a distinct pair of entry and\nexit sites for each species of rigid rods. The relatively slower forward\nstepping of the rods in an extended bottleneck region, located in between the\ntwo entry sites, controls the extent of interference of the co-directional flow\nof the two species of rods. The relative positions of the sites of entry of the\ntwo species of rods with respect to the location of the bottleneck are\nmotivated by a biological phenomenon. However, the primary focus of the study\nhere is to explore the effects of the interference of the flow of the two\nspecies of rods on their spatio-temporal organization and the regulations of\nthis interference by the extended bottleneck. By a combination of mean-field\ntheory and computer simulation we calculate the flux of both species of rods\nand their density profiles as well as the composite phase diagrams of the\nsystem. If the bottleneck is sufficiently stringent some of the phases become\npractically unrealizable although not ruled out on the basis of any fundamental\nphysical principle. Moreover the extent of suppression of flow of the\ndownstream entrants by the flow of the upstream entrants can also be regulated\nby the strength of the bottleneck. We speculate on the possible implications of\nthe results in the context of the biological phenomenon that motivated the\nformulation of the theoretical model.\n", "title": "Interference of two co-directional exclusion processes in the presence of a static bottleneck: a biologically motivated model" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 142 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We introduce a large class of random Young diagrams which can be regarded as\na natural one-parameter deformation of some classical Young diagram ensembles;\na deformation which is related to Jack polynomials and Jack characters. We show\nthat each such a random Young diagram converges asymptotically to some limit\nshape and that the fluctuations around the limit are asymptotically Gaussian.\n", "title": "Gaussian fluctuations of Jack-deformed random Young diagrams" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 143 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We explicitly compute the critical exponents associated with logarithmic\ncorrections (the so-called hatted exponents) starting from the renormalization\ngroup equations and the mean field behavior for a wide class of models at the\nupper critical behavior (for short and long range $\\phi^n$-theories) and below\nit. This allows us to check the scaling relations among these critical\nexponents obtained by analysing the complex singularities (Lee-Yang and Fisher\nzeroes) of these models. Moreover, we have obtained an explicit method to\ncompute the $\\hat{\\coppa}$ exponent [defined by $\\xi\\sim L (\\log\nL)^{\\hat{\\coppa}}$] and, finally, we have found a new derivation of the scaling\nlaw associated with it.\n", "title": "Revisiting (logarithmic) scaling relations using renormalization group" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 144 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " We obtain a Bernstein-type inequality for sums of Banach-valued random\nvariables satisfying a weak dependence assumption of general type and under\ncertain smoothness assumptions of the underlying Banach norm. We use this\ninequality in order to investigate in the asymptotical regime the error upper\nbounds for the broad family of spectral regularization methods for reproducing\nkernel decision rules, when trained on a sample coming from a $\\tau-$mixing\nprocess.\n", "title": "Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 145 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " The temperature-dependent evolution of the Kondo lattice is a long-standing\ntopic of theoretical and experimental investigation and yet it lacks a truly\nmicroscopic description of the relation of the basic $f$-$d$ hybridization\nprocesses to the fundamental temperature scales of Kondo screening and\nFermi-liquid lattice coherence. Here, the temperature-dependence of $f$-$d$\nhybridized band dispersions and Fermi-energy $f$ spectral weight in the Kondo\nlattice system CeCoIn$_5$ is investigated using $f$-resonant angle-resolved\nphotoemission (ARPES) with sufficient detail to allow direct comparison to\nfirst principles dynamical mean field theory (DMFT) calculations containing\nfull realism of crystalline electric field states. The ARPES results, for two\northogonal (001) and (100) cleaved surfaces and three different $f$-$d$\nhybridization scenarios, with additional microscopic insight provided by DMFT,\nreveal $f$ participation in the Fermi surface at temperatures much higher than\nthe lattice coherence temperature, $T^*\\approx$ 45 K, commonly believed to be\nthe onset for such behavior. The identification of a $T$-dependent crystalline\nelectric field degeneracy crossover in the DMFT theory $below$ $T^*$ is\nspecifically highlighted.\n", "title": "Evolution of the Kondo lattice electronic structure above the transport coherence temperature" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 146 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Hegarty conjectured for $n\\neq 2, 3, 5, 7$ that $\\mathbb{Z}/n\\mathbb{Z}$ has\na permutation which destroys all arithmetic progressions mod $n$. For $n\\ge\nn_0$, Hegarty and Martinsson demonstrated that $\\mathbb{Z}/n\\mathbb{Z}$ has an\narithmetic-progression destroying permutation. However $n_0\\approx 1.4\\times\n10^{14}$ and thus resolving the conjecture in full remained out of reach of any\ncomputational techniques. However, this paper using constructions modeled after\nthose used by Elkies and Swaminathan for the case of $\\mathbb{Z}/p\\mathbb{Z}$\nwith $p$ being prime, establish the conjecture in full. Furthermore our results\ndo not rely on the fact that it suffices to study when $n<n_0$ and thus our\nresults completely independent of the proof given by Hegarty and Martinsson.\n", "title": "On A Conjecture Regarding Permutations Which Destroy Arithmetic Progressions" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 147 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " An immersion $f : {\\mathcal D} \\rightarrow \\mathcal C$ between cell complexes\nis a local homeomorphism onto its image that commutes with the characteristic\nmaps of the cell complexes. We study immersions between finite-dimensional\nconnected $\\Delta$-complexes by replacing the fundamental group of the base\nspace by an appropriate inverse monoid. We show how conjugacy classes of the\nclosed inverse submonoids of this inverse monoid may be used to classify\nconnected immersions into the complex. This extends earlier results of Margolis\nand Meakin for immersions between graphs and of Meakin and Szakács on\nimmersions into $2$-dimensional $CW$-complexes.\n", "title": "Inverse monoids and immersions of cell complexes" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 148 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null
{ "abstract": " Resolving the relationship between biodiversity and ecosystem functioning has\nbeen one of the central goals of modern ecology. Early debates about the\nrelationship were finally resolved with the advent of a statistical\npartitioning scheme that decomposed the biodiversity effect into a \"selection\"\neffect and a \"complementarity\" effect. We prove that both the biodiversity\neffect and its statistical decomposition into selection and complementarity are\nfundamentally flawed because these methods use a naïve null expectation based\non neutrality, likely leading to an overestimate of the net biodiversity\neffect, and they fail to account for the nonlinear abundance-ecosystem\nfunctioning relationships observed in nature. Furthermore, under such\nnonlinearity no statistical scheme can be devised to partition the biodiversity\neffects. We also present an alternative metric providing a more reasonable\nestimate of biodiversity effect. Our results suggest that all studies conducted\nsince the early 1990s likely overestimated the positive effects of biodiversity\non ecosystem functioning.\n", "title": "Not even wrong: The spurious link between biodiversity and ecosystem functioning" }
null
null
null
null
true
null
149
null
Default
null
null
null
{ "abstract": " The principle of democracy is that the people govern through elected\nrepresentatives. Therefore, a democracy is healthy as long as the elected\npoliticians do represent the people. We have analyzed data from the Brazilian\nelectoral court (Tribunal Superior Eleitoral, TSE) concerning money donations\nfor the electoral campaigns and the election results. Our work points to two\ndisturbing conclusions: money is a determining factor on whether a candidate is\nelected or not (as opposed to representativeness); secondly, the use of\nBenford's Law to analyze the declared donations received by the parties and\nelectoral campaigns shows evidence of fraud in the declarations. A better term\nto define Brazil's government system is what we define as chrimatocracy (govern\nby money).\n", "title": "Evidence of Fraud in Brazil's Electoral Campaigns Via the Benford's Law" }
null
null
null
null
true
null
150
null
Default
null
null
null
{ "abstract": " With the increasing commoditization of computer vision, speech recognition\nand machine translation systems and the widespread deployment of learning-based\nback-end technologies such as digital advertising and intelligent\ninfrastructures, AI (Artificial Intelligence) has moved from research labs to\nproduction. These changes have been made possible by unprecedented levels of\ndata and computation, by methodological advances in machine learning, by\ninnovations in systems software and architectures, and by the broad\naccessibility of these technologies.\nThe next generation of AI systems promises to accelerate these developments\nand increasingly impact our lives via frequent interactions and making (often\nmission-critical) decisions on our behalf, often in highly personalized\ncontexts. Realizing this promise, however, raises daunting challenges. In\nparticular, we need AI systems that make timely and safe decisions in\nunpredictable environments, that are robust against sophisticated adversaries,\nand that can process ever increasing amounts of data across organizations and\nindividuals without compromising confidentiality. These challenges will be\nexacerbated by the end of the Moore's Law, which will constrain the amount of\ndata these technologies can store and process. In this paper, we propose\nseveral open research directions in systems, architectures, and security that\ncan address these challenges and help unlock AI's potential to improve lives\nand society.\n", "title": "A Berkeley View of Systems Challenges for AI" }
null
null
null
null
true
null
151
null
Default
null
null
null
{ "abstract": " We rework and generalize equivariant infinite loop space theory, which shows\nhow to construct G-spectra from G-spaces with suitable structure. There is a\nnaive version which gives naive G-spectra for any topological group G, but our\nfocus is on the construction of genuine G-spectra when G is finite.\nWe give new information about the Segal and operadic equivariant infinite\nloop space machines, supplying many details that are missing from the\nliterature, and we prove by direct comparison that the two machines give\nequivalent output when fed equivalent input. The proof of the corresponding\nnonequivariant uniqueness theorem, due to May and Thomason, works for naive\nG-spectra for general G but fails hopelessly for genuine G-spectra when G is\nfinite. Even in the nonequivariant case, our comparison theorem is considerably\nmore precise, giving a direct point-set level comparison.\nWe have taken the opportunity to update this general area, equivariant and\nnonequivariant, giving many new proofs, filling in some gaps, and giving some\ncorrections to results in the literature.\n", "title": "Equivariant infinite loop space theory, I. The space level story" }
null
null
null
null
true
null
152
null
Default
null
null
null
{ "abstract": " We prove that any open subset $U$ of a semi-simple simply connected\nquasi-split linear algebraic group $G$ with ${codim} (G\\setminus U, G)\\geq 2$\nover a number field satisfies strong approximation by establishing a fibration\nof $G$ over a toric variety. We also prove a similar result of strong\napproximation with Brauer-Manin obstruction for a partial equivariant smooth\ncompactification of a homogeneous space where all invertible functions are\nconstant and the semi-simple part of the linear algebraic group is quasi-split.\nSome semi-abelian varieties of any given dimension where the complements of a\nrational point do not satisfy strong approximation with Brauer-Manin\nobstruction are given.\n", "title": "Arithmetic purity of strong approximation for homogeneous spaces" }
null
null
null
null
true
null
153
null
Default
null
null
null
{ "abstract": " We show that nonlocal minimal cones which are non-singular subgraphs outside\nthe origin are necessarily halfspaces.\nThe proof is based on classical ideas of~\\cite{DG1} and on the computation of\nthe linearized nonlocal mean curvature operator, which is proved to satisfy a\nsuitable maximum principle.\nWith this, we obtain new, and somehow simpler, proofs of the Bernstein-type\nresults for nonlocal minimal surfaces which have been recently established\nin~\\cite{FV}. In addition, we establish a new nonlocal Bernstein-Moser-type\nresult which classifies Lipschitz nonlocal minimal subgraphs outside a ball.\n", "title": "Flatness results for nonlocal minimal cones and subgraphs" }
null
null
null
null
true
null
154
null
Default
null
null
null
{ "abstract": " Let $f_1,\\ldots,f_k : \\mathbb{N} \\rightarrow \\mathbb{C}$ be multiplicative\nfunctions taking values in the closed unit disc. Using an analytic approach in\nthe spirit of Halász' mean value theorem, we compute multidimensional\naverages of the shape $$x^{-l} \\sum_{\\mathbf{n} \\in [x]^l} \\prod_{1 \\leq j \\leq\nk} f_j(L_j(\\mathbf{n}))$$ as $x \\rightarrow \\infty$, where $[x] := [1,x]$ and\n$L_1,\\ldots, L_k$ are affine linear forms that satisfy some natural conditions.\nOur approach gives a new proof of a result of Frantzikinakis and Host that is\ndistinct from theirs, with \\emph{explicit} main and error terms. \\\\ As an\napplication of our formulae, we establish a \\emph{local-to-global} principle\nfor Gowers norms of multiplicative functions. We also compute the asymptotic\ndensities of the sets of integers $n$ such that a given multiplicative function\n$f: \\mathbb{N} \\rightarrow \\{-1, 1\\}$ yields a fixed sign pattern of length 3\nor 4 on almost all 3- and 4-term arithmetic progressions, respectively, with\nfirst term $n$.\n", "title": "Effective Asymptotic Formulae for Multilinear Averages of Multiplicative Functions" }
null
null
null
null
true
null
155
null
Default
null
null
null
{ "abstract": " The apparent gas permeability of the porous medium is an important parameter\nin the prediction of unconventional gas production, which was first\ninvestigated systematically by Klinkenberg in 1941 and found to increase with\nthe reciprocal mean gas pressure (or equivalently, the Knudsen number).\nAlthough the underlying rarefaction effects are well-known, the reason that the\ncorrection factor in Klinkenberg's famous equation decreases when the Knudsen\nnumber increases has not been fully understood. Most of the studies idealize\nthe porous medium as a bundle of straight cylindrical tubes, however, according\nto the gas kinetic theory, this only results in an increase of the correction\nfactor with the Knudsen number, which clearly contradicts Klinkenberg's\nexperimental observations. Here, by solving the Bhatnagar-Gross-Krook equation\nin simplified (but not simple) porous media, we identify, for the first time,\ntwo key factors that can explain Klinkenberg's experimental results: the\ntortuous flow path and the non-unitary tangential momentum accommodation\ncoefficient for the gas-surface interaction. Moreover, we find that\nKlinkenberg's results can only be observed when the ratio between the apparent\nand intrinsic permeabilities is $\\lesssim30$; at large ratios (or Knudsen\nnumbers) the correction factor increases with the Knudsen number. Our numerical\nresults could also serve as benchmarking cases to assess the accuracy of\nmacroscopic models and/or numerical schemes for the modeling/simulation of\nrarefied gas flows in complex geometries over a wide range of gas rarefaction.\n", "title": "On the apparent permeability of porous media in rarefied gas flows" }
null
null
null
null
true
null
156
null
Default
null
null
null
{ "abstract": " In previous papers, threshold probabilities for the properties of a random\ndistance graph to contain strictly balanced graphs were found. We extend this\nresult to arbitrary graphs and prove that the number of copies of a strictly\nbalanced graph has asymptotically Poisson distribution at the threshold.\n", "title": "Small subgraphs and their extensions in a random distance graph" }
null
null
null
null
true
null
157
null
Default
null
null
null
{ "abstract": " Runtime enforcement can be effectively used to improve the reliability of\nsoftware applications. However, it often requires the definition of ad hoc\npolicies and enforcement strategies, which might be expensive to identify and\nimplement. This paper discusses how to exploit lifecycle events to obtain\nuseful enforcement strategies that can be easily reused across applications,\nthus reducing the cost of adoption of the runtime enforcement technology. The\npaper finally sketches how this idea can be used to define libraries that can\nautomatically overcome problems related to applications misusing them.\n", "title": "Increasing the Reusability of Enforcers with Lifecycle Events" }
null
null
null
null
true
null
158
null
Default
null
null
null
{ "abstract": " The atomic norm provides a generalization of the $\\ell_1$-norm to continuous\nparameter spaces. When applied as a sparse regularizer for line spectral\nestimation the solution can be obtained by solving a convex optimization\nproblem. This problem is known as atomic norm soft thresholding (AST). It can\nbe cast as a semidefinite program and solved by standard methods. In the\nsemidefinite formulation there are $O(N^2)$ dual variables and a standard\nprimal-dual interior point method requires at least $O(N^6)$ flops per\niteration. That has lead researcher to consider alternating direction method of\nmultipliers (ADMM) for the solution of AST, but this method is still somewhat\nslow for large problem sizes. To obtain a faster algorithm we reformulate AST\nas a non-symmetric conic program. That has two properties of key importance to\nits numerical solution: the conic formulation has only $O(N)$ dual variables\nand the Toeplitz structure inherent to AST is preserved. Based on it we derive\nFastAST which is a primal-dual interior point method for solving AST. Two\nvariants are considered with the fastest one requiring only $O(N^2)$ flops per\niteration. Extensive numerical experiments demonstrate that FastAST solves AST\nsignificantly faster than a state-of-the-art solver based on ADMM.\n", "title": "A Fast Interior Point Method for Atomic Norm Soft Thresholding" }
null
null
null
null
true
null
159
null
Default
null
null
null
{ "abstract": " We study the problem of causal structure learning over a set of random\nvariables when the experimenter is allowed to perform at most $M$ experiments\nin a non-adaptive manner. We consider the optimal learning strategy in terms of\nminimizing the portions of the structure that remains unknown given the limited\nnumber of experiments in both Bayesian and minimax setting. We characterize the\ntheoretical optimal solution and propose an algorithm, which designs the\nexperiments efficiently in terms of time complexity. We show that for bounded\ndegree graphs, in the minimax case and in the Bayesian case with uniform\npriors, our proposed algorithm is a $\\rho$-approximation algorithm, where\n$\\rho$ is independent of the order of the underlying graph. Simulations on both\nsynthetic and real data show that the performance of our algorithm is very\nclose to the optimal solution.\n", "title": "Optimal Experiment Design for Causal Discovery from Fixed Number of Experiments" }
null
null
null
null
true
null
160
null
Default
null
null
null
{ "abstract": " We present a novel data-driven nested optimization framework that addresses\nthe problem of coupling between plant and controller optimization. This\noptimization strategy is tailored towards instances where a closed-form\nexpression for the system dynamic response is unobtainable and simulations or\nexperiments are necessary. Specifically, Bayesian Optimization, which is a\ndata-driven technique for finding the optimum of an unknown and\nexpensive-to-evaluate objective function, is employed to solve a nested\noptimization problem. The underlying objective function is modeled by a\nGaussian Process (GP); then, Bayesian Optimization utilizes the predictive\nuncertainty information from the GP to determine the best subsequent control or\nplant parameters. The proposed framework differs from the majority of co-design\nliterature where there exists a closed-form model of the system dynamics.\nFurthermore, we utilize the idea of Batch Bayesian Optimization at the plant\noptimization level to generate a set of plant designs at each iteration of the\noverall optimization process, recognizing that there will exist economies of\nscale in running multiple experiments in each iteration of the plant design\nprocess. We validate the proposed framework for a Buoyant Airborne Turbine\n(BAT). We choose the horizontal stabilizer area, longitudinal center of mass\nrelative to center of buoyancy (plant parameters), and the pitch angle\nset-point (controller parameter) as our decision variables. Our results\ndemonstrate that these plant and control parameters converge to their\nrespective optimal values within only a few iterations.\n", "title": "Economically Efficient Combined Plant and Controller Design Using Batch Bayesian Optimization: Mathematical Framework and Airborne Wind Energy Case Study" }
null
null
null
null
true
null
161
null
Default
null
null
null
{ "abstract": " We explore the topological properties of quantum spin-1/2 chains with two\nIsing symmetries. This class of models does not possess any of the symmetries\nthat are required to protect the Haldane phase. Nevertheless, we show that\nthere are 4 symmetry-protected topological phases, in addition to 6 phases that\nspontaneously break one or both Ising symmetries. By mapping the model to\none-dimensional interacting fermions with particle-hole and time-reversal\nsymmetry, we obtain integrable parent Hamiltonians for the conventional and\ntopological phases of the spin model. We use these Hamiltonians to characterize\nthe physical properties of all 10 phases, identify their local and nonlocal\norder parameters, and understand the effects of weak perturbations that respect\nthe Ising symmetries. Our study provides the first explicit example of a class\nof spin chains with several topologically non-trivial phases, and binds\ntogether the topological classifications of interacting bosons and fermions.\n", "title": "The 10 phases of spin chains with two Ising symmetries" }
null
null
null
null
true
null
162
null
Default
null
null
null
{ "abstract": " Most of the codes that have an algebraic decoding algorithm are derived from\nthe Reed Solomon codes. They are obtained by taking equivalent codes, for\nexample the generalized Reed Solomon codes, or by using the so-called subfield\nsubcode method, which leads to Alternant codes and Goppa codes over the\nunderlying prime field, or over some intermediate subfield. The main advantages\nof these constructions is to preserve both the minimum distance and the\ndecoding algorithm of the underlying Reed Solomon code. In this paper, we\npropose a generalization of the subfield subcode construction by introducing\nthe notion of subspace subcodes and a generalization of the equivalence of\ncodes which leads to the notion of generalized subspace subcodes. When the\ndimension of the selected subspaces is equal to one, we show that our approach\ngives exactly the family of the codes obtained by equivalence and subfield\nsubcode technique. However, our approach highlights the links between the\nsubfield subcode of a code defined over an extension field and the operation of\npuncturing the $q$-ary image of this code. When the dimension of the subspaces\nis greater than one, we obtain codes whose alphabet is no longer a finite\nfield, but a set of r-uples. We explain why these codes are practically as\nefficient for applications as the codes defined on an extension of degree r. In\naddition, they make it possible to obtain decodable codes over a large alphabet\nhaving parameters previously inaccessible. As an application, we give some\nexamples that can be used in public key cryptosystems such as McEliece.\n", "title": "Generalized subspace subcodes with application in cryptology" }
null
null
null
null
true
null
163
null
Default
null
null
null
{ "abstract": " Motivated by the study of Nishinou-Nohara-Ueda on the Floer thoery of\nGelfand-Cetlin systems over complex partial flag manifolds, we provide a\ncomplete description of the topology of Gelfand-Cetlin fibers. We prove that\nall fibers are \\emph{smooth} isotropic submanifolds and give a complete\ndescription of the fiber to be Lagrangian in terms of combinatorics of\nGelfand-Cetlin polytope. Then we study (non-)displaceability of Lagrangian\nfibers. After a few combinatorial and numercal tests for the displaceability,\nusing the bulk-deformation of Floer cohomology by Schubert cycles, we prove\nthat every full flag manifold $\\mathcal{F}(n)$ ($n \\geq 3$) with a monotone\nKirillov-Kostant-Souriau symplectic form carries a continuum of\nnon-displaceable Lagrangian tori which degenerates to a non-torus fiber in the\nHausdorff limit. In particular, the Lagrangian $S^3$-fiber in $\\mathcal{F}(3)$\nis non-displaceable the question of which was raised by Nohara-Ueda who\ncomputed its Floer cohomology to be vanishing.\n", "title": "Lagrangian fibers of Gelfand-Cetlin systems" }
null
null
null
null
true
null
164
null
Default
null
null
null
{ "abstract": " Ensemble data assimilation methods such as the Ensemble Kalman Filter (EnKF)\nare a key component of probabilistic weather forecasting. They represent the\nuncertainty in the initial conditions by an ensemble which incorporates\ninformation coming from the physical model with the latest observations.\nHigh-resolution numerical weather prediction models ran at operational centers\nare able to resolve non-linear and non-Gaussian physical phenomena such as\nconvection. There is therefore a growing need to develop ensemble assimilation\nalgorithms able to deal with non-Gaussianity while staying computationally\nfeasible. In the present paper we address some of these needs by proposing a\nnew hybrid algorithm based on the Ensemble Kalman Particle Filter. It is fully\nformulated in ensemble space and uses a deterministic scheme such that it has\nthe ensemble transform Kalman filter (ETKF) instead of the stochastic EnKF as a\nlimiting case. A new criterion for choosing the proportion of particle filter\nand ETKF update is also proposed. The new algorithm is implemented in the COSMO\nframework and numerical experiments in a quasi-operational convective-scale\nsetup are conducted. The results show the feasibility of the new algorithm in\npractice and indicate a strong potential for such local hybrid methods, in\nparticular for forecasting non-Gaussian variables such as wind and hourly\nprecipitation.\n", "title": "A local ensemble transform Kalman particle filter for convective scale data assimilation" }
null
null
null
null
true
null
165
null
Default
null
null
null
{ "abstract": " In this paper, we consider the Tensor Robust Principal Component Analysis\n(TRPCA) problem, which aims to exactly recover the low-rank and sparse\ncomponents from their sum. Our model is based on the recently proposed\ntensor-tensor product (or t-product) [13]. Induced by the t-product, we first\nrigorously deduce the tensor spectral norm, tensor nuclear norm, and tensor\naverage rank, and show that the tensor nuclear norm is the convex envelope of\nthe tensor average rank within the unit ball of the tensor spectral norm. These\ndefinitions, their relationships and properties are consistent with matrix\ncases. Equipped with the new tensor nuclear norm, we then solve the TRPCA\nproblem by solving a convex program and provide the theoretical guarantee for\nthe exact recovery. Our TRPCA model and recovery guarantee include matrix RPCA\nas a special case. Numerical experiments verify our results, and the\napplications to image recovery and background modeling problems demonstrate the\neffectiveness of our method.\n", "title": "Tensor Robust Principal Component Analysis with A New Tensor Nuclear Norm" }
null
null
null
null
true
null
166
null
Default
null
null
null
{ "abstract": " Galaxies in the local Universe are known to follow bimodal distributions in\nthe global stellar populations properties. We analyze the distribution of the\nlocal average stellar-population ages of 654,053 sub-galactic regions resolved\non ~1-kpc scales in a volume-corrected sample of 394 galaxies, drawn from the\nCALIFA-DR3 integral-field-spectroscopy survey and complemented by SDSS imaging.\nWe find a bimodal local-age distribution, with an old and a young peak\nprimarily due to regions in early-type galaxies and star-forming regions of\nspirals, respectively. Within spiral galaxies, the older ages of bulges and\ninter-arm regions relative to spiral arms support an internal age bimodality.\nAlthough regions of higher stellar-mass surface-density, mu*, are typically\nolder, mu* alone does not determine the stellar population age and a bimodal\ndistribution is found at any fixed mu*. We identify an \"old ridge\" of regions\nof age ~9 Gyr, independent of mu*, and a \"young sequence\" of regions with age\nincreasing with mu* from 1-1.5 Gyr to 4-5 Gyr. We interpret the former as\nregions containing only old stars, and the latter as regions where the relative\ncontamination of old stellar populations by young stars decreases as mu*\nincreases. The reason why this bimodal age distribution is not inconsistent\nwith the unimodal shape of the cosmic-averaged star-formation history is that\ni) the dominating contribution by young stars biases the age low with respect\nto the average epoch of star formation, and ii) the use of a single average age\nper region is unable to represent the full time-extent of the star-formation\nhistory of \"young-sequence\" regions.\n", "title": "Resolving the age bimodality of galaxy stellar populations on kpc scales" }
null
null
null
null
true
null
167
null
Default
null
null
null
{ "abstract": " We introduce a minimal model for the evolution of functional\nprotein-interaction networks using a sequence-based mutational algorithm, and\napply the model to study neutral drift in networks that yield oscillatory\ndynamics. Starting with a functional core module, random evolutionary drift\nincreases network complexity even in the absence of specific selective\npressures. Surprisingly, we uncover a hidden order in sequence space that gives\nrise to long-term evolutionary memory, implying strong constraints on network\nevolution due to the topology of accessible sequence space.\n", "title": "Hidden long evolutionary memory in a model biochemical network" }
null
null
null
null
true
null
168
null
Default
null
null
null
{ "abstract": " The handwritten string recognition is still a challengeable task, though the\npowerful deep learning tools were introduced. In this paper, based on TAO-FCN,\nwe proposed an end-to-end system for handwritten string recognition. Compared\nwith the conventional methods, there is no preprocess nor manually designed\nrules employed. With enough labelled data, it is easy to apply the proposed\nmethod to different applications. Although the performance of the proposed\nmethod may not be comparable with the state-of-the-art approaches, it's\nusability and robustness are more meaningful for practical applications.\n", "title": "On Study of the Reliable Fully Convolutional Networks with Tree Arranged Outputs (TAO-FCN) for Handwritten String Recognition" }
null
null
[ "Computer Science" ]
null
true
null
169
null
Validated
null
null
null
{ "abstract": " We note that the necessary and sufficient conditions established by Marcel\nRiesz for the inclusion of regular Nörlund summation methods are in fact\napplicable quite generally.\n", "title": "Marcel Riesz on Nörlund Means" }
null
null
null
null
true
null
170
null
Default
null
null
null
{ "abstract": " These lectures notes were written for a summer school on Mathematics for\npost-quantum cryptography in Thiès, Senegal. They try to provide a guide for\nMasters' students to get through the vast literature on elliptic curves,\nwithout getting lost on their way to learning isogeny based cryptography. They\nare by no means a reference text on the theory of elliptic curves, nor on\ncryptography; students are encouraged to complement these notes with some of\nthe books recommended in the bibliography.\nThe presentation is divided in three parts, roughly corresponding to the\nthree lectures given. In an effort to keep the reader interested, each part\nalternates between the fundamental theory of elliptic curves, and applications\nin cryptography. We often prefer to have the main ideas flow smoothly, rather\nthan having a rigorous presentation as one would have in a more classical book.\nThe reader will excuse us for the inaccuracies and the omissions.\n", "title": "Mathematics of Isogeny Based Cryptography" }
null
null
null
null
true
null
171
null
Default
null
null
null
{ "abstract": " It has been shown recently that changing the fluidic properties of a drug can\nimprove its efficacy in ablating solid tumors. We develop a modeling framework\nfor tumor ablation, and present the simplest possible model for drug diffusion\nin a spherical tumor with leaky boundaries and assuming cell death eventually\nleads to ablation of that cell effectively making the two quantities\nnumerically equivalent. The death of a cell after a given exposure time depends\non both the concentration of the drug and the amount of oxygen available to the\ncell. Higher oxygen availability leads to cell death at lower drug\nconcentrations. It can be assumed that a minimum concentration is required for\na cell to die, effectively connecting diffusion with efficacy. The\nconcentration threshold decreases as exposure time increases, which allows us\nto compute dose-response curves. Furthermore, these curves can be plotted at\nmuch finer time intervals compared to that of experiments, which is used to\nproduce a dose-threshold-response surface giving an observer a complete picture\nof the drug's efficacy for an individual. In addition, since the diffusion,\nleak coefficients, and the availability of oxygen is different for different\nindividuals and tumors, we produce artificial replication data through\nbootstrapping to simulate error. While the usual data-driven model with\nSigmoidal curves use 12 free parameters, our mechanistic model only has two\nfree parameters, allowing it to be open to scrutiny rather than forcing\nagreement with data. Even so, the simplest model in our framework, derived\nhere, shows close agreement with the bootstrapped curves, and reproduces well\nestablished relations, such as Haber's rule.\n", "title": "Modeling of drug diffusion in a solid tumor leading to tumor cell death" }
null
null
null
null
true
null
172
null
Default
null
null
null
{ "abstract": " To identify the estimand in missing data problems and observational studies,\nit is common to base the statistical estimation on the \"missing at random\" and\n\"no unmeasured confounder\" assumptions. However, these assumptions are\nunverifiable using empirical data and pose serious threats to the validity of\nthe qualitative conclusions of the statistical inference. A sensitivity\nanalysis asks how the conclusions may change if the unverifiable assumptions\nare violated to a certain degree. In this paper we consider a marginal\nsensitivity model which is a natural extension of Rosenbaum's sensitivity model\nthat is widely used for matched observational studies. We aim to construct\nconfidence intervals based on inverse probability weighting estimators, such\nthat asymptotically the intervals have at least nominal coverage of the\nestimand whenever the data generating distribution is in the collection of\nmarginal sensitivity models. We use a percentile bootstrap and a generalized\nminimax/maximin inequality to transform this intractable problem to a linear\nfractional programming problem, which can be solved very efficiently. We\nillustrate our method using a real dataset to estimate the causal effect of\nfish consumption on blood mercury level.\n", "title": "Sensitivity analysis for inverse probability weighting estimators via the percentile bootstrap" }
null
null
null
null
true
null
173
null
Default
null
null
null
{ "abstract": " In this paper, we provide an analysis of self-organized network management,\nwith an end-to-end perspective of the network. Self-organization as applied to\ncellular networks is usually referred to Self-organizing Networks (SONs), and\nit is a key driver for improving Operations, Administration, and Maintenance\n(OAM) activities. SON aims at reducing the cost of installation and management\nof 4G and future 5G networks, by simplifying operational tasks through the\ncapability to configure, optimize and heal itself. To satisfy 5G network\nmanagement requirements, this autonomous management vision has to be extended\nto the end to end network. In literature and also in some instances of products\navailable in the market, Machine Learning (ML) has been identified as the key\ntool to implement autonomous adaptability and take advantage of experience when\nmaking decisions. In this paper, we survey how network management can\nsignificantly benefit from ML solutions. We review and provide the basic\nconcepts and taxonomy for SON, network management and ML. We analyse the\navailable state of the art in the literature, standardization, and in the\nmarket. We pay special attention to 3rd Generation Partnership Project (3GPP)\nevolution in the area of network management and to the data that can be\nextracted from 3GPP networks, in order to gain knowledge and experience in how\nthe network is working, and improve network performance in a proactive way.\nFinally, we go through the main challenges associated with this line of\nresearch, in both 4G and in what 5G is getting designed, while identifying new\ndirections for research.\n", "title": "From 4G to 5G: Self-organized Network Management meets Machine Learning" }
null
null
null
null
true
null
174
null
Default
null
null
null
{ "abstract": " Understanding smart grid cyber attacks is key for developing appropriate\nprotection and recovery measures. Advanced attacks pursue maximized impact at\nminimized costs and detectability. This paper conducts risk analysis of\ncombined data integrity and availability attacks against the power system state\nestimation. We compare the combined attacks with pure integrity attacks - false\ndata injection (FDI) attacks. A security index for vulnerability assessment to\nthese two kinds of attacks is proposed and formulated as a mixed integer linear\nprogramming problem. We show that such combined attacks can succeed with fewer\nresources than FDI attacks. The combined attacks with limited knowledge of the\nsystem model also expose advantages in keeping stealth against the bad data\ndetection. Finally, the risk of combined attacks to reliable system operation\nis evaluated using the results from vulnerability assessment and attack impact\nanalysis. The findings in this paper are validated and supported by a detailed\ncase study.\n", "title": "Cyber Risk Analysis of Combined Data Attacks Against Power System State Estimation" }
null
null
null
null
true
null
175
null
Default
null
null
null
{ "abstract": " We propose a family of near-metrics based on local graph diffusion to capture\nsimilarity for a wide class of data sets. These quasi-metametrics, as their\nnames suggest, dispense with one or two standard axioms of metric spaces,\nspecifically distinguishability and symmetry, so that similarity between data\npoints of arbitrary type and form could be measured broadly and effectively.\nThe proposed near-metric family includes the forward k-step diffusion and its\nreverse, typically on the graph consisting of data objects and their features.\nBy construction, this family of near-metrics is particularly appropriate for\ncategorical data, continuous data, and vector representations of images and\ntext extracted via deep learning approaches. We conduct extensive experiments\nto evaluate the performance of this family of similarity measures and compare\nand contrast with traditional measures of similarity used for each specific\napplication and with the ground truth when available. We show that for\nstructured data including categorical and continuous data, the near-metrics\ncorresponding to normalized forward k-step diffusion (k small) work as one of\nthe best performing similarity measures; for vector representations of text and\nimages including those extracted from deep learning, the near-metrics derived\nfrom normalized and reverse k-step graph diffusion (k very small) exhibit\noutstanding ability to distinguish data points from different classes.\n", "title": "A New Family of Near-metrics for Universal Similarity" }
null
null
null
null
true
null
176
null
Default
null
null
null
{ "abstract": " Recommender system is an important component of many web services to help\nusers locate items that match their interests. Several studies showed that\nrecommender systems are vulnerable to poisoning attacks, in which an attacker\ninjects fake data to a given system such that the system makes recommendations\nas the attacker desires. However, these poisoning attacks are either agnostic\nto recommendation algorithms or optimized to recommender systems that are not\ngraph-based. Like association-rule-based and matrix-factorization-based\nrecommender systems, graph-based recommender system is also deployed in\npractice, e.g., eBay, Huawei App Store. However, how to design optimized\npoisoning attacks for graph-based recommender systems is still an open problem.\nIn this work, we perform a systematic study on poisoning attacks to graph-based\nrecommender systems. Due to limited resources and to avoid detection, we assume\nthe number of fake users that can be injected into the system is bounded. The\nkey challenge is how to assign rating scores to the fake users such that the\ntarget item is recommended to as many normal users as possible. To address the\nchallenge, we formulate the poisoning attacks as an optimization problem,\nsolving which determines the rating scores for the fake users. We also propose\ntechniques to solve the optimization problem. We evaluate our attacks and\ncompare them with existing attacks under white-box (recommendation algorithm\nand its parameters are known), gray-box (recommendation algorithm is known but\nits parameters are unknown), and black-box (recommendation algorithm is\nunknown) settings using two real-world datasets. Our results show that our\nattack is effective and outperforms existing attacks for graph-based\nrecommender systems. For instance, when 1% fake users are injected, our attack\ncan make a target item recommended to 580 times more normal users in certain\nscenarios.\n", "title": "Poisoning Attacks to Graph-Based Recommender Systems" }
null
null
null
null
true
null
177
null
Default
null
null
null
{ "abstract": " This paper describes the Stockholm University/University of Groningen\n(SU-RUG) system for the SIGMORPHON 2017 shared task on morphological\ninflection. Our system is based on an attentional sequence-to-sequence neural\nnetwork model using Long Short-Term Memory (LSTM) cells, with joint training of\nmorphological inflection and the inverse transformation, i.e. lemmatization and\nmorphological analysis. Our system outperforms the baseline with a large\nmargin, and our submission ranks as the 4th best team for the track we\nparticipate in (task 1, high-resource).\n", "title": "SU-RUG at the CoNLL-SIGMORPHON 2017 shared task: Morphological Inflection with Attentional Sequence-to-Sequence Models" }
null
null
null
null
true
null
178
null
Default
null
null
null
{ "abstract": " Neuroscientists classify neurons into different types that perform similar\ncomputations at different locations in the visual field. Traditional methods\nfor neural system identification do not capitalize on this separation of 'what'\nand 'where'. Learning deep convolutional feature spaces that are shared among\nmany neurons provides an exciting path forward, but the architectural design\nneeds to account for data limitations: While new experimental techniques enable\nrecordings from thousands of neurons, experimental time is limited so that one\ncan sample only a small fraction of each neuron's response space. Here, we show\nthat a major bottleneck for fitting convolutional neural networks (CNNs) to\nneural data is the estimation of the individual receptive field locations, a\nproblem that has been scratched only at the surface thus far. We propose a CNN\narchitecture with a sparse readout layer factorizing the spatial (where) and\nfeature (what) dimensions. Our network scales well to thousands of neurons and\nshort recordings and can be trained end-to-end. We evaluate this architecture\non ground-truth data to explore the challenges and limitations of CNN-based\nsystem identification. Moreover, we show that our network model outperforms\ncurrent state-of-the art system identification models of mouse primary visual\ncortex.\n", "title": "Neural system identification for large populations separating \"what\" and \"where\"" }
null
null
null
null
true
null
179
null
Default
null
null
null
{ "abstract": " The extremely low efficiency is regarded as the bottleneck of Wireless Power\nTransfer (WPT) technology. To tackle this problem, either enlarging the\ntransfer power or changing the infrastructure of WPT system could be an\nintuitively proposed way. However, the drastically important issue on the user\nexposure of electromagnetic radiation is rarely considered while we try to\nimprove the efficiency of WPT. In this paper, a Distributed Antenna Power\nBeacon (DA-PB) based WPT system where these antennas are uniformly distributed\non a circle is analyzed and optimized with the safety electromagnetic radiation\nlevel (SERL) requirement. In this model, three key questions are intended to be\nanswered: 1) With the SERL, what is the performance of the harvested power at\nthe users ? 2) How do we configure the parameters to maximize the efficiency of\nWPT? 3) Under the same constraints, does the DA-PB still have performance gain\nthan the Co-located Antenna PB (CA-PB)? First, the minimum antenna height of\nDA-PB is derived to make the radio frequency (RF) electromagnetic radiation\npower density at any location of the charging cell lower than the SERL\npublished by the Federal Communications Commission (FCC). Second, the\nclosed-form expressions of average harvested Direct Current (DC) power per user\nin the charging cell for pass-loss exponent 2 and 4 are also provided. In order\nto maximize the average efficiency of WPT, the optimal radii for distributed\nantennas elements (DAEs) are derived when the pass-loss exponent takes the\ntypical value $2$ and $4$. For comparison, the CA-PB is also analyzed as a\nbenchmark. Simulation results verify our derived theoretical results. And it is\nshown that the proposed DA-PB indeed achieves larger average harvested DC power\nthan CA-PB and can improve the efficiency of WPT.\n", "title": "On the Deployment of Distributed Antennas for Wireless Power Transfer with Safety Electromagnetic Radiation Level Requirement" }
null
null
[ "Computer Science" ]
null
true
null
180
null
Validated
null
null
null
{ "abstract": " A numerical method for particle-laden fluids interacting with a deformable\nsolid domain and mobile rigid parts is proposed and implemented in a full\nengineering system. The fluid domain is modeled with a lattice Boltzmann\nrepresentation, the particles and rigid parts are modeled with a discrete\nelement representation, and the deformable solid domain is modeled using a\nLagrangian mesh. The main issue of this work, since separately each of these\nmethods is a mature tool, is to develop coupling and model-reduction approaches\nin order to efficiently simulate coupled problems of this nature, as occur in\nvarious geological and engineering applications. The lattice Boltzmann method\nincorporates a large-eddy simulation technique using the Smagorinsky turbulence\nmodel. The discrete element method incorporates spherical and polyhedral\nparticles for stiff contact interactions. A neo-Hookean hyperelastic model is\nused for the deformable solid. We provide a detailed description of how to\ncouple the three solvers within a unified algorithm. The technique we propose\nfor rubber modeling/coupling exploits a simplification that prevents having to\nsolve a finite-element problem each time step. We also develop a technique to\nreduce the domain size of the full system by replacing certain zones with\nquasi-analytic solutions, which act as effective boundary conditions for the\nlattice Boltzmann method. The major ingredients of the routine are are\nseparately validated. To demonstrate the coupled method in full, we simulate\nslurry flows in two kinds of piston-valve geometries. The dynamics of the valve\nand slurry are studied and reported over a large range of input parameters.\n", "title": "A simulation technique for slurries interacting with moving parts and deformable solids with applications" }
null
null
null
null
true
null
181
null
Default
null
null
null
{ "abstract": " We construct a Schwinger-Keldysh effective field theory for relativistic\nhydrodynamics for charged matter in a thermal background using a superspace\nformalism. Superspace allows us to efficiently impose the symmetries of the\nproblem and to obtain a simple expression for the effective action. We show\nthat the theory we obtain is compatible with the Kubo-Martin-Schwinger\ncondition, which in turn implies that Green's functions obey the\nfluctuation-dissipation theorem. Our approach complements and extends existing\nformulations found in the literature.\n", "title": "Dissipative hydrodynamics in superspace" }
null
null
null
null
true
null
182
null
Default
null
null
null
{ "abstract": " Observables have a dual nature in both classical and quantum kinematics: they\nare at the same time \\emph{quantities}, allowing to separate states by means of\ntheir numerical values, and \\emph{generators of transformations}, establishing\nrelations between different states. In this work, we show how this two-fold\nrole of observables constitutes a key feature in the conceptual analysis of\nclassical and quantum kinematics, shedding a new light on the distinguishing\nfeature of the quantum at the kinematical level. We first take a look at the\nalgebraic description of both classical and quantum observables in terms of\nJordan-Lie algebras and show how the two algebraic structures are the precise\nmathematical manifestation of the two-fold role of observables. Then, we turn\nto the geometric reformulation of quantum kinematics in terms of Kähler\nmanifolds. A key achievement of this reformulation is to show that the two-fold\nrole of observables is the constitutive ingredient defining what an observable\nis. Moreover, it points to the fact that, from the restricted point of view of\nthe transformational role of observables, classical and quantum kinematics\nbehave in exactly the same way. Finally, we present Landsman's general\nframework of Poisson spaces with transition probability, which highlights with\nunmatched clarity that the crucial difference between the two kinematics lies\nin the way the two roles of observables are related to each other.\n", "title": "The Two-fold Role of Observables in Classical and Quantum Kinematics" }
null
null
null
null
true
null
183
null
Default
null
null
null
{ "abstract": " Let $(M,g)$ be a smooth compact Riemannian manifold of dimension $n$ with\nsmooth boundary $\\partial M$. Suppose that $(M,g)$ admits a scalar-flat\nconformal metric. We prove that the supremum of the isoperimetric quotient over\nthe scalar-flat conformal class is strictly larger than the best constant of\nthe isoperimetric inequality in the Euclidean space, and consequently is\nachieved, if either (i) $n\\ge 12$ and $\\partial M$ has a nonumbilic point; or\n(ii) $n\\ge 10$, $\\partial M$ is umbilic and the Weyl tensor does not vanish at\nsome boundary point.\n", "title": "On the isoperimetric quotient over scalar-flat conformal classes" }
null
null
null
null
true
null
184
null
Default
null
null
null
{ "abstract": " Random feature maps are ubiquitous in modern statistical machine learning,\nwhere they generalize random projections by means of powerful, yet often\ndifficult to analyze nonlinear operators. In this paper, we leverage the\n\"concentration\" phenomenon induced by random matrix theory to perform a\nspectral analysis on the Gram matrix of these random feature maps, here for\nGaussian mixture models of simultaneously large dimension and size. Our results\nare instrumental to a deeper understanding on the interplay of the nonlinearity\nand the statistics of the data, thereby allowing for a better tuning of random\nfeature-based techniques.\n", "title": "On the Spectrum of Random Features Maps of High Dimensional Data" }
null
null
null
null
true
null
185
null
Default
null
null
null
{ "abstract": " The calculation of minimum energy paths for transitions such as atomic and/or\nspin re-arrangements is an important task in many contexts and can often be\nused to determine the mechanism and rate of transitions. An important challenge\nis to reduce the computational effort in such calculations, especially when ab\ninitio or electron density functional calculations are used to evaluate the\nenergy since they can require large computational effort. Gaussian process\nregression is used here to reduce significantly the number of energy\nevaluations needed to find minimum energy paths of atomic rearrangements. By\nusing results of previous calculations to construct an approximate energy\nsurface and then converge to the minimum energy path on that surface in each\nGaussian process iteration, the number of energy evaluations is reduced\nsignificantly as compared with regular nudged elastic band calculations. For a\ntest problem involving rearrangements of a heptamer island on a crystal\nsurface, the number of energy evaluations is reduced to less than a fifth. The\nscaling of the computational effort with the number of degrees of freedom as\nwell as various possible further improvements to this approach are discussed.\n", "title": "Minimum energy path calculations with Gaussian process regression" }
null
null
null
null
true
null
186
null
Default
null
null
null
{ "abstract": " Social media has changed the ways of communication, where everyone is\nequipped with the power to express their opinions to others in online\ndiscussion platforms. Previously, a number of stud- ies have been presented to\nidentify opinion leaders in online discussion networks. Feng (\"Are you\nconnected? Evaluating information cascade in online discussion about the\n#RaceTogether campaign\", Computers in Human Behavior, 2016) identified five\ntypes of central users and their communication patterns in an online\ncommunication network of a limited time span. However, to trace the change in\ncommunication pattern, a long-term analysis is required. In this study, we\ncritically analyzed framework presented by Feng based on five types of central\nusers in online communication network and their communication pattern in a\nlong-term manner. We take another case study presented by Udnor et al.\n(\"Determining social media impact on the politics of developing countries using\nsocial network analytics\", Program, 2016) to further understand the dynamics as\nwell as to perform validation . Results indicate that there may not exist all\nof these central users in an online communication network in a long-term\nmanner. Furthermore, we discuss the changing positions of opinion leaders and\ntheir power to keep isolates interested in an online discussion network.\n", "title": "Evaluating Roles of Central Users in Online Communication Networks: A Case Study of #PanamaLeaks" }
null
null
null
null
true
null
187
null
Default
null
null
null
{ "abstract": " Let $E_n(f)_{\\alpha,\\beta,\\gamma}$ denote the error of best approximation by\npolynomials of degree at most $n$ in the space\n$L^2(\\varpi_{\\alpha,\\beta,\\gamma})$ on the triangle $\\{(x,y): x, y \\ge 0, x+y\n\\le 1\\}$, where $\\varpi_{\\alpha,\\beta,\\gamma}(x,y) := x^\\alpha y ^\\beta\n(1-x-y)^\\gamma$ for $\\alpha,\\beta,\\gamma > -1$. Our main result gives a sharp\nestimate of $E_n(f)_{\\alpha,\\beta,\\gamma}$ in terms of the error of best\napproximation for higher order derivatives of $f$ in appropriate Sobolev\nspaces. The result also leads to a characterization of\n$E_n(f)_{\\alpha,\\beta,\\gamma}$ by a weighted $K$-functional.\n", "title": "Best polynomial approximation on the triangle" }
null
null
null
null
true
null
188
null
Default
null
null
null
{ "abstract": " Due to the increasing dependency of critical infrastructure on synchronized\nclocks, network time synchronization protocols have become an attractive target\nfor attackers. We identify data origin authentication as the key security\nobjective and suggest to employ recently proposed high-performance digital\nsignature schemes (Ed25519 and MQQ-SIG)) as foundation of a novel set of\nsecurity measures to secure multicast time synchronization. We conduct\nexperiments to verify the computational and communication efficiency for using\nthese signatures in the standard time synchronization protocols NTP and PTP. We\npropose additional security measures to prevent replay attacks and to mitigate\ndelay attacks. Our proposed solutions cover 1-step mode for NTP and PTP and we\nextend our security measures specifically to 2-step mode (PTP) and show that\nthey have no impact on time synchronization's precision.\n", "title": "SecureTime: Secure Multicast Time Synchronization" }
null
null
null
null
true
null
189
null
Default
null
null
null
{ "abstract": " We implement an efficient numerical method to calculate response functions of\ncomplex impurities based on the Density Matrix Renormalization Group (DMRG) and\nuse it as the impurity-solver of the Dynamical Mean Field Theory (DMFT). This\nmethod uses the correction vector to obtain precise Green's functions on the\nreal frequency axis at zero temperature. By using a self-consistent bath\nconfiguration with very low entanglement, we take full advantage of the DMRG to\ncalculate dynamical response functions paving the way to treat large effective\nimpurities such as those corresponding to multi-orbital interacting models and\nmulti-site or multi-momenta clusters. This method leads to reliable\ncalculations of non-local self energies at arbitrary dopings and interactions\nand at any energy scale.\n", "title": "Solving the multi-site and multi-orbital Dynamical Mean Field Theory using Density Matrix Renormalization" }
null
null
null
null
true
null
190
null
Default
null
null
null
{ "abstract": " Bulk and surface electronic structures, calculated using density functional\ntheory and a tight-binding model Hamiltonian, reveal the existence of two\ntopologically invariant (TI) surface states in the family of cubic Bi\nperovskites (ABiO$_3$; A = Na, K, Rb, Cs, Mg, Ca, Sr and Ba). The two TI\nstates, one lying in the valence band (TI-V) and other lying in the conduction\nband (TI-C) are formed out of bonding and antibonding states of the\nBi-$\\{$s,p$\\}$ - O-$\\{$p$\\}$ coordinated covalent interaction. Below a certain\ncritical thickness of the film, which varies with A, TI states of top and\nbottom surfaces couple to destroy the Dirac type linear dispersion and\nconsequently to open surface energy gaps. The origin of s-p band inversion,\nnecessary to form a TI state, classifies the family of ABiO$_3$ into two. For\nclass-I (A = Na, K, Rb, Cs and Mg) the band inversion, leading to TI-C state,\nis induced by spin-orbit coupling of the Bi-p states and for class-II (A = Ca,\nSr and Ba) the band inversion is induced through weak but sensitive second\nneighbor Bi-Bi interactions.\n", "title": "Topologically Invariant Double Dirac States in Bismuth based Perovskites: Consequence of Ambivalent Charge States and Covalent Bonding" }
null
null
null
null
true
null
191
null
Default
null
null
null
{ "abstract": " It is often recommended that identifiers for ontology terms should be\nsemantics-free or meaningless. In practice, ontology developers tend to use\nnumeric identifiers, starting at 1 and working upwards. In this paper we\npresent a critique of current ontology semantics-free identifiers;\nmonotonically increasing numbers have a number of significant usability flaws\nwhich make them unsuitable as a default option, and we present a series of\nalternatives. We have provide an implementation of these alternatives which can\nbe freely combined.\n", "title": "Identitas: A Better Way To Be Meaningless" }
null
null
null
null
true
null
192
null
Default
null
null
null
{ "abstract": " Deep learning methods have achieved high performance in sound recognition\ntasks. Deciding how to feed the training data is important for further\nperformance improvement. We propose a novel learning method for deep sound\nrecognition: Between-Class learning (BC learning). Our strategy is to learn a\ndiscriminative feature space by recognizing the between-class sounds as\nbetween-class sounds. We generate between-class sounds by mixing two sounds\nbelonging to different classes with a random ratio. We then input the mixed\nsound to the model and train the model to output the mixing ratio. The\nadvantages of BC learning are not limited only to the increase in variation of\nthe training data; BC learning leads to an enlargement of Fisher's criterion in\nthe feature space and a regularization of the positional relationship among the\nfeature distributions of the classes. The experimental results show that BC\nlearning improves the performance on various sound recognition networks,\ndatasets, and data augmentation schemes, in which BC learning proves to be\nalways beneficial. Furthermore, we construct a new deep sound recognition\nnetwork (EnvNet-v2) and train it with BC learning. As a result, we achieved a\nperformance surpasses the human level.\n", "title": "Learning from Between-class Examples for Deep Sound Recognition" }
null
null
null
null
true
null
193
null
Default
null
null
null
{ "abstract": " We propose a linear-time, single-pass, top-down algorithm for multiple\ntesting on directed acyclic graphs (DAGs), where nodes represent hypotheses and\nedges specify a partial ordering in which hypotheses must be tested. The\nprocedure is guaranteed to reject a sub-DAG with bounded false discovery rate\n(FDR) while satisfying the logical constraint that a rejected node's parents\nmust also be rejected. It is designed for sequential testing settings, when the\nDAG structure is known a priori, but the $p$-values are obtained selectively\n(such as in a sequence of experiments), but the algorithm is also applicable in\nnon-sequential settings when all $p$-values can be calculated in advance (such\nas variable/model selection). Our DAGGER algorithm, shorthand for Greedily\nEvolving Rejections on DAGs, provably controls the false discovery rate under\nindependence, positive dependence or arbitrary dependence of the $p$-values.\nThe DAGGER procedure specializes to known algorithms in the special cases of\ntrees and line graphs, and simplifies to the classical Benjamini-Hochberg\nprocedure when the DAG has no edges. We explore the empirical performance of\nDAGGER using simulations, as well as a real dataset corresponding to a gene\nontology, showing favorable performance in terms of time and power.\n", "title": "DAGGER: A sequential algorithm for FDR control on DAGs" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
194
null
Validated
null
null
null
{ "abstract": " In this paper, we consider a Hamiltonian system combining a nonlinear Schr\\\"\nodinger equation (NLS) and an ordinary differential equation (ODE). This system\nis a simplified model of the NLS around soliton solutions. Following Nakanishi\n\\cite{NakanishiJMSJ}, we show scattering of $L^2$ small $H^1$ radial solutions.\nThe proof is based on Nakanishi's framework and Fermi Golden Rule estimates on\n$L^4$ in time norms.\n", "title": "On nonlinear profile decompositions and scattering for a NLS-ODE model" }
null
null
null
null
true
null
195
null
Default
null
null
null
{ "abstract": " We relate the concepts used in decentralized ledger technology to studies of\nepisodic memory in the mammalian brain. Specifically, we introduce the standard\nconcepts of linked list, hash functions, and sharding, from computer science.\nWe argue that these concepts may be more relevant to studies of the neural\nmechanisms of memory than has been previously appreciated. In turn, we also\nhighlight that certain phenomena studied in the brain, namely metacognition,\nreality monitoring, and how perceptual conscious experiences come about, may\ninspire development in blockchain technology too, specifically regarding\nprobabilistic consensus protocols.\n", "title": "Blockchain and human episodic memory" }
null
null
[ "Quantitative Biology" ]
null
true
null
196
null
Validated
null
null
null
{ "abstract": " Time-varying network topologies can deeply influence dynamical processes\nmediated by them. Memory effects in the pattern of interactions among\nindividuals are also known to affect how diffusive and spreading phenomena take\nplace. In this paper we analyze the combined effect of these two ingredients on\nepidemic dynamics on networks. We study the susceptible-infected-susceptible\n(SIS) and the susceptible-infected-removed (SIR) models on the recently\nintroduced activity-driven networks with memory. By means of an activity-based\nmean-field approach we derive, in the long time limit, analytical predictions\nfor the epidemic threshold as a function of the parameters describing the\ndistribution of activities and the strength of the memory effects. Our results\nshow that memory reduces the threshold, which is the same for SIS and SIR\ndynamics, therefore favouring epidemic spreading. The theoretical approach\nperfectly agrees with numerical simulations in the long time asymptotic regime.\nStrong aging effects are present in the preasymptotic regime and the epidemic\nthreshold is deeply affected by the starting time of the epidemics. We discuss\nin detail the origin of the model-dependent preasymptotic corrections, whose\nunderstanding could potentially allow for epidemic control on correlated\ntemporal networks.\n", "title": "Epidemic Spreading and Aging in Temporal Networks with Memory" }
null
null
null
null
true
null
197
null
Default
null
null
null
{ "abstract": " A long-standing obstacle to progress in deep learning is the problem of\nvanishing and exploding gradients. Although, the problem has largely been\novercome via carefully constructed initializations and batch normalization,\narchitectures incorporating skip-connections such as highway and resnets\nperform much better than standard feedforward architectures despite well-chosen\ninitialization and batch normalization. In this paper, we identify the\nshattered gradients problem. Specifically, we show that the correlation between\ngradients in standard feedforward networks decays exponentially with depth\nresulting in gradients that resemble white noise whereas, in contrast, the\ngradients in architectures with skip-connections are far more resistant to\nshattering, decaying sublinearly. Detailed empirical evidence is presented in\nsupport of the analysis, on both fully-connected networks and convnets.\nFinally, we present a new \"looks linear\" (LL) initialization that prevents\nshattering, with preliminary experiments showing the new initialization allows\nto train very deep networks without the addition of skip-connections.\n", "title": "The Shattered Gradients Problem: If resnets are the answer, then what is the question?" }
null
null
null
null
true
null
198
null
Default
null
null
null
{ "abstract": " We study the band structure topology and engineering from the interplay\nbetween local moments and itinerant electrons in the context of pyrochlore\niridates. For the metallic iridate Pr$_2$Ir$_2$O$_7$, the Ir $5d$ conduction\nelectrons interact with the Pr $4f$ local moments via the $f$-$d$ exchange.\nWhile the Ir electrons form a Luttinger semimetal, the Pr moments can be tuned\ninto an ordered spin ice with a finite ordering wavevector, dubbed\n\"Melko-Hertog-Gingras\" state, by varying Ir and O contents. We point out that\nthe ordered spin ice of the Pr local moments generates an internal magnetic\nfield that reconstructs the band structure of the Luttinger semimetal. Besides\nthe broad existence of Weyl nodes, we predict that the magnetic translation of\nthe \"Melko-Hertog-Gingras\" state for the Pr moments protects the Dirac band\ntouching at certain time reversal invariant momenta for the Ir conduction\nelectrons. We propose the magnetic fields to control the Pr magnetic structure\nand thereby indirectly influence the topological and other properties of the Ir\nelectrons. Our prediction may be immediately tested in the ordered\nPr$_2$Ir$_2$O$_7$ samples. We expect our work to stimulate a detailed\nexamination of the band structure, magneto-transport, and other properties of\nPr$_2$Ir$_2$O$_7$.\n", "title": "Pr$_2$Ir$_2$O$_7$: when Luttinger semimetal meets Melko-Hertog-Gingras spin ice state" }
null
null
null
null
true
null
199
null
Default
null
null
null
{ "abstract": " Boundary value problems for Sturm-Liouville operators with potentials from\nthe class $W_2^{-1}$ on a star-shaped graph are considered. We assume that the\npotentials are known on all the edges of the graph except two, and show that\nthe potentials on the remaining edges can be constructed by fractional parts of\ntwo spectra. A uniqueness theorem is proved, and an algorithm for the\nconstructive solution of the partial inverse problem is provided. The main\ningredient of the proofs is the Riesz-basis property of specially constructed\nsystems of functions.\n", "title": "A 2-edge partial inverse problem for the Sturm-Liouville operators with singular potentials on a star-shaped graph" }
null
null
[ "Mathematics" ]
null
true
null
200
null
Validated
null
null