Dataset schema (column: type):
text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1–5)
metadata: null
status: string (2 classes)
event_timestamp: null
metrics: null

{ "abstract": " We develop a self-consistent description of the Broad Line Region based on\nthe concept of the failed wind powered by the radiation pressure acting on\ndusty accretion disk atmosphere in Keplerian motion. The material raised high\nabove the disk is illuminated, dust evaporates, and the matter falls back\ntowards the disk. This material is the source of emission lines. The model\npredicts the inner and outer radius of the region, the cloud dynamics under the\ndust radiation pressure and, subsequently, just the gravitational field of the\ncentral black hole, which results in asymmetry between the rise and fall.\nKnowledge of the dynamics allows one to predict the shapes of the emission lines as\nfunctions of the basic parameters of an active nucleus: black hole mass,\naccretion rate, black hole spin (or accretion efficiency) and the viewing angle\nwith respect to the symmetry axis. Here we show preliminary results based on\nanalytical approximations to the cloud motion.\n", "title": "Self-consistent dynamical model of the Broad Line Region" }
id: 1101 | annotation: null | multi_label: true | status: Default

{ "abstract": " When internal states of atoms are manipulated using coherent optical or\nradio-frequency (RF) radiation, it is essential to know the polarization of the\nradiation with respect to the quantization axis of the atom. We first present a\nmeasurement of the two-dimensional spatial distribution of the electric-field\namplitude of a linearly-polarized pulsed RF electric field at $\\sim 25.6\\,$GHz\nand its angle with respect to a static electric field. The measurements exploit\ncoherent population transfer between the $35$s and $35$p Rydberg states of\nhelium atoms in a pulsed supersonic beam. Based on this experimental result, we\ndevelop a general framework in the form of a set of equations relating the five\nindependent polarization parameters of a coherently oscillating field in a\nfixed laboratory frame to Rabi rates of transitions between a ground and three\nexcited states of an atom with arbitrary quantization axis. We then explain how\nthese equations can be used to fully characterize the polarization in a minimum\nof five Rabi rate measurements by rotation of an external bias-field, or,\nknowing the polarization of the driving field, to determine the orientation of\nthe static field using two measurements. The presented technique is not limited\nto Rydberg atoms and RF fields but can also be applied to characterize optical\nfields. The technique has potential for sensing the spatiotemporal properties\nof electromagnetic fields, e.g., in metrology devices or in hybrid experiments\ninvolving atoms close to surfaces.\n", "title": "Measuring the polarization of electromagnetic fields using Rabi-rate measurements with spatial resolution: experiment and theory" }
id: 1102 | annotation: null | multi_label: true | status: Default

{ "abstract": " Modern processors are highly optimized systems where every single cycle of\ncomputation time matters. Many optimizations depend on the data that is being\nprocessed. Software-based microarchitectural attacks exploit effects of these\noptimizations. Microarchitectural side-channel attacks leak secrets from\ncryptographic computations, from general purpose computations, or from the\nkernel. This leakage even persists across all common isolation boundaries, such\nas processes, containers, and virtual machines. Microarchitectural fault\nattacks exploit the physical imperfections of modern computer systems.\nShrinking process technology introduces effects between isolated hardware\nelements that can be exploited by attackers to take control of the entire\nsystem. These attacks are especially interesting in scenarios where the\nattacker is unprivileged or even sandboxed.\nIn this thesis, we focus on microarchitectural attacks and defenses on\ncommodity systems. We investigate known and new side channels and show that\nmicroarchitectural attacks can be fully automated. Furthermore, we show that\nthese attacks can be mounted in highly restricted environments such as\nsandboxed JavaScript code in websites. We show that microarchitectural attacks\nexist on any modern computer system, including mobile devices (e.g.,\nsmartphones), personal computers, and commercial cloud systems. This thesis\nconsists of two parts. In the first part, we provide background on modern\nprocessor architectures and discuss state-of-the-art attacks and defenses in\nthe area of microarchitectural side-channel attacks and microarchitectural\nfault attacks. In the second part, a selection of our papers are provided\nwithout modification from their original publications. I have co-authored these\npapers, which have subsequently been anonymously peer-reviewed, accepted, and\npresented at renowned international conferences.\n", "title": "Software-based Microarchitectural Attacks" }
id: 1103 | annotation: null | multi_label: true | status: Default

{ "abstract": " Semantic segmentation and object detection research have recently achieved\nrapid progress. However, the former task has no notion of different instances\nof the same object, and the latter operates at a coarse, bounding-box level. We\npropose an Instance Segmentation system that produces a segmentation map where\neach pixel is assigned an object class and instance identity label. Most\napproaches adapt object detectors to produce segments instead of boxes. In\ncontrast, our method is based on an initial semantic segmentation module, which\nfeeds into an instance subnetwork. This subnetwork uses the initial\ncategory-level segmentation, along with cues from the output of an object\ndetector, within an end-to-end CRF to predict instances. This part of our model\nis dynamically instantiated to produce a variable number of instances per\nimage. Our end-to-end approach requires no post-processing and considers the\nimage holistically, instead of processing independent proposals. Therefore,\nunlike some related work, a pixel cannot belong to multiple instances.\nFurthermore, far more precise segmentations are achieved, as shown by our\nstate-of-the-art results (particularly at high IoU thresholds) on the Pascal\nVOC and Cityscapes datasets.\n", "title": "Pixelwise Instance Segmentation with a Dynamically Instantiated Network" }
id: 1104 | annotation: null | multi_label: true | status: Default

{ "abstract": " Matrix factorization is a key tool in data analysis; its applications include\nrecommender systems, correlation analysis, signal processing, among others.\nBinary matrices are a particular case which has received significant attention\nfor over thirty years, especially within the field of data mining. Dictionary\nlearning refers to a family of methods for learning overcomplete basis (also\ncalled frames) in order to efficiently encode samples of a given type; this\narea, now also about twenty years old, was mostly developed within the signal\nprocessing field. In this work we propose two binary matrix factorization\nmethods based on a binary adaptation of the dictionary learning paradigm to\nbinary matrices. The proposed algorithms focus on speed and scalability; they\nwork with binary factors combined with bit-wise operations and a few auxiliary\ninteger ones. Furthermore, the methods are readily applicable to online binary\nmatrix factorization. Another important issue in matrix factorization is the\nchoice of rank for the factors; we address this model selection problem with an\nefficient method based on the Minimum Description Length principle. Our\npreliminary results show that the proposed methods are effective at producing\ninterpretable factorizations of various data types of different nature.\n", "title": "Binary Matrix Factorization via Dictionary Learning" }
id: 1105 | annotation: null | multi_label: true | status: Default

{ "abstract": " Mild Cognitive Impairment (MCI) is a mental disorder difficult to diagnose.\nLinguistic features, mainly from parsers, have been used to detect MCI, but\nthis is not suitable for large-scale assessments. MCI disfluencies produce\nnon-grammatical speech that requires manual or high precision automatic\ncorrection of transcripts. In this paper, we modeled transcripts into complex\nnetworks and enriched them with word embedding (CNE) to better represent short\ntexts produced in neuropsychological assessments. The network measurements were\napplied with well-known classifiers to automatically identify MCI in\ntranscripts, in a binary classification task. A comparison was made with the\nperformance of traditional approaches using Bag of Words (BoW) and linguistic\nfeatures for three datasets: DementiaBank in English, and Cinderella and\nArizona-Battery in Portuguese. Overall, CNE provided higher accuracy than using\nonly complex networks, while Support Vector Machine was superior to other\nclassifiers. CNE provided the highest accuracies for DementiaBank and\nCinderella, but BoW was more efficient for the Arizona-Battery dataset probably\nowing to its short narratives. The approach using linguistic features yielded\nhigher accuracy if the transcriptions of the Cinderella dataset were manually\nrevised. Taken together, the results indicate that complex networks enriched\nwith embedding is promising for detecting MCI in large-scale assessments\n", "title": "Enriching Complex Networks with Word Embeddings for Detecting Mild Cognitive Impairment from Speech Transcripts" }
id: 1106 | annotation: ["Computer Science"] | multi_label: true | status: Validated

{ "abstract": " Slater's condition -- existence of a \"strictly feasible solution\" -- is a\ncommon assumption in conic optimization. Without strict feasibility,\nfirst-order optimality conditions may be meaningless, the dual problem may\nyield little information about the primal, and small changes in the data may\nrender the problem infeasible. Hence, failure of strict feasibility can\nnegatively impact off-the-shelf numerical methods, such as primal-dual interior\npoint methods, in particular. New optimization modelling techniques and convex\nrelaxations for hard nonconvex problems have shown that the loss of strict\nfeasibility is a more pronounced phenomenon than has previously been realized.\nIn this text, we describe various reasons for the loss of strict feasibility,\nwhether due to poor modelling choices or (more interestingly) rich underlying\nstructure, and discuss ways to cope with it and, in many pronounced cases, how\nto use it as an advantage. In large part, we emphasize the facial reduction\npreprocessing technique due to its mathematical elegance, geometric\ntransparency, and computational potential.\n", "title": "The many faces of degeneracy in conic optimization" }
id: 1107 | annotation: null | multi_label: true | status: Default

{ "abstract": " We extend the homotopy theories based on point reduction for finite spaces\nand simplicial complexes to finite acyclic categories and $\\Delta$-complexes,\nrespectively. The functors of classifying spaces and face posets are compatible\nwith these homotopy theories. In contrast with the classical settings of finite\nspaces and simplicial complexes, the universality of morphisms and simplices\nplays a central role in this paper.\n", "title": "Strong homotopy types of acyclic categories and $Δ$-complexes" }
id: 1108 | annotation: null | multi_label: true | status: Default

{ "abstract": " For characterizing the Brownian motion in a bounded domain: $\\Omega$, it is\nwell-known that the boundary conditions of the classical diffusion equation\njust rely on the given information of the solution along the boundary of a\ndomain; on the contrary, for the Lévy flights or tempered Lévy flights in a\nbounded domain, it involves the information of a solution in the complementary\nset of $\\Omega$, i.e., $\\mathbb{R}^n\\backslash \\Omega$, with the potential\nreason that paths of the corresponding stochastic process are discontinuous.\nGuided by probability intuitions and the stochastic perspectives of anomalous\ndiffusion, we show the reasonable ways, ensuring the clear physical meaning and\nwell-posedness of the partial differential equations (PDEs), of specifying\n`boundary' conditions for space fractional PDEs modeling the anomalous\ndiffusion. Some properties of the operators are discussed, and the\nwell-posednesses of the PDEs with generalized boundary conditions are proved.\n", "title": "Boundary problems for the fractional and tempered fractional operators" }
id: 1109 | annotation: null | multi_label: true | status: Default

{ "abstract": " Since its inception Bohmian mechanics has been generally regarded as a\nhidden-variable theory aimed at providing an objective description of quantum\nphenomena. To date, this rather narrow conception of Bohm's proposal has caused\nit more rejection than acceptance. Now, after 65 years of Bohmian mechanics,\nshould such an interpretational aspect still be the prevailing appraisal? Why\nnot favor a more pragmatic view, as a legitimate picture of quantum\nmechanics, on equal footing in all respects with any other more conventional\nquantum picture? These questions are used here to introduce a discussion on an\nalternative way to deal with Bohmian mechanics at present, enhancing its aspect\nas an efficient and useful picture or formulation to tackle, explore, describe\nand explain quantum phenomena where phase and correlation (entanglement) are\nkey elements. This discussion is presented through two complementary blocks.\nThe first block is aimed at briefly revisiting the historical context that gave\nrise to the appearance of Bohmian mechanics, and how this approach or analogous\nones have been used in different physical contexts. This discussion is used to\nemphasize a more pragmatic view to the detriment of the more conventional\nhidden-variable (ontological) approach that has been a leitmotif within the\nquantum foundations. The second block focuses on some particular formal aspects\nof Bohmian mechanics supporting the view presented here, with special emphasis\non the physical meaning of the local phase field and the associated velocity\nfield encoded within the wave function. As an illustration, a simple model of\nYoung's two-slit experiment is considered. The simplicity of this model allows one\nto understand in an easy manner how the information conveyed by the Bohmian\nformulation relates to other more conventional concepts in quantum mechanics.\nThis sort of pedagogical application is also aimed at ...\n", "title": "Bohm's approach to quantum mechanics: Alternative theory or practical picture?" }
id: 1110 | annotation: null | multi_label: true | status: Default

{ "abstract": " While conventional lasers are based on gain media with three or four real\nlevels, unconventional lasers including virtual levels and two-photon processes\noffer new opportunities. We study lasing that involves a two-photon process\nthrough a virtual lower level, which we realize in a cloud of cold ytterbium\natoms that are magneto-optically trapped inside a cavity. We pump the atoms on\nthe narrow $^1$S$_0$ $\\to$ $^3$P$_1$ line and generate laser emission on the\nsame transition. Lasing is verified by a threshold behavior of output power\nvs.\\ pump power and atom number, a flat $g^{(2)}$ correlation function above\nthreshold, and the polarization properties of the output. In the proposed\nlasing mechanism the MOT beams create the virtual lower level of the lasing\ntransition. The laser process runs continuously, needs no further repumping,\nand might be adapted to other atoms or transitions such as the ultra narrow\n$^1$S$_0$ $\\to$ $^3$P$_0$ clock transition in ytterbium.\n", "title": "Continuous-wave virtual-state lasing from cold ytterbium atoms" }
id: 1111 | annotation: null | multi_label: true | status: Default

{ "abstract": " The evaluation of possible climate change consequences on extreme rainfall has\nsignificant implications for the design of engineering structures and\nsocioeconomic resources development. While many studies have assessed the\nimpact of climate change on design rainfall using global and regional climate\nmodel (RCM) predictions, to date, there has been no comprehensive comparison or\nevaluation of intensity-duration-frequency (IDF) statistics at regional scale,\nconsidering both stationary and nonstationary models for the future climate.\nTo understand how extreme precipitation may respond to future IDF curves, we\nused an ensemble of three RCMs participating in the North-American (NA)-CORDEX\ndomain over eight rainfall stations across Southern Ontario, one of the most\ndensely populated and major economic regions in Canada. The IDF relationships\nare derived from multi-model RCM simulations and compared with the\nstation-based observations. We modeled precipitation extremes at different\ndurations using extreme value distributions considering parameters that are\neither stationary or nonstationary, as a linear function of time. Our results\nshowed that extreme precipitation intensity driven by future climate forcing\nshows a significant increase in intensity for 10-year events in the 2050s\n(2030-2070) relative to the 1970-2010 baseline period across most of the locations.\nHowever, for longer return periods, an opposite trend is noted. Surprisingly,\nin terms of design storms, no significant differences were found when comparing\nstationary and nonstationary IDF estimation methods for the future (2050s) for\nthe larger return periods. The findings, which are specific to regional\nprecipitation extremes, suggest no immediate reason for alarm, but the need for\nprogressive updating of the design standards in light of global warming.\n", "title": "Assessment of Future Changes in Intensity-Duration-Frequency Curves for Southern Ontario using North American (NA)-CORDEX Models with Nonstationary Methods" }
id: 1112 | annotation: null | multi_label: true | status: Default

{ "abstract": " We study the problem of finding the cycle of minimum cost-to-time ratio in a\ndirected graph with $ n $ nodes and $ m $ edges. This problem has a long\nhistory in combinatorial optimization and has recently seen interesting\napplications in the context of quantitative verification. We focus on strongly\npolynomial algorithms to cover the use-case where the weights are relatively\nlarge compared to the size of the graph. Our main result is an algorithm with\nrunning time $ \\tilde O (m^{3/4} n^{3/2}) $, which gives the first improvement\nover Megiddo's $ \\tilde O (n^3) $ algorithm [JACM'83] for sparse graphs. We\nfurther demonstrate how to obtain both an algorithm with running time $ n^3 /\n2^{\\Omega{(\\sqrt{\\log n})}} $ on general graphs and an algorithm with running\ntime $ \\tilde O (n) $ on constant treewidth graphs. To obtain our main result,\nwe develop a parallel algorithm for negative cycle detection and single-source\nshortest paths that might be of independent interest.\n", "title": "Improved Algorithms for Computing the Cycle of Minimum Cost-to-Time Ratio in Directed Graphs" }
id: 1113 | annotation: null | multi_label: true | status: Default

{ "abstract": " Polarized extinction and emission from dust in the interstellar medium (ISM)\nare hard to interpret, as they have a complex dependence on dust optical\nproperties, grain alignment and magnetic field orientation. This is\nparticularly true in molecular clouds. The data available today are not yet\nused to their full potential.\nThe combination of emission and extinction, in particular, provides\ninformation not available from either of them alone. We combine data from the\nscientific literature on polarized dust extinction with Planck data on\npolarized emission and we use them to constrain the possible variations in dust\nand environmental conditions inside molecular clouds, and especially\ntranslucent lines of sight, taking into account magnetic field orientation.\nWe focus on the dependence between \\lambda_max -- the wavelength of maximum\npolarization in extinction -- and other observables such as the extinction\npolarization, the emission polarization and the ratio of the two. We set out to\nreproduce these correlations using Monte-Carlo simulations where the relevant\nquantities in a dust model -- grain alignment, size distribution and magnetic\nfield orientation -- vary to mimic the diverse conditions expected inside\nmolecular clouds.\nNone of the quantities chosen can explain the observational data on its own:\nthe best results are obtained when all quantities vary significantly across and\nwithin clouds. However, some of the data -- most notably the stars with low\nemission-to-extinction polarization ratio -- are not reproduced by our\nsimulation. Our results suggest not only that dust evolution is necessary to\nexplain polarization in molecular clouds, but that a simple change in size\ndistribution is not sufficient to explain the data, and point the way for\nfuture and more sophisticated models.\n", "title": "Interplay of dust alignment, grain growth and magnetic fields in polarization: lessons from the emission-to-extinction ratio" }
id: 1114 | annotation: ["Physics"] | multi_label: true | status: Validated

{ "abstract": " Gaussian processes (GPs) are powerful non-parametric function estimators.\nHowever, their applications are largely limited by the expensive computational\ncost of the inference procedures. Existing stochastic or distributed\nsynchronous variational inferences, although have alleviated this issue by\nscaling up GPs to millions of samples, are still far from satisfactory for\nreal-world large applications, where the data sizes are often orders of\nmagnitudes larger, say, billions. To solve this problem, we propose ADVGP, the\nfirst Asynchronous Distributed Variational Gaussian Process inference for\nregression, on the recent large-scale machine learning platform,\nPARAMETERSERVER. ADVGP uses a novel, flexible variational framework based on a\nweight space augmentation, and implements the highly efficient, asynchronous\nproximal gradient optimization. While maintaining comparable or better\npredictive performance, ADVGP greatly improves upon the efficiency of the\nexisting variational methods. With ADVGP, we effortlessly scale up GP\nregression to a real-world application with billions of samples and demonstrate\nan excellent, superior prediction accuracy to the popular linear models.\n", "title": "Asynchronous Distributed Variational Gaussian Processes for Regression" }
id: 1115 | annotation: null | multi_label: true | status: Default

{ "abstract": " Bandit based optimisation has a remarkable advantage over gradient based\napproaches due to their global perspective, which eliminates the danger of\ngetting stuck at local optima. However, for continuous optimisation problems or\nproblems with a large number of actions, bandit based approaches can be\nhindered by slow learning. Gradient based approaches, on the other hand,\nnavigate quickly in high-dimensional continuous spaces through local\noptimisation, following the gradient in fine grained steps. Yet, apart from\nbeing susceptible to local optima, these schemes are less suited for online\nlearning due to their reliance on extensive trial-and-error before the optimum\ncan be identified. In this paper, we propose a Bayesian approach that unifies\nthe above two paradigms in one single framework, with the aim of combining\ntheir advantages. At the heart of our approach we find a stochastic linear\napproximation of the function to be optimised, where both the gradient and\nvalues of the function are explicitly captured. This allows us to learn from\nboth noisy function and gradient observations, and predict these properties\nacross the action space to support optimisation. We further propose an\naccompanying bandit driven exploration scheme that uses Bayesian credible\nbounds to trade off exploration against exploitation. Our empirical results\ndemonstrate that by unifying bandit and gradient based learning, one obtains\nconsistently improved performance across a wide spectrum of problem\nenvironments. Furthermore, even when gradient feedback is unavailable, the\nflexibility of our model, including gradient prediction, still allows us to\noutperform competing approaches, although with a smaller margin. Due to the\npervasiveness of bandit based optimisation, our scheme opens up for improved\nperformance both in meta-optimisation and in applications where gradient\nrelated information is readily available.\n", "title": "Bayesian Unification of Gradient and Bandit-based Learning for Accelerated Global Optimisation" }
id: 1116 | annotation: null | multi_label: true | status: Default

{ "abstract": " Intel Software Guard Extension (SGX) offers software applications enclaves to\nprotect their confidentiality and integrity from malicious operating systems.\nThe SSL/TLS protocol, which is the de facto standard for protecting\ntransport-layer network communications, has been broadly deployed for a secure\ncommunication channel. However, in this paper, we show that the marriage\nbetween SGX and SSL may not be smooth sailing.\nParticularly, we consider a category of side-channel attacks against SSL/TLS\nimplementations in secure enclaves, which we call the control-flow inference\nattacks. In these attacks, the malicious operating system kernel may perform a\npowerful man-in-the-kernel attack to collect execution traces of the enclave\nprograms at page, cacheline, or branch level, while positioning itself in the\nmiddle of the two communicating parties. At the center of our work is a\ndifferential analysis framework, dubbed Stacco, to dynamically analyze the\nSSL/TLS implementations and detect vulnerabilities that can be exploited as\ndecryption oracles. Surprisingly, we found exploitable vulnerabilities in the\nlatest versions of all the SSL/TLS libraries we have examined.\nTo validate the detected vulnerabilities, we developed a man-in-the-kernel\nadversary to demonstrate Bleichenbacher attacks against the latest OpenSSL\nlibrary running in the SGX enclave (with the help of Graphene) and completely\nbroke the PreMasterSecret encrypted by a 4096-bit RSA public key with only\n57286 queries. We also conducted CBC padding oracle attacks against the latest\nGnuTLS running in Graphene-SGX and an open-source SGX-implementation of mbedTLS\n(i.e., mbedTLS-SGX) that runs directly inside the enclave, and showed that it\nonly needs 48388 and 25717 queries, respectively, to break one block of AES\nciphertext. Empirical evaluation suggests these man-in-the-kernel attacks can\nbe completed within 1 or 2 hours.\n", "title": "Stacco: Differentially Analyzing Side-Channel Traces for Detecting SSL/TLS Vulnerabilities in Secure Enclaves" }
id: 1117 | annotation: null | multi_label: true | status: Default

{ "abstract": " Sensor setups consisting of a combination of 3D range scanner lasers and\nstereo vision systems are becoming a popular choice for on-board perception\nsystems in vehicles; however, the combined use of both sources of information\nimplies a tedious calibration process. We present a method for extrinsic\ncalibration of lidar-stereo camera pairs without user intervention. Our\ncalibration approach is aimed to cope with the constraints commonly found in\nautomotive setups, such as low-resolution and specific sensor poses. To\ndemonstrate the performance of our method, we also introduce a novel approach\nfor the quantitative assessment of the calibration results, based on a\nsimulation environment. Tests using real devices have been conducted as well,\nproving the usability of the system and the improvement over the existing\napproaches. Code is available at this http URL\n", "title": "Automatic Extrinsic Calibration for Lidar-Stereo Vehicle Sensor Setups" }
id: 1118 | annotation: null | multi_label: true | status: Default

{ "abstract": " In this paper, we study the representation theory of the super $W(2,2)$ algebra\n${\\mathfrak{L}}$. We prove that ${\\mathfrak{L}}$ has no mixed irreducible\nmodules and give the classification of irreducible modules of intermediate\nseries. We determine the conjugate-linear anti-involution of ${\\mathfrak{L}}$\nand give the unitary modules of intermediate series.\n", "title": "Representations of Super $W(2,2)$ algebra $\\mathfrak{L}$" }
id: 1119 | annotation: ["Mathematics"] | multi_label: true | status: Validated

{ "abstract": " Software developers frequently issue generic natural language queries for\ncode search while using code search engines (e.g., GitHub native search,\nKrugle). Such queries often do not lead to any relevant results due to\nvocabulary mismatch problems. In this paper, we propose a novel technique that\nautomatically identifies relevant and specific API classes from Stack Overflow\nQ & A site for a programming task written as a natural language query, and then\nreformulates the query for improved code search. We first collect candidate API\nclasses from Stack Overflow using pseudo-relevance feedback and two term\nweighting algorithms, and then rank the candidates using Borda count and\nsemantic proximity between query keywords and the API classes. The semantic\nproximity has been determined by an analysis of 1.3 million questions and\nanswers of Stack Overflow. Experiments using 310 code search queries report\nthat our technique suggests relevant API classes with 48% precision and 58%\nrecall which are 32% and 48% higher respectively than those of the\nstate-of-the-art. Comparisons with two state-of-the-art studies and three\npopular search engines (e.g., Google, Stack Overflow, and GitHub native search)\nreport that our reformulated queries (1) outperform the queries of the\nstate-of-the-art, and (2) significantly improve the code search results\nprovided by these contemporary search engines.\n", "title": "Effective Reformulation of Query for Code Search using Crowdsourced Knowledge and Extra-Large Data Analytics" }
id: 1120 | annotation: null | multi_label: true | status: Default

{ "abstract": " The Ethereum blockchain network is a decentralized platform enabling smart\ncontract execution and transactions of Ether (ETH) [1], its designated\ncryptocurrency. Ethereum is the second most popular cryptocurrency with a\nmarket cap of more than 100 billion USD, with hundreds of thousands of\ntransactions executed daily by hundreds of thousands of unique wallets. Tens of\nthousands of those wallets are newly generated each day. The Ethereum platform\nenables anyone to freely open multiple new wallets [2] free of charge\n(resulting in a large number of wallets that are controlled by the same\nentities). This attribute makes the Ethereum network a breeding space for\nactivity by software robots (bots). The existence of bots is widespread in\ndifferent digital technologies and there are various approaches to detect their\nactivity such as rule-base, clustering, machine learning and more [3,4]. In\nthis work we demonstrate how bot detection can be implemented using a network\ntheory approach.\n", "title": "Detecting Bot Activity in the Ethereum Blockchain Network" }
id: 1121 | annotation: null | multi_label: true | status: Default

{ "abstract": " Conjunctivochalasis is a common cause of tear dysfunction due to the\nconjunctiva becoming loose and wrinkly with age. The current solutions to this\ndisease include either surgical excision in the operating room, or\nthermoreduction of the loose tissue with hot wire in the clinic. We developed a\nnear-infrared (NIR) laser thermal conjunctivoplasty (LTC) system, which gently\nshrinks the redundant tissue. The NIR light is mainly absorbed by water, so the\nheating is even and there is no bleeding. The system utilizes a 1460-nm\nprogrammable laser diode system as a light source. A miniaturized handheld\nprobe delivers the laser light and focuses the laser into a 10x1 mm2 line. A\nfoot pedal is used to deliver a preset number of calibrated laser pulses. A\nfold of loose conjunctiva is grasped by a pair of forceps. The infrared laser\nlight is delivered through an optical fiber and a laser line is focused exactly\non the conjunctival fold by a cylindrical lens. Ex vivo experiments using\nporcine eye were performed with the optimal laser parameters. It was found that\nup to 50% of conjunctiva shrinkage could be achieved.\n", "title": "Near-infrared laser thermal conjunctivoplasty" }
null
null
null
null
true
null
1122
null
Default
null
null
null
{ "abstract": " Here we report the preparation and superconductivity of the 133-type Cr-based\nquasi-one-dimensional (Q1D) RbCr3As3 single crystals. The samples were prepared\nby the deintercalation of Rb+ ions from the 233-type Rb2Cr3As3 crystals which\nwere grown from a high-temperature solution growth method. The RbCr3As3\ncompound crystallizes in a centrosymmetric structure with the space group of\nP63/m (No. 176) different with its non-centrosymmetric Rb2Cr3As3\nsuperconducting precursor, and the refined lattice parameters are a = 9.373(3)\n{\\AA} and c = 4.203(7) {\\AA}. Electrical resistivity and magnetic\nsusceptibility characterizations reveal the occurrence of superconductivity\nwith an interestingly higher onset Tc of 7.3 K than other Cr-based\nsuperconductors, and a high upper critical field Hc2(0) near 70 T in this\n133-type RbCr3As3 crystals.\n", "title": "Superconductivity at 7.3 K in the 133-type Cr-based RbCr3As3 single crystals" }
null
null
[ "Physics" ]
null
true
null
1123
null
Validated
null
null
null
{ "abstract": " A numerical method for free boundary problems for the equation \\[\nu_{xx}-q(x)u=u_t \\] is proposed. The method is based on recent results from\ntransmutation operators theory allowing one to construct efficiently a complete\nsystem of solutions for this equation generalizing the system of heat\npolynomials. The corresponding implementation algorithm is presented.\n", "title": "Solution of parabolic free boundary problems using transmuted heat polynomials" }
null
null
null
null
true
null
1124
null
Default
null
null
null
{ "abstract": " Neural-Network Quantum States have been recently introduced as an Ansatz for\ndescribing the wave function of quantum many-body systems. We show that there\nare strong connections between Neural-Network Quantum States in the form of\nRestricted Boltzmann Machines and some classes of Tensor-Network states in\narbitrary dimensions. In particular we demonstrate that short-range Restricted\nBoltzmann Machines are Entangled Plaquette States, while fully connected\nRestricted Boltzmann Machines are String-Bond States with a nonlocal geometry\nand low bond dimension. These results shed light on the underlying architecture\nof Restricted Boltzmann Machines and their efficiency at representing many-body\nquantum states. String-Bond States also provide a generic way of enhancing the\npower of Neural-Network Quantum States and a natural generalization to systems\nwith larger local Hilbert space. We compare the advantages and drawbacks of\nthese different classes of states and present a method to combine them\ntogether. This allows us to benefit from both the entanglement structure of\nTensor Networks and the efficiency of Neural-Network Quantum States into a\nsingle Ansatz capable of targeting the wave function of strongly correlated\nsystems. While it remains a challenge to describe states with chiral\ntopological order using traditional Tensor Networks, we show that\nNeural-Network Quantum States and their String-Bond States extension can\ndescribe a lattice Fractional Quantum Hall state exactly. In addition, we\nprovide numerical evidence that Neural-Network Quantum States can approximate a\nchiral spin liquid with better accuracy than Entangled Plaquette States and\nlocal String-Bond States. 
Our results demonstrate the efficiency of neural\nnetworks to describe complex quantum wave functions and pave the way towards\nthe use of String-Bond States as a tool in more traditional machine-learning\napplications.\n", "title": "Neural-Network Quantum States, String-Bond States, and Chiral Topological States" }
null
null
null
null
true
null
1125
null
Default
null
null
null
{ "abstract": " We reported the usage of grating-based X-ray phase-contrast imaging in\nnondestructive testing of grating imperfections. It was found that\nelectroplating flaws could be easily detected by conventional absorption\nsignal, and in particular, we observed that the grating defects resulting from\nuneven ultraviolet exposure could be clearly discriminated with phase-contrast\nsignal. The experimental results demonstrate that grating-based X-ray\nphase-contrast imaging, with a conventional low-brilliance X-ray source, a\nlarge field of view and a reasonable compact setup, which simultaneously yields\nphase- and attenuation-contrast signal of the sample, can be ready-to-use in\nfast nondestructive testing of various imperfections in gratings and other\nsimilar photoetching products.\n", "title": "Nondestructive testing of grating imperfections using grating-based X-ray phase-contrast imaging" }
null
null
null
null
true
null
1126
null
Default
null
null
null
{ "abstract": " Most state-of-the-art information extraction approaches rely on token-level\nlabels to find the areas of interest in text. Unfortunately, these labels are\ntime-consuming and costly to create, and consequently, not available for many\nreal-life IE tasks. To make matters worse, token-level labels are usually not\nthe desired output, but just an intermediary step. End-to-end (E2E) models,\nwhich take raw text as input and produce the desired output directly, need not\ndepend on token-level labels. We propose an E2E model based on pointer\nnetworks, which can be trained directly on pairs of raw input and output text.\nWe evaluate our model on the ATIS data set, MIT restaurant corpus and the MIT\nmovie corpus and compare to neural baselines that do use token-level labels. We\nachieve competitive results, within a few percentage points of the baselines,\nshowing the feasibility of E2E information extraction without the need for\ntoken-level labels. This opens up new possibilities, as for many tasks\ncurrently addressed by human extractors, raw input and output data are\navailable, but not token-level labels.\n", "title": "End-to-End Information Extraction without Token-Level Supervision" }
null
null
null
null
true
null
1127
null
Default
null
null
null
{ "abstract": " We prove Lipschitz continuity of viscosity solutions to a class of two-phase\nfree boundary problems governed by fully nonlinear operators.\n", "title": "Lipschitz regularity of solutions to two-phase free boundary problems" }
null
null
null
null
true
null
1128
null
Default
null
null
null
{ "abstract": " Hardware acceleration is an enabler for ubiquitous and efficient deep\nlearning. With hardware accelerators being introduced in datacenter and edge\ndevices, it is time to acknowledge that hardware specialization is central to\nthe deep learning system stack.\nThis technical report presents the Versatile Tensor Accelerator (VTA), an\nopen, generic, and customizable deep learning accelerator design. VTA is a\nprogrammable accelerator that exposes a RISC-like programming abstraction to\ndescribe operations at the tensor level. We designed VTA to expose the most\nsalient and common characteristics of mainstream deep learning accelerators,\nsuch as tensor operations, DMA load/stores, and explicit compute/memory\narbitration.\nVTA is more than a standalone accelerator design: it's an end-to-end solution\nthat includes drivers, a JIT runtime, and an optimizing compiler stack based on\nTVM. The current release of VTA includes a behavioral hardware simulator, as\nwell as the infrastructure to deploy VTA on low-cost FPGA development boards\nfor fast prototyping.\nBy extending the TVM stack with a customizable, and open source deep learning\nhardware accelerator design, we are exposing a transparent end-to-end deep\nlearning stack from the high-level deep learning framework, down to the actual\nhardware design and implementation. This forms a truly end-to-end, from\nsoftware-to-hardware open source stack for deep learning systems.\n", "title": "VTA: An Open Hardware-Software Stack for Deep Learning" }
null
null
null
null
true
null
1129
null
Default
null
null
null
{ "abstract": " We propose a new localized inference algorithm for answering marginalization\nqueries in large graphical models with the correlation decay property. Given a\nquery variable and a large graphical model, we define a much smaller model in a\nlocal region around the query variable in the target model so that the marginal\ndistribution of the query variable can be accurately approximated. We introduce\ntwo approximation error bounds based on the Dobrushin's comparison theorem and\napply our bounds to derive a greedy expansion algorithm that efficiently guides\nthe selection of neighbor nodes for localized inference. We verify our\ntheoretical bounds on various datasets and demonstrate that our localized\ninference algorithm can provide fast and accurate approximation for large\ngraphical models.\n", "title": "Efficient Localized Inference for Large Graphical Models" }
null
null
null
null
true
null
1130
null
Default
null
null
null
{ "abstract": " We study simultaneous collisions of two, three, and four kinks and antikinks\nof the $\\phi^6$ model at the same spatial point. Unlike the $\\phi^4$ kinks, the\n$\\phi^6$ kinks are asymmetric and this enriches the variety of the collision\nscenarios. In our numerical simulations we observe both reflection and bound\nstate formation depending on the number of kinks and on their spatial ordering\nin the initial configuration. We also analyze the extreme values of the energy\ndensities and the field gradient observed during the collisions. Our results\nsuggest that very high energy densities can be produced in multi-kink\ncollisions in a controllable manner. Appearance of high energy density spots in\nmulti-kink collisions can be important in various physical applications of the\nKlein-Gordon model.\n", "title": "Multi-kink collisions in the $ϕ^6$ model" }
null
null
null
null
true
null
1131
null
Default
null
null
null
{ "abstract": " We introduce InverseFaceNet, a deep convolutional inverse rendering framework\nfor faces that jointly estimates facial pose, shape, expression, reflectance\nand illumination from a single input image. By estimating all parameters from\njust a single image, advanced editing possibilities on a single face image,\nsuch as appearance editing and relighting, become feasible in real time. Most\nprevious learning-based face reconstruction approaches do not jointly recover\nall dimensions, or are severely limited in terms of visual quality. In\ncontrast, we propose to recover high-quality facial pose, shape, expression,\nreflectance and illumination using a deep neural network that is trained using\na large, synthetically created training corpus. Our approach builds on a novel\nloss function that measures model-space similarity directly in parameter space\nand significantly improves reconstruction accuracy. We further propose a\nself-supervised bootstrapping process in the network training loop, which\niteratively updates the synthetic training corpus to better reflect the\ndistribution of real-world imagery. We demonstrate that this strategy\noutperforms completely synthetically trained networks. Finally, we show\nhigh-quality reconstructions and compare our approach to several\nstate-of-the-art approaches.\n", "title": "InverseFaceNet: Deep Monocular Inverse Face Rendering" }
null
null
null
null
true
null
1132
null
Default
null
null
null
{ "abstract": " The Partial Information Decomposition (PID) [arXiv:1004.2515] provides a\ntheoretical framework to characterize and quantify the structure of\nmultivariate information sharing. A new method (Idep) has recently been\nproposed for computing a two-predictor PID over discrete spaces.\n[arXiv:1709.06653] A lattice of maximum entropy probability models is\nconstructed based on marginal dependency constraints, and the unique\ninformation that a particular predictor has about the target is defined as the\nminimum increase in joint predictor-target mutual information when that\nparticular predictor-target marginal dependency is constrained. Here, we apply\nthe Idep approach to Gaussian systems, for which the marginally constrained\nmaximum entropy models are Gaussian graphical models. Closed form solutions for\nthe Idep PID are derived for both univariate and multivariate Gaussian systems.\nNumerical and graphical illustrations are provided, together with practical and\ntheoretical comparisons of the Idep PID with the minimum mutual information PID\n(Immi). [arXiv:1411.2832] In particular, it is proved that the Immi method\ngenerally produces larger estimates of redundancy and synergy than does the\nIdep method. In discussion of the practical examples, the PIDs are complemented\nby the use of deviance tests for the comparison of Gaussian graphical models.\n", "title": "Exact partial information decompositions for Gaussian systems based on dependency constraints" }
null
null
null
null
true
null
1133
null
Default
null
null
null
{ "abstract": " We present convolutional neural network (CNN) based approaches for\nunsupervised multimodal subspace clustering. The proposed framework consists of\nthree main stages - multimodal encoder, self-expressive layer, and multimodal\ndecoder. The encoder takes multimodal data as input and fuses them to a latent\nspace representation. The self-expressive layer is responsible for enforcing\nthe self-expressiveness property and acquiring an affinity matrix corresponding\nto the data points. The decoder reconstructs the original input data. The\nnetwork uses the distance between the decoder's reconstruction and the original\ninput in its training. We investigate early, late and intermediate fusion\ntechniques and propose three different encoders corresponding to them for\nspatial fusion. The self-expressive layers and multimodal decoders are\nessentially the same for different spatial fusion-based approaches. In addition\nto various spatial fusion-based methods, an affinity fusion-based network is\nalso proposed in which the self-expressive layer corresponding to different\nmodalities is enforced to be the same. Extensive experiments on three datasets\nshow that the proposed methods significantly outperform the state-of-the-art\nmultimodal subspace clustering methods.\n", "title": "Deep Multimodal Subspace Clustering Networks" }
null
null
null
null
true
null
1134
null
Default
null
null
null
{ "abstract": " For fluctuating currents in non-equilibrium steady states, the recently\ndiscovered thermodynamic uncertainty relation expresses a fundamental relation\nbetween their variance and the overall entropic cost associated with the\ndriving. We show that this relation holds not only for the long-time limit of\nfluctuations, as described by large deviation theory, but also for fluctuations\non arbitrary finite time scales. This generalization facilitates applying the\nthermodynamic uncertainty relation to single molecule experiments, for which\ninfinite timescales are not accessible. Importantly, often this finite-time\nvariant of the relation allows inferring a bound on the entropy production that\nis even stronger than the one obtained from the long-time limit. We illustrate\nthe relation for the fluctuating work that is performed by a stochastically\nswitching laser tweezer on a trapped colloidal particle.\n", "title": "Finite-time generalization of the thermodynamic uncertainty relation" }
null
null
null
null
true
null
1135
null
Default
null
null
null
{ "abstract": " With the proliferation of mobile devices and the internet of things,\ndeveloping principled solutions for privacy in time series applications has\nbecome increasingly important. While differential privacy is the gold standard\nfor database privacy, many time series applications require a different kind of\nguarantee, and a number of recent works have used some form of inferential\nprivacy to address these situations.\nHowever, a major barrier to using inferential privacy in practice is its lack\nof graceful composition -- even if the same or related sensitive data is used\nin multiple releases that are safe individually, the combined release may have\npoor privacy properties. In this paper, we study composition properties of a\nform of inferential privacy called Pufferfish when applied to time-series data.\nWe show that while general Pufferfish mechanisms may not compose gracefully, a\nspecific Pufferfish mechanism, called the Markov Quilt Mechanism, which was\nrecently introduced, has strong composition properties comparable to that of\npure differential privacy when applied to time series data.\n", "title": "Composition Properties of Inferential Privacy for Time-Series Data" }
null
null
null
null
true
null
1136
null
Default
null
null
null
{ "abstract": " Understanding the origin, nature, and functional significance of complex\npatterns of neural activity, as recorded by diverse electrophysiological and\nneuroimaging techniques, is a central challenge in neuroscience. Such patterns\ninclude collective oscillations emerging out of neural synchronization as well\nas highly heterogeneous outbursts of activity interspersed by periods of\nquiescence, called \"neuronal avalanches.\" Much debate has been generated about\nthe possible scale invariance or criticality of such avalanches and its\nrelevance for brain function. Aimed at shedding light onto this, here we\nanalyze the large-scale collective properties of the cortex by using a\nmesoscopic approach following the principle of parsimony of Landau-Ginzburg.\nOur model is similar to that of Wilson-Cowan for neural dynamics but crucially,\nincludes stochasticity and space; synaptic plasticity and inhibition are\nconsidered as possible regulatory mechanisms. Detailed analyses uncover a phase\ndiagram including down-state, synchronous, asynchronous, and up-state phases\nand reveal that empirical findings for neuronal avalanches are consistently\nreproduced by tuning our model to the edge of synchronization. This reveals\nthat the putative criticality of cortical dynamics does not correspond to a\nquiescent-to-active phase transition as usually assumed in theoretical\napproaches but to a synchronization phase transition, at which incipient\noscillations and scale-free avalanches coexist. Furthermore, our model also\naccounts for up and down states as they occur (e.g., during deep sleep). This\napproach constitutes a framework to rationalize the possible collective phases\nand phase transitions of cortical networks in simple terms, thus helping to\nshed light on basic aspects of brain functioning from a very broad perspective.\n", "title": "Landau-Ginzburg theory of cortex dynamics: Scale-free avalanches emerge at the edge of synchronization" }
null
null
null
null
true
null
1137
null
Default
null
null
null
{ "abstract": " The main challenge of online multi-object tracking is to reliably associate\nobject trajectories with detections in each video frame based on their tracking\nhistory. In this work, we propose the Recurrent Autoregressive Network (RAN), a\ntemporal generative modeling framework to characterize the appearance and\nmotion dynamics of multiple objects over time. The RAN couples an external\nmemory and an internal memory. The external memory explicitly stores previous\ninputs of each trajectory in a time window, while the internal memory learns to\nsummarize long-term tracking history and associate detections by processing the\nexternal memory. We conduct experiments on the MOT 2015 and 2016 datasets to\ndemonstrate the robustness of our tracking method in highly crowded and\noccluded scenes. Our method achieves top-ranked results on the two benchmarks.\n", "title": "Recurrent Autoregressive Networks for Online Multi-Object Tracking" }
null
null
null
null
true
null
1138
null
Default
null
null
null
{ "abstract": " Classical plasma with arbitrary degree of degeneration of electronic gas is\nconsidered. In plasma N (N>2) collinear electromagnatic waves are propagated.\nIt is required to find the response of plasma to these waves. Distribution\nfunction in square-law approximation on quantities of two small parameters from\nVlasov equation is received. The formula for electric current calculation is\ndeduced. It is demonstrated that the nonlinearity account leads to occurrence\nof the longitudinal electric current directed along a wave vector. This\nlongitudinal current is orthogonal to the known transversal current received at\nthe linear analysis. The case of small values of wave number is considered.\n", "title": "The occurrence of transverse and longitudinal electric currents in the classical plasma under the action of N transverse electromagnetic waves" }
null
null
null
null
true
null
1139
null
Default
null
null
null
{ "abstract": " The paper studies a PDE model for the growth of a tree stem or a vine, having\nthe form of a differential inclusion with state constraints. The equations\ndescribe the elongation due to cell growth, and the response to gravity and to\nexternal obstacles.\nThe main theorem shows that the evolution problem is well posed, until a\nspecific \"breakdown configuration\" is reached. A formula is proved,\ncharacterizing the reaction produced by unilateral constraints. At a.e. time t,\nthis is determined by the minimization of an elastic energy functional under\nsuitable constraints.\n", "title": "Well-posedness of a Model for the Growth of Tree Stems and Vines" }
null
null
null
null
true
null
1140
null
Default
null
null
null
{ "abstract": " The discovery of multiple stellar populations in Milky Way globular clusters\n(GCs) has stimulated various follow-up studies on helium-enhanced stellar\npopulations. Here we present the evolutionary population synthesis models for\nthe spectro-photometric evolution of simple stellar populations (SSPs) with\nvarying initial helium abundance ($Y_{\\rm ini}$). We show that $Y_{\\rm ini}$\nbrings about {dramatic} changes in spectro-photometric properties of SSPs. Like\nthe normal-helium SSPs, the integrated spectro-photometric evolution of\nhelium-enhanced SSPs is also dependent on metallicity and age for a given\n$Y_{\\rm ini}$. {We discuss the implications and prospects for the\nhelium-enhanced populations in relation to the second-generation populations\nfound in the Milky Way GCs.} All of the models are available at\n\\url{this http URL}.\n", "title": "Yonsei evolutionary population synthesis (YEPS). II. Spectro-photometric evolution of helium-enhanced stellar populations" }
null
null
null
null
true
null
1141
null
Default
null
null
null
{ "abstract": " We establish a large deviation theorem for the empirical spectral\ndistribution of random covariance matrices whose entries are independent random\nvariables with mean 0, variance 1 and having controlled forth moments. Some new\nproperties of Laguerre polynomials are also given.\n", "title": "Large deviation theorem for random covariance matrices" }
null
null
null
null
true
null
1142
null
Default
null
null
null
{ "abstract": " A reinforcement learning agent that needs to pursue different goals across\nepisodes requires a goal-conditional policy. In addition to their potential to\ngeneralize desirable behavior to unseen goals, such policies may also enable\nhigher-level planning based on subgoals. In sparse-reward environments, the\ncapacity to exploit information about the degree to which an arbitrary goal has\nbeen achieved while another goal was intended appears crucial to enable sample\nefficient learning. However, reinforcement learning agents have only recently\nbeen endowed with such capacity for hindsight. In this paper, we demonstrate\nhow hindsight can be introduced to policy gradient methods, generalizing this\nidea to a broad class of successful algorithms. Our experiments on a diverse\nselection of sparse-reward environments show that hindsight leads to a\nremarkable increase in sample efficiency.\n", "title": "Hindsight policy gradients" }
null
null
null
null
true
null
1143
null
Default
null
null
null
{ "abstract": " Photoionized nebulae, comprising HII regions and planetary nebulae, are\nexcellent laboratories to investigate the nucleosynthesis and chemical\nevolution of several elements in the Galaxy and other galaxies of the Local\nGroup. Our purpose in this investigation is threefold: (i) compare the\nabundances of HII regions and planetary nebulae in each system in order to\ninvestigate the differences derived from the age and origin of these objects,\n(ii) compare the chemical evolution in different systems, such as the Milky\nWay, the Magellanic Clouds, and other galaxies of the Local Group, and (iii)\ninvestigate to what extent the nucleosynthesis contributions from the\nprogenitor stars affect the observed abundances in planetary nebulae, which\nconstrains the nucleosynthesis of intermediate mass stars. We show that all\nobjects in the samples present similar trends concerning distance-independent\ncorrelations, and some constraints can be defined on the production of He and N\nby the PN progenitor stars.\n", "title": "Abundances in photoionized nebulae of the Local Group and nucleosynthesis of intermediate mass stars" }
null
null
[ "Physics" ]
null
true
null
1144
null
Validated
null
null
null
{ "abstract": " The prediction of cancer prognosis and metastatic potential immediately after\nthe initial diagnoses is a major challenge in current clinical research. The\nrelevance of such a signature is clear, as it will free many patients from the\nagony and toxic side-effects associated with the adjuvant chemotherapy\nautomatically and sometimes carelessly subscribed to them. Motivated by this\nissue, Ein-Dor (2006) and Zuk (2007) presented a Bayesian model which leads to\nthe following conclusion: Thousands of samples are needed to generate a robust\ngene list for predicting outcome. This conclusion is based on existence of some\nstatistical assumptions. The current work raises doubts over this determination\nby showing that: (1) These assumptions are not consistent with additional\nassumptions such as sparsity and Gaussianity. (2) The empirical Bayes\nmethodology which was suggested in order to test the relevant assumptions\ndoesn't detect severe violations of the model assumptions and consequently an\noverestimation of the required sample size might be incurred.\n", "title": "Are Thousands of Samples Really Needed to Generate Robust Gene-List for Prediction of Cancer Outcome?" }
null
null
null
null
true
null
1145
null
Default
null
null
null
{ "abstract": " We propose a novel estimation procedure for scale-by-scale lead-lag\nrelationships of financial assets observed at a high-frequency in a\nnon-synchronous manner. The proposed estimation procedure does not require any\ninterpolation processing of the original data and is applicable to quite fine\nresolution data. The validity of the proposed estimators is shown under the\ncontinuous-time framework developed in our previous work Hayashi and Koike\n(2016). An empirical application shows promising results of the proposed\napproach.\n", "title": "Multi-scale analysis of lead-lag relationships in high-frequency financial markets" }
null
null
null
null
true
null
1146
null
Default
null
null
null
{ "abstract": " We consider a general branching population where the lifetimes of individuals\nare i.i.d.\\ with arbitrary distribution and where each individual gives birth\nto new individuals at Poisson times independently from each other. In addition,\nwe suppose that individuals experience mutations at Poissonian rate $\\theta$\nunder the infinitely many alleles assumption assuming that types are\ntransmitted from parents to offspring. This mechanism leads to a partition of\nthe population by type, called the allelic partition. The main object of this\nwork is the frequency spectrum $A(k,t)$ which counts the number of families of\nsize $k$ in the population at time $t$. The process $(A(k,t),\\\nt\\in\\mathbb{R}_+)$ is an example of non-Markovian branching process belonging\nto the class of general branching processes counted by random characteristics.\nIn this work, we propose methods of approximation to replace the frequency\nspectrum by simpler quantities. Our main goal is study the asymptotic error\nmade during these approximations through central limit theorems. In a last\nsection, we perform several numerical analysis using this model, in particular\nto analyze the behavior of one of these approximations with respect to Sabeti's\nExtended Haplotype Homozygosity [18].\n", "title": "Approximations of the allelic frequency spectrum in general supercritical branching populations" }
null
null
[ "Mathematics" ]
null
true
null
1147
null
Validated
null
null
null
{ "abstract": " As the distribution grid moves toward a tightly-monitored network, it is\nimportant to automate the analysis of the enormous amount of data produced by\nthe sensors to increase the operators situational awareness about the system.\nIn this paper, focusing on Micro-Phasor Measurement Unit ($\\mu$PMU) data, we\npropose a hierarchical architecture for monitoring the grid and establish a set\nof analytics and sensor fusion primitives for the detection of abnormal\nbehavior in the control perimeter. Due to the key role of the $\\mu$PMU devices\nin our architecture, a source-constrained optimal $\\mu$PMU placement is also\ndescribed that finds the best location of the devices with respect to our\nrules. The effectiveness of the proposed methods are tested through the\nsynthetic and real $\\mu$PMU data.\n", "title": "Anomaly Detection Using Optimally-Placed Micro-PMU Sensors in Distribution Grids" }
null
null
null
null
true
null
1148
null
Default
null
null
null
{ "abstract": " Investigation of the electron-phonon interaction (EPI) in LaCuSb2 and\nLa(Cu0.8Ag0.2)Sb2 compounds by Yanson point-contact spectroscopy (PCS) has been\ncarried out. Point-contact spectra display a pronounced broad maximum in the\nrange of 10÷20 mV caused by EPI. Variation of the position of this maximum\nis likely connected with anisotropic phonon spectrum in these layered\ncompounds. The absence of phonon features after the main maximum allows the\nassessment of the Debye energy of about 40 meV. The EPI constant for the\nLaCuSb2 compound was estimated to be {\\lambda}=0.2+/-0.03. A zero-bias minimum\nin differential resistance for the latter compound is observed for some point\ncontacts, which vanishes at about 6 K, pointing to the formation of\nsuperconducting phase under point contact, while superconducting critical\ntemperature of the bulk sample is only 1K.\n", "title": "Electron-Phonon Interaction in Ternary Rare-Earth Copper Antimonides LaCuSb2 and La(Cu0.8Ag0.2)Sb2 probed by Yanson Point-Contact Spectroscopy" }
null
null
null
null
true
null
1149
null
Default
null
null
null
{ "abstract": " The magnetic phases of a triangular-lattice antiferromagnet, CuCrO$_2$, were\ninvestigated in magnetic fields along to the $c$ axis, $H$ // [001], up to 120\nT. Faraday rotation and magneto-absorption spectroscopy were used to unveil the\nrich physics of magnetic phases. An up-up-down (UUD) magnetic structure phase\nwas observed around 90--105 T at temperatures around 10 K. Additional distinct\nanomalies adjacent to the UUD phase were uncovered and the Y-shaped and the\nV-shaped phases are proposed to be viable candidates. These ordered phases are\nemerged as a result of the interplay of geometrical spin frustration, single\nion anisotropy and thermal fluctuations in an environment of extremely high\nmagnetic fields.\n", "title": "Ultrahigh Magnetic Field Phases in Frustrated Triangular-lattice Magnet CuCrO$_2$" }
null
null
null
null
true
null
1150
null
Default
null
null
null
{ "abstract": " The remarkable development of deep learning in medicine and healthcare domain\npresents obvious privacy issues, when deep neural networks are built on users'\npersonal and highly sensitive data, e.g., clinical records, user profiles,\nbiomedical images, etc. However, only a few scientific studies on preserving\nprivacy in deep learning have been conducted. In this paper, we focus on\ndeveloping a private convolutional deep belief network (pCDBN), which\nessentially is a convolutional deep belief network (CDBN) under differential\nprivacy. Our main idea of enforcing epsilon-differential privacy is to leverage\nthe functional mechanism to perturb the energy-based objective functions of\ntraditional CDBNs, rather than their results. One key contribution of this work\nis that we propose the use of Chebyshev expansion to derive the approximate\npolynomial representation of objective functions. Our theoretical analysis\nshows that we can further derive the sensitivity and error bounds of the\napproximate polynomial representation. As a result, preserving differential\nprivacy in CDBNs is feasible. We applied our model in a health social network,\ni.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for\nhuman behavior prediction, human behavior classification, and handwriting digit\nrecognition tasks. Theoretical analysis and rigorous experimental evaluations\nshow that the pCDBN is highly effective. It significantly outperforms existing\nsolutions.\n", "title": "Preserving Differential Privacy in Convolutional Deep Belief Networks" }
null
null
null
null
true
null
1151
null
Default
null
null
null
{ "abstract": " We show that a sufficient condition for the weak limit of a sequence of\n$W^1_q$-homeomorphisms with finite distortion to be almost everywhere injective\nfor $q \\geq n-1$, can be stated by means of composition operators. Applying\nthis result, we study nonlinear elasticity problems with respect to these new\nclasses of mappings. Furthermore, we impose loose growth conditions on the\nstored-energy function for the class of $W^1_n$-homeomorphisms with finite\ndistortion and integrable inner as well as outer distortion coefficients.\n", "title": "Injectivity almost everywhere and mappings with finite distortion in nonlinear elasticity" }
null
null
null
null
true
null
1152
null
Default
null
null
null
{ "abstract": " Consider the classical Erdos-Renyi random graph process wherein one starts\nwith an empty graph on $n$ vertices at time $t=0$. At each stage, an edge is\nchosen uniformly at random and placed in the graph. After the original\nfundamental work in [19], Erdős suggested that one should view the original\nrandom graph process as a \"race of components\". This suggested understanding\nfunctionals such as the time for fixation of the identity of the maximal\ncomponent, sometimes referred to as the \"leader problem\". Using refined\ncombinatorial techniques, {\\L}uczak [25] provided a complete analysis of this\nquestion including the close relationship to the critical scaling window of the\nErdos-Renyi process. In this paper, we abstract this problem to the context of\nthe multiplicative coalescent which by the work of Aldous in [3] describes the\nevolution of the Erdos-Renyi random graph in the critical regime. Further,\ndifferent entrance boundaries of this process have arisen in the study of heavy\ntailed network models in the critical regime with degree exponent $\\tau \\in\n(3,4)$. The leader problem in the context of the Erdos-Renyi random graph also\nplayed an important role in the study of the scaling limit of the minimal\nspanning tree on the complete graph [2]. In this paper we provide a\nprobabilistic analysis of the leader problem for the multiplicative coalescent\nin the context of entrance boundaries of relevance to critical random graphs.\nAs a special case we recover {\\L}uczak's result in [25] for the Erdos-Renyi\nrandom graph.\n", "title": "A probabilistic approach to the leader problem in random graphs" }
null
null
[ "Mathematics" ]
null
true
null
1153
null
Validated
null
null
null
{ "abstract": " The Least Significant Bit (LSB) substitution is an old and simple data hiding\nmethod that could almost effortlessly be implemented in spatial or transform\ndomain over any digital media. This method can be attacked by several\nsteganalysis methods, because it detectably changes statistical and perceptual\ncharacteristics of the cover signal. A typical method for steganalysis of the\nLSB substitution is the histogram attack that attempts to diagnose anomalies in\nthe cover image's histogram. A well-known method to stand the histogram attack\nis the LSB+ steganography that intentionally embeds some extra bits to make the\nhistogram look natural. However, the LSB+ method still affects the perceptual\nand statistical characteristics of the cover signal. In this paper, we propose\na new method for image steganography, called LSB++, which improves over the\nLSB+ image steganography by decreasing the amount of changes made to the\nperceptual and statistical attributes of the cover image. We identify some\nsensitive pixels affecting the signal characteristics, and then lock and keep\nthem from the extra bit embedding process of the LSB+ method, by introducing a\nnew embedding key. Evaluation results show that, without reducing the embedding\ncapacity, our method can decrease potentially detectable changes caused by the\nembedding process.\n", "title": "An improvement on LSB+ method" }
null
null
null
null
true
null
1154
null
Default
null
null
null
{ "abstract": " In this work we outline the mechanisms contributing to the oxygen reduction\nreaction in nanostructured cathodes of La0.8Sr0.2MnO3 (LSM) for Solid Oxide\nFuel Cells (SOFC). These cathodes, developed from LSM nanostructured tubes, can\nbe used at lower temperatures compared to microstructured ones, and this is a\ncrucial fact to avoid the degradation of the fuel cell components. This\nreduction of the operating temperatures stems mainly from two factors: i) the\nappearance of significant oxide ion diffusion through the cathode material in\nwhich the nanostructure plays a key role and ii) an optimized gas phase\ndiffusion of oxygen through the porous structure of the cathode, which becomes\nnegligible. A detailed analysis of our Electrochemical Impedance Spectroscopy\nsupported by first principles calculations point towards an improved overall\ncathodic performance driven by a fast transport of oxide ions through the\ncathode surface.\n", "title": "Oxygen reduction mechanisms in nanostructured La0.8Sr0.2MnO3 cathodes for Solid Oxide Fuel Cells" }
null
null
null
null
true
null
1155
null
Default
null
null
null
{ "abstract": " Machine learning has shown much promise in helping improve the quality of\nmedical, legal, and economic decision-making. In these applications, machine\nlearning models must satisfy two important criteria: (i) they must be causal,\nsince the goal is typically to predict individual treatment effects, and (ii)\nthey must be interpretable, so that human decision makers can validate and\ntrust the model predictions. There has recently been much progress along each\ndirection independently, yet the state-of-the-art approaches are fundamentally\nincompatible. We propose a framework for learning causal interpretable\nmodels---from observational data---that can be used to predict individual\ntreatment effects. Our framework can be used with any algorithm for learning\ninterpretable models. Furthermore, we prove an error bound on the treatment\neffects predicted by our model. Finally, in an experiment on real-world data,\nwe show that the models trained using our framework significantly outperform a\nnumber of baselines.\n", "title": "Learning Interpretable Models with Causal Guarantees" }
null
null
null
null
true
null
1156
null
Default
null
null
null
{ "abstract": " Lenses are crucial to light-enabled technologies. Conventional lenses have\nbeen perfected to achieve near-diffraction-limited resolution and minimal\nchromatic aberrations. However, such lenses are bulky and cannot focus light\ninto a hotspot smaller than half wavelength of light. Pupil filters, initially\nsuggested by Toraldo di Francia, can overcome the resolution constraints of\nconventional lenses, but are not intrinsically chromatically corrected. Here we\nreport single-element planar lenses that not only deliver sub-wavelength\nfocusing (beating the diffraction limit of conventional refractive lenses) but\nalso focus light of different colors into the same hotspot. Using the principle\nof super-oscillations we designed and fabricated a range of binary dielectric\nand metallic lenses for visible and infrared parts of the spectrum that are\nmanufactured on silicon wafers, silica substrates and optical fiber tips. Such\nlow cost, compact lenses could be useful in mobile devices, data storage,\nsurveillance, robotics, space applications, imaging, manufacturing with light,\nand spatially resolved nonlinear microscopies.\n", "title": "Achromatic super-oscillatory lenses with sub-wavelength focusing" }
null
null
[ "Physics" ]
null
true
null
1157
null
Validated
null
null
null
{ "abstract": " We present the extension of variational Monte Carlo (VMC) to the calculation\nof electronic excitation energies and oscillator strengths using time-dependent\nlinear-response theory. By exploiting the analogy existing between the linear\nmethod for wave-function optimisation and the generalised eigenvalue equation\nof linear-response theory, we formulate the equations of linear-response VMC\n(LR-VMC). This LR-VMC approach involves the first-and second-order derivatives\nof the wave function with respect to the parameters. We perform first tests of\nthe LR-VMC method within the Tamm-Dancoff approximation using\nsingle-determinant Jastrow-Slater wave functions with different Slater basis\nsets on some singlet and triplet excitations of the beryllium atom. Comparison\nwith reference experimental data and with configuration-interaction-singles\n(CIS) results shows that LR-VMC generally outperforms CIS for excitation\nenergies and is thus a promising approach for calculating electronic\nexcited-state properties of atoms and molecules.\n", "title": "Time-dependent linear-response variational Monte Carlo" }
null
null
[ "Physics" ]
null
true
null
1158
null
Validated
null
null
null
{ "abstract": " This paper studies power allocation for distributed estimation of an unknown\nscalar random source in sensor networks with a multiple-antenna fusion center\n(FC), where wireless sensors are equipped with radio-frequency based energy\nharvesting technology. The sensors' observation is locally processed by using\nan uncoded amplify-and-forward scheme. The processed signals are then sent to\nthe FC, and are coherently combined at the FC, at which the best linear\nunbiased estimator (BLUE) is adopted for reliable estimation. We aim to solve\nthe following two power allocation problems: 1) minimizing distortion under\nvarious power constraints; and 2) minimizing total transmit power under\ndistortion constraints, where the distortion is measured in terms of\nmean-squared error of the BLUE. Two iterative algorithms are developed to solve\nthe non-convex problems, which converge at least to a local optimum. In\nparticular, the above algorithms are designed to jointly optimize the\namplification coefficients, energy beamforming, and receive filtering. For each\nproblem, a suboptimal design, a single-antenna FC scenario, and a common\nharvester deployment for colocated sensors, are also studied. Using the\npowerful semidefinite relaxation framework, our result is shown to be valid for\nany number of sensors, each with different noise power, and for an arbitrarily\nnumber of antennas at the FC.\n", "title": "Wireless Power Transfer for Distributed Estimation in Sensor Networks" }
null
null
null
null
true
null
1159
null
Default
null
null
null
{ "abstract": " Using algebraic methods, and motivated by the one variable case, we study a\nmultipoint interpolation problem in the setting of several complex variables.\nThe duality realized by the residue generator associated with an underlying\nGorenstein algebra, using the Lagrange interpolation polynomial, plays a key\nrole in the arguments.\n", "title": "About a non-standard interpolation problem" }
null
null
null
null
true
null
1160
null
Default
null
null
null
{ "abstract": " Motivated by recent experiments on $\\alpha$-RuCl$_3$, we investigate a\npossible quantum spin liquid ground state of the honeycomb-lattice spin model\nwith bond-dependent interactions. We consider the $K-\\Gamma$ model, where $K$\nand $\\Gamma$ represent the Kitaev and symmetric-anisotropic interactions\nbetween spin-1/2 moments on the honeycomb lattice. Using the infinite density\nmatrix renormalization group (iDMRG), we provide compelling evidence for the\nexistence of quantum spin liquid phases in an extended region of the phase\ndiagram. In particular, we use transfer matrix spectra to show the evolution of\ntwo-particle excitations with well-defined two-dimensional dispersion, which is\na strong signature of quantum spin liquid. These results are compared with\npredictions from Majorana mean-field theory and used to infer the quasiparticle\nexcitation spectra. Further, we compute the dynamical structure factor using\nfinite size cluster computations and show that the results resemble the\nscattering continuum seen in neutron scattering experiments on\n$\\alpha$-RuCl$_3$. We discuss these results in light of recent and future\nexperiments.\n", "title": "Quantum spin liquid signatures in Kitaev-like frustrated magnets" }
null
null
null
null
true
null
1161
null
Default
null
null
null
{ "abstract": " The aim of this study is to investigate the magnetospheric disturbances\neffects on complicated nonlinear system of atmospheric processes. During\nsubstorms and storms, the ionosphere was subjected to rather a significant\nJoule heating, and the power of precipitating energetic particles was also\ngreat. Nevertheless, there were no abnormal variations of meteoparameters in\nthe lower atmosphere. If there is a mechanism for the powerful magnetospheric\ndisturbance effect on meteorological processes in the atmosphere, it supposes a\nmore complicated series of many intermediates, and is not associated directly\nwith the energy that arrives into the ionosphere during storms. I discuss the\nproblem of the effect of the solar wind electric field sharp increase via the\nglobal electric circuit during magnetospheric disturbances on the cloud layer\nformation.\n", "title": "Equivalent electric circuit of magnetosphere-ionosphere-atmosphere interaction" }
null
null
[ "Physics" ]
null
true
null
1162
null
Validated
null
null
null
{ "abstract": " In the new approach to study the optical response of periodic structures,\nsuccessfully applied to study the optical properties of blue-emitting InGaN/GaN\nsuperlattices, the spontaneous charge polarization was neglected. To search the\neffect of this quantum confined Stark phenomenon we study the optical response,\nassuming parabolic band edge modulations in the conduction and valence bands.\nWe discuss the consequences on the eigenfunction symmetries and the ensuing\noptical transition selection rules. Using the new approach in the WKB\napproximation of the finite periodic systems theory, we determine the energy\neigenvalues, their corresponding eigenfunctions and the subband structures in\nthe conduction and valence bands. We calculate the photoluminescence as a\nfunction of the charge localization strength, and compare with the experimental\nresult. We show that for subbands close to the barrier edge the optical\nresponse and the surface states are sensitive to charge polarization strength.\n", "title": "Charge polarization effects on the optical response of blue-emitting superlattices" }
null
null
[ "Physics" ]
null
true
null
1163
null
Validated
null
null
null
{ "abstract": " Overfitting, which happens when the number of parameters in a model is too\nlarge compared to the number of data points available for determining these\nparameters, is a serious and growing problem in survival analysis. While modern\nmedicine presents us with data of unprecedented dimensionality, these data\ncannot yet be used effectively for clinical outcome prediction. Standard error\nmeasures in maximum likelihood regression, such as p-values and z-scores, are\nblind to overfitting, and even for Cox's proportional hazards model (the main\ntool of medical statisticians), one finds in literature only rules of thumb on\nthe number of samples required to avoid overfitting. In this paper we present a\nmathematical theory of overfitting in regression models for time-to-event data,\nwhich aims to increase our quantitative understanding of the problem and\nprovide practical tools with which to correct regression outcomes for the\nimpact of overfitting. It is based on the replica method, a statistical\nmechanical technique for the analysis of heterogeneous many-variable systems\nthat has been used successfully for several decades in physics, biology, and\ncomputer science, but not yet in medical statistics. We develop the theory\ninitially for arbitrary regression models for time-to-event data, and verify\nits predictions in detail for the popular Cox model.\n", "title": "Replica analysis of overfitting in regression models for time-to-event data" }
null
null
null
null
true
null
1164
null
Default
null
null
null
{ "abstract": " Russell is a logical framework for the specification and implementation of\ndeductive systems. It is a high-level language with respect to Metamath\nlanguage, so inherently it uses a Metamath foundations, i.e. it doesn't rely on\nany particular formal calculus, but rather is a pure logical framework. The\nmain difference with Metamath is in the proof language and approach to syntax:\nthe proofs have a declarative form, i.e. consist of actual expressions, which\nare used in proofs, while syntactic grammar rules are separated from the\nmeaningful rules of inference.\nRussell is implemented in c++14 and is distributed under GPL v3 license. The\nrepository contains translators from Metamath to Russell and back. Original\nMetamath theorem base (almost 30 000 theorems) can be translated to Russell,\nverified, translated back to Metamath and verified with the original Metamath\nverifier. Russell can be downloaded from the repository\nthis https URL\n", "title": "System Description: Russell - A Logical Framework for Deductive Systems" }
null
null
null
null
true
null
1165
null
Default
null
null
null
{ "abstract": " \"Let us call the novel quantities which, in addition to the vectors and\ntensors, have appeared in the quantum mechanics of the spinning electron, and\nwhich in the case of the Lorentz group are quite differently transformed from\ntensors, as spinors for short. Is there no spinor analysis that every physicist\ncan learn, such as tensor analysis, and with the aid of which all the possible\nspinors can be formed, and secondly, all the invariant equations in which\nspinors occur?\" So Mr Ehrenfest asked me and the answer will be given below.\n", "title": "Spinor analysis" }
null
null
[ "Physics" ]
null
true
null
1166
null
Validated
null
null
null
{ "abstract": " Distances between sequences based on their $k$-mer frequency counts can be\nused to reconstruct phylogenies without first computing a sequence alignment.\nPast work has shown that effective use of k-mer methods depends on 1)\nmodel-based corrections to distances based on $k$-mers and 2) breaking long\nsequences into blocks to obtain repeated trials from the sequence-generating\nprocess. Good performance of such methods is based on having many high-quality\nblocks with many homologous sites, which can be problematic to guarantee a\npriori.\nNature provides natural blocks of sequences into homologous regions---namely,\nthe genes. However, directly using past work in this setting is problematic\nbecause of possible discordance between different gene trees and the underlying\nspecies tree. Using the multispecies coalescent model as a basis, we derive\nmodel-based moment formulas that involve the divergence times and the\ncoalescent parameters. From this setting, we prove identifiability results for\nthe tree and branch length parameters under the Jukes-Cantor model of sequence\nmutations.\n", "title": "Identifiability of phylogenetic parameters from k-mer data under the coalescent" }
null
null
null
null
true
null
1167
null
Default
null
null
null
{ "abstract": " In the exciton-polariton system, a linear dispersive photon field is coupled\nto a nonlinear exciton field. Short-time analysis of the lossless system shows\nthat, when the photon field is excited, the time required for that field to\nexhibit nonlinear effects is longer than the time required for the nonlinear\nSchrödinger equation, in which the photon field itself is nonlinear. When the\ninitial condition is scaled by $\\epsilon^\\alpha$, it is found that the relative\nerror committed by omitting the nonlinear term in the exciton-polariton system\nremains within $\\epsilon$ for all times up to $t=C\\epsilon^\\beta$, where\n$\\beta=(1-\\alpha(p-1))/(p+2)$. This is in contrast to $\\beta=1-\\alpha(p-1)$ for\nthe nonlinear Schrödinger equation.\n", "title": "Short-Time Nonlinear Effects in the Exciton-Polariton System" }
null
null
null
null
true
null
1168
null
Default
null
null
null
{ "abstract": " We present the results of our search for the faint galaxies near the end of\nthe Reionisation Epoch. This has been done using very deep OSIRIS images\nobtained at the Gran Telescopio Canarias (GTC). Our observations focus around\ntwo close, massive Lyman Alpha Emitters (LAEs) at redshift 6.5, discovered in\nthe SXDS field within a large-scale overdense region (Ouchi et al. 2010). The\ntotal GTC observing time in three medium band filters (F883w35, F913w25 and\nF941w33) is over 34 hours covering $7.0\\times8.5$ arcmin$^2$ (or $\\sim30,000$\nMpc$^3$ at $z=6.5$). In addition to the two spectroscopically confirmed LAEs in\nthe field, we have identified 45 other LAE candidates. The preliminary\nluminosity function derived from our observations, assuming a spectroscopic\nconfirmation success rate of $\\frac{2}{3}$ as in previous surveys, suggests\nthis area is about 2 times denser than the general field galaxy population at\n$z=6.5$. If confirmed spectroscopically, our results will imply the discovery\nof one of the earliest protoclusters in the universe, which will evolve to\nresemble the most massive galaxy clusters today.\n", "title": "GTC Observations of an Overdense Region of LAEs at z=6.5" }
null
null
null
null
true
null
1169
null
Default
null
null
null
{ "abstract": " A recent paper [X. Guo, A. Mandelis, J. Tolev and K. Tang, J. Appl. Phys.,\n121, 095101 (2017)] intends to demonstrate that from the photothermal\nradiometry signal obtained on a coated opaque sample in 1D transfer, one should\nbe able to identify separately the following three parameters of the coating:\nthermal diffusivity, thermal conductivity and thickness. In this comment, it is\nshown that the three parameters are correlated in the considered experimental\narrangement, the identifiability criterion is in error and the thickness\ninferred therefrom is not trustable.\n", "title": "Comment on Photothermal radiometry parametric identifiability theory for reliable and unique nondestructive coating thickness and thermophysical measurements, J. Appl. Phys. 121(9), 095101 (2017)" }
null
null
null
null
true
null
1170
null
Default
null
null
null
{ "abstract": " A fundamental question in systems biology is what combinations of mean and\nvariance of the species present in a stochastic biochemical reaction network\nare attainable by perturbing the system with an external signal. To address\nthis question, we show that the moments evolution in any generic network can be\neither approximated or, under suitable assumptions, computed exactly as the\nsolution of a switched affine system. Motivated by this application, we propose\na new method to approximate the reachable set of switched affine systems. A\nremarkable feature of our approach is that it allows one to easily compute\nprojections of the reachable set for pairs of moments of interest, without\nrequiring the computation of the full reachable set, which can be prohibitive\nfor large networks. As a second contribution, we also show how to select the\nexternal signal in order to maximize the probability of reaching a target set.\nTo illustrate the method we study a renown model of controlled gene expression\nand we derive estimates of the reachable set, for the protein mean and\nvariance, that are more accurate than those available in the literature and\nconsistent with experimental data.\n", "title": "Computing the projected reachable set of switched affine systems: an application to systems biology" }
null
null
null
null
true
null
1171
null
Default
null
null
null
{ "abstract": " We address the problem of temporal action localization in videos. We pose\naction localization as a structured prediction over arbitrary-length temporal\nwindows, where each window is scored as the sum of frame-wise classification\nscores. Additionally, our model classifies the start, middle, and end of each\naction as separate components, allowing our system to explicitly model each\naction's temporal evolution and take advantage of informative temporal\ndependencies present in this structure. In this framework, we localize actions\nby searching for the structured maximal sum, a problem for which we develop a\nnovel, provably-efficient algorithmic solution. The frame-wise classification\nscores are computed using features from a deep Convolutional Neural Network\n(CNN), which are trained end-to-end to directly optimize for a novel structured\nobjective. We evaluate our system on the THUMOS 14 action detection benchmark\nand achieve competitive performance.\n", "title": "Temporal Action Localization by Structured Maximal Sums" }
null
null
null
null
true
null
1172
null
Default
null
null
null
{ "abstract": " Cassava is the third largest source of carbohydrates for human food in the\nworld but is vulnerable to virus diseases, which threaten to destabilize food\nsecurity in sub-Saharan Africa. Novel methods of cassava disease detection are\nneeded to support improved control which will prevent this crisis. Image\nrecognition offers both a cost effective and scalable technology for disease\ndetection. New transfer learning methods offer an avenue for this technology to\nbe easily deployed on mobile devices. Using a dataset of cassava disease images\ntaken in the field in Tanzania, we applied transfer learning to train a deep\nconvolutional neural network to identify three diseases and two types of pest\ndamage (or lack thereof). The best trained model accuracies were 98% for brown\nleaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage\n(GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic\ndisease (CMD). The best model achieved an overall accuracy of 93% for data not\nused in the training process. Our results show that the transfer learning\napproach for image recognition of field images offers a fast, affordable, and\neasily deployable strategy for digital plant disease detection.\n", "title": "Using Transfer Learning for Image-Based Cassava Disease Detection" }
null
null
null
null
true
null
1173
null
Default
null
null
null
{ "abstract": " Ability to continuously learn and adapt from limited experience in\nnonstationary environments is an important milestone on the path towards\ngeneral intelligence. In this paper, we cast the problem of continuous\nadaptation into the learning-to-learn framework. We develop a simple\ngradient-based meta-learning algorithm suitable for adaptation in dynamically\nchanging and adversarial scenarios. Additionally, we design a new multi-agent\ncompetitive environment, RoboSumo, and define iterated adaptation games for\ntesting various aspects of continuous adaptation strategies. We demonstrate\nthat meta-learning enables significantly more efficient adaptation than\nreactive baselines in the few-shot regime. Our experiments with a population of\nagents that learn and compete suggest that meta-learners are the fittest.\n", "title": "Continuous Adaptation via Meta-Learning in Nonstationary and Competitive Environments" }
null
null
null
null
true
null
1174
null
Default
null
null
null
{ "abstract": " We investigate the use of alternative divergences to Kullback-Leibler (KL) in\nvariational inference(VI), based on the Variational Dropout \\cite{kingma2015}.\nStochastic gradient variational Bayes (SGVB) \\cite{aevb} is a general framework\nfor estimating the evidence lower bound (ELBO) in Variational Bayes. In this\nwork, we extend the SGVB estimator with using Alpha-Divergences, which are\nalternative to divergences to VI' KL objective. The Gaussian dropout can be\nseen as a local reparametrization trick of the SGVB objective. We extend the\nVariational Dropout to use alpha divergences for variational inference. Our\nresults compare $\\alpha$-divergence variational dropout with standard\nvariational dropout with correlated and uncorrelated weight noise. We show that\nthe $\\alpha$-divergence with $\\alpha \\rightarrow 1$ (or KL divergence) is still\na good measure for use in variational inference, in spite of the efficient use\nof Alpha-divergences for Dropout VI \\cite{Li17}. $\\alpha \\rightarrow 1$ can\nyield the lowest training error, and optimizes a good lower bound for the\nevidence lower bound (ELBO) among all values of the parameter $\\alpha \\in\n[0,\\infty)$.\n", "title": "Alpha-Divergences in Variational Dropout" }
null
null
null
null
true
null
1175
null
Default
null
null
null
{ "abstract": " The curvature properties of Robinson-Trautman metric have been investigated.\nIt is shown that Robinson-Trautman metric admits several kinds of\npseudosymmetric type structures such as Weyl pseudosymmetric, Ricci\npseudosymmetric, pseudosymmetric Weyl conformal curvature tensor etc. Also it\nis shown that the difference $R\\cdot R - Q(S,R)$ is linearly dependent with\n$Q(g,C)$ but the metric is not Ricci generalized pseudosymmetric. Moreover, it\nis proved that this metric is Roter type, 2-quasi-Einstein, Ricci tensor is\nRiemann compatible and its Weyl conformal curvature 2-forms are recurrent. It\nis also shown that the energy momentum tensor of the metric is pseudosymmetric\nand the conditions under which such tensor is of Codazzi type and cyclic\nparallel have been investigated. Finally, we have made a comparison between the\ncurvature properties of Robinson-Trautman metric and Som-Raychaudhuri metric.\n", "title": "Curvature properties of Robinson-Trautman metric" }
null
null
null
null
true
null
1176
null
Default
null
null
null
{ "abstract": " We prove that the Dehn invariant of any flexible polyhedron in Euclidean\nspace of dimension greater than or equal to 3 is constant during the flexion.\nIn dimensions 3 and 4 this implies that any flexible polyhedron remains\nscissors congruent to itself during the flexion. This proves the Strong Bellows\nConjecture posed by Connelly in 1979. It was believed that this conjecture was\ndisproved by Alexandrov and Connelly in 2009. However, we find an error in\ntheir counterexample. Further, we show that the Dehn invariant of a flexible\npolyhedron in either sphere or Lobachevsky space of dimension greater than or\nequal to 3 is constant during the flexion if and only if this polyhedron\nsatisfies the usual Bellows Conjecture, i.e., its volume is constant during\nevery flexion of it. Using previous results due to the first listed author, we\ndeduce that the Dehn invariant is constant during the flexion for every bounded\nflexible polyhedron in odd-dimensional Lobachevsky space and for every flexible\npolyhedron with sufficiently small edge lengths in any space of constant\ncurvature of dimension greater than or equal to 3.\n", "title": "Dehn invariant of flexible polyhedra" }
null
null
null
null
true
null
1177
null
Default
null
null
null
{ "abstract": " Skillful mobile operation in three-dimensional environments is a primary\ntopic of study in Artificial Intelligence. The past two years have seen a surge\nof creative work on navigation. This creative output has produced a plethora of\nsometimes incompatible task definitions and evaluation protocols. To coordinate\nongoing and future research in this area, we have convened a working group to\nstudy empirical methodology in navigation research. The present document\nsummarizes the consensus recommendations of this working group. We discuss\ndifferent problem statements and the role of generalization, present evaluation\nmeasures, and provide standard scenarios that can be used for benchmarking.\n", "title": "On Evaluation of Embodied Navigation Agents" }
null
null
null
null
true
null
1178
null
Default
null
null
null
{ "abstract": " We utilize variational method to investigate the Kondo screening of a\nspin-1/2 magnetic impurity in tilted Dirac surface states with the Dirac cone\ntilted along the $k_y$-axis. We mainly study about the effect of the tilting\nterm on the binding energy and the spin-spin correlation between magnetic\nimpurity and conduction electrons, and compare the results with the\ncounterparts in a two dimensional helical metal. The binding energy has a\ncritical value while the Dirac cone is slightly tilted. However, as the tilting\nterm increases, the density of states around the Fermi surface becomes\nsignificant, such that the impurity and the host material always favor a bound\nstate. The diagonal and the off-diagonal terms of the spin-spin correlation\nbetween the magnetic impurity and conduction electrons are also studied. Due to\nthe spin-orbit coupling and the tilting of the spectra, various components of\nspin-spin correlation show very strong anisotropy in coordinate space, and are\nof power-law decay with respect to the spatial displacements.\n", "title": "Single Magnetic Impurity in Tilted Dirac Surface States" }
null
null
null
null
true
null
1179
null
Default
null
null
null
{ "abstract": " Human action recognition in videos is one of the most challenging tasks in\ncomputer vision. One important issue is how to design discriminative features\nfor representing spatial context and temporal dynamics. Here, we introduce a\npath signature feature to encode information from intra-frame and inter-frame\ncontexts. A key step towards leveraging this feature is to construct the proper\ntrajectories (paths) for the data stream. In each frame, the correlated\nconstraints of human joints are treated as small paths, then the spatial path\nsignature features are extracted from them. In video data, the evolution of\nthese spatial features over time can also be regarded as paths from which the\ntemporal path signature features are extracted. Eventually, all these features\nare concatenated to constitute the input vector of a fully connected neural\nnetwork for action classification. Experimental results on four standard\nbenchmark action datasets, J-HMDB, SBU Dataset, Berkeley MHAD, and NTURGB+D\ndemonstrate that the proposed approach achieves state-of-the-art accuracy even\nin comparison with recent deep learning based models.\n", "title": "Leveraging the Path Signature for Skeleton-based Human Action Recognition" }
null
null
null
null
true
null
1180
null
Default
null
null
null
{ "abstract": " Reconstruction of population histories is a central problem in population\ngenetics. Existing coalescent-based methods, like the seminal work of Li and\nDurbin (Nature, 2011), attempt to solve this problem using sequence data but\nhave no rigorous guarantees. Determining the amount of data needed to correctly\nreconstruct population histories is a major challenge. Using a variety of tools\nfrom information theory, the theory of extremal polynomials, and approximation\ntheory, we prove new sharp information-theoretic lower bounds on the problem of\nreconstructing population structure -- the history of multiple subpopulations\nthat merge, split and change sizes over time. Our lower bounds are exponential\nin the number of subpopulations, even when reconstructing recent histories. We\ndemonstrate the sharpness of our lower bounds by providing algorithms for\ndistinguishing and learning population histories with matching dependence on\nthe number of subpopulations.\n", "title": "How Many Subpopulations is Too Many? Exponential Lower Bounds for Inferring Population Histories" }
null
null
null
null
true
null
1181
null
Default
null
null
null
{ "abstract": " Source localization in ocean acoustics is posed as a machine learning problem\nin which data-driven methods learn source ranges directly from observed\nacoustic data. The pressure received by a vertical linear array is preprocessed\nby constructing a normalized sample covariance matrix (SCM) and used as the\ninput. Three machine learning methods (feed-forward neural networks (FNN),\nsupport vector machines (SVM) and random forests (RF)) are investigated in this\npaper, with focus on the FNN. The range estimation problem is solved both as a\nclassification problem and as a regression problem by these three machine\nlearning algorithms. The results of range estimation for the Noise09 experiment\nare compared for FNN, SVM, RF and conventional matched-field processing and\ndemonstrate the potential of machine learning for underwater source\nlocalization.\n", "title": "Source localization in an ocean waveguide using supervised machine learning" }
null
null
[ "Computer Science", "Physics" ]
null
true
null
1182
null
Validated
null
null
null
{ "abstract": " Illegal insider trading of stocks is based on releasing non-public\ninformation (e.g., new product launch, quarterly financial report, acquisition\nor merger plan) before the information is made public. Detecting illegal\ninsider trading is difficult due to the complex, nonlinear, and non-stationary\nnature of the stock market. In this work, we present an approach that detects\nand predicts illegal insider trading proactively from large heterogeneous\nsources of structured and unstructured data using a deep-learning based\napproach combined with discrete signal processing on the time series data. In\naddition, we use a tree-based approach that visualizes events and actions to\naid analysts in their understanding of large amounts of unstructured data.\nUsing existing data, we have discovered that our approach has a good success\nrate in detecting illegal insider trading patterns.\n", "title": "Mining Illegal Insider Trading of Stocks: A Proactive Approach" }
null
null
null
null
true
null
1183
null
Default
null
null
null
{ "abstract": " Using deep multi-wavelength photometry of galaxies from ZFOURGE, we group\ngalaxies at $2.5<z<4.0$ by the shape of their spectral energy distributions\n(SEDs). We identify a population of galaxies with excess emission in the\n$K_s$-band, which corresponds to [OIII]+H$\\beta$ emission at $2.95<z<3.65$.\nThis population includes 78% of the bluest galaxies with UV slopes steeper than\n$\\beta = -2$. We de-redshift and scale this photometry to build two composite\nSEDs, enabling us to measure equivalent widths of these Extreme [OIII]+H$\\beta$\nEmission Line Galaxies (EELGs) at $z\\sim3.5$. We identify 60 galaxies that\ncomprise a composite SED with [OIII]+H$\\beta$ rest-frame equivalent width of\n$803\\pm228$\\AA\\ and another 218 galaxies in a composite SED with equivalent\nwidth of $230\\pm90$\\AA. These EELGs are analogous to the `green peas' found in\nthe SDSS, and are thought to be undergoing their first burst of star formation\ndue to their blue colors ($\\beta < -1.6$), young ages\n($\\log(\\rm{age}/yr)\\sim7.2$), and low dust attenuation values. Their strong\nnebular emission lines and compact sizes (typically $\\sim1.4$ kpc) are\nconsistent with the properties of the star-forming galaxies possibly\nresponsible for reionizing the universe at $z>6$. Many of the EELGs also\nexhibit Lyman-$\\alpha$ emission. Additionally, we find that many of these\nsources are clustered in an overdensity in the Chandra Deep Field South, with\nfive spectroscopically confirmed members at $z=3.474 \\pm 0.004$. The spatial\ndistribution and photometric redshifts of the ZFOURGE population further\nconfirm the overdensity highlighted by the EELGs.\n", "title": "Discovery of Extreme [OIII]+H$β$ Emitting Galaxies Tracing an Overdensity at z~3.5 in CDF-South" }
null
null
null
null
true
null
1184
null
Default
null
null
null
{ "abstract": " Boron subphthalocyanine chloride is an electron donor material used in small\nmolecule organic photovoltaics with an unusually large molecular dipole moment.\nUsing first-principles calculations, we investigate enhancing the electronic\nand optical properties of boron subphthalocyanine chloride, by substituting the\nboron and chlorine atoms with other trivalent and halogen atoms in order to\nmodify the molecular dipole moment. Gas phase molecular structures and\nproperties are predicted with hybrid functionals. Using positions and\norientations of the known compounds as the starting coordinates for these\nmolecules, stable crystalline structures are derived following a procedure that\ninvolves perturbation and accurate total energy minimization. Electronic\nstructure and photonic properties of the predicted crystals are computed using\nthe GW method and the Bethe-Salpeter equation, respectively. Finally, a simple\ntransport model is used to demonstrate the importance of molecular dipole\nmoments on device performance.\n", "title": "Predictive Simulations for Tuning Electronic and Optical Properties of SubPc Derivatives" }
null
null
null
null
true
null
1185
null
Default
null
null
null
{ "abstract": " Recent machine learning models have shown that including attention as a\ncomponent results in improved model accuracy and interpretability, despite the\nconcept of attention in these approaches only loosely approximating the brain's\nattention mechanism. Here we extend this work by building a more brain-inspired\ndeep network model of the primate ATTention Network (ATTNet) that learns to\nshift its attention so as to maximize the reward. Using deep reinforcement\nlearning, ATTNet learned to shift its attention to the visual features of a\ntarget category in the context of a search task. ATTNet's dorsal layers also\nlearned to prioritize these shifts of attention so as to maximize success of\nthe ventral pathway classification and receive greater reward. Model behavior\nwas tested against the fixations made by subjects searching images for the same\ncued category. Both subjects and ATTNet showed evidence for attention being\npreferentially directed to target goals, behaviorally measured as oculomotor\nguidance to targets. More fundamentally, ATTNet learned to shift its attention\nto target-like objects and spatially route its visual inputs to accomplish the\ntask. This work makes a step toward a better understanding of the role of\nattention in the brain and other computational systems.\n", "title": "Learning to attend in a brain-inspired deep neural network" }
null
null
null
null
true
null
1186
null
Default
null
null
null
{ "abstract": " In the present paper we consider the problem of estimating a\nthree-dimensional function $f$ based on observations from its noisy Laplace\nconvolution. Our study is motivated by the analysis of Dynamic Contrast\nEnhanced (DCE) imaging data. We construct an adaptive wavelet-Laguerre\nestimator of $f$, derive minimax lower bounds for the $L^2$-risk when $f$\nbelongs to a three-dimensional Laguerre-Sobolev ball and demonstrate that the\nwavelet-Laguerre estimator is adaptive and asymptotically near-optimal in a\nwide range of Laguerre-Sobolev spaces. We carry out a limited simulations study\nand show that the estimator performs well in a finite sample setting. Finally,\nwe use the technique for the solution of the Laplace deconvolution problem on\nthe basis of DCE Computerized Tomography data.\n", "title": "Anisotropic functional Laplace deconvolution" }
null
null
[ "Statistics" ]
null
true
null
1187
null
Validated
null
null
null
{ "abstract": " For a given many-electron molecule, it is possible to define a corresponding\none-electron Schrödinger equation, using potentials derived from simple\natomic densities, whose solution predicts fairly accurate molecular orbitals\nfor single- and multi-determinant wavefunctions for the molecule. The energy is\nnot predicted and must be evaluated by calculating Coulomb and exchange\ninteractions over the predicted orbitals. Potentials are found by minimizing\nthe energy of predicted wavefunctions. There exist slightly less accurate\naverage potentials for first-row atoms that can be used without modification in\ndifferent molecules. For a test set of molecules representing different bonding\nenvironments, these average potentials give wavefunctions with energies that\ndeviate from exact self-consistent field or configuration interaction energies\nby less than 0.08 eV and 0.03 eV per bond or valence electron pair,\nrespectively.\n", "title": "Prediction of many-electron wavefunctions using atomic potentials" }
null
null
null
null
true
null
1188
null
Default
null
null
null
{ "abstract": " Using the formalism of the classical nucleation theory, we derive an\nexpression for the reversible work $W_*$ of formation of a binary crystal\nnucleus in a liquid binary solution of non-stoichiometric composition\n(incongruent crystallization). Applied to the crystallization of aqueous nitric\nacid (NA) droplets, the new expression more adequately takes account of the\neffect of nitric acid vapor compared to the conventional expression of\nMacKenzie, Kulmala, Laaksonen, and Vesala (MKLV) [J.Geophys.Res. 102, 19729\n(1997)]. The predictions of both MKLV and modified expressions for the average\nliquid-solid interfacial tension $\\sigma^{ls}$ of nitric acid dihydrate (NAD)\ncrystals are compared by using existing experimental data on the incongruent\ncrystallization of aqueous NA droplets of composition relevant to polar\nstratospheric clouds (PSCs). The predictions based on the MKLV expression are\nhigher by about 5% compared to predictions based on our modified expression.\nThis results in similar differences between the predictions of both expressions\nfor the solid-vapor interfacial tension $\\sigma^{sv}$ of NAD crystal nuclei.\nThe latter can be obtained by analyzing experimental data on crystal\nnucleation rates in aqueous NA droplets and exploiting the dominance of the\nsurface-stimulated mode of crystal nucleation in small droplets and its\nnegligibility in large ones. Applying that method, our expression for $W_*$\nprovides an estimate for $\\sigma^{sv}$ of NAD in the range from 92 dyn/cm to\n100 dyn/cm, while the MKLV expression predicts it in the range from 95 dyn/cm\nto 105 dyn/cm. The predictions of both expressions for $W_*$ become identical\nin the case of congruent crystallization; this was also demonstrated by\napplying our method to the nucleation of nitric acid trihydrate (NAT) crystals\nin PSC droplets of stoichiometric composition.\n", "title": "Free energy of formation of a crystal nucleus in incongruent solidification: Implication for modeling the crystallization of aqueous nitric acid droplets in type 1 polar stratospheric clouds" }
null
null
null
null
true
null
1189
null
Default
null
null
null
{ "abstract": " Most machine learning classifiers give predictions for new examples\naccurately, yet without indicating how trustworthy predictions are. In the\nmedical domain, this hampers their integration in decision support systems,\nwhich could be useful in the clinical practice. We use a supervised learning\napproach that combines Ensemble learning with Conformal Predictors to predict\nconversion from Mild Cognitive Impairment to Alzheimer's Disease. Our goal is\nto enhance the classification performance (Ensemble learning) and complement\neach prediction with a measure of credibility (Conformal Predictors). Our\nresults showed the superiority of the proposed approach over a similar ensemble\nframework with standard classifiers.\n", "title": "Ensemble learning with Conformal Predictors: Targeting credible predictions of conversion from Mild Cognitive Impairment to Alzheimer's Disease" }
null
null
null
null
true
null
1190
null
Default
null
null
null
{ "abstract": " Deep reinforcement learning for multi-agent cooperation and competition has\nbeen a hot topic recently. This paper focuses on the cooperative multi-agent\nproblem based on actor-critic methods under local observation settings.\nMulti-agent deep deterministic policy gradient obtained state-of-the-art results for some\nmulti-agent games; however, it cannot scale well with a growing number of agents.\nIn order to boost scalability, we propose a parameter sharing deterministic\npolicy gradient method with three variants based on neural networks, including\nactor-critic sharing, actor sharing and actor sharing with partially shared\ncritic. Benchmarks from rllab show that the proposed method has advantages in\nlearning speed and memory efficiency, scales well with a growing number of\nagents, and moreover, it can make full use of reward sharing and\nexchangeability if possible.\n", "title": "Parameter Sharing Deep Deterministic Policy Gradient for Cooperative Multi-agent Reinforcement Learning" }
null
null
null
null
true
null
1191
null
Default
null
null
null
{ "abstract": " We study the data reliability problem for a community of devices forming a\nmobile cloud storage system. We consider the application of regenerating codes\nfor file maintenance within a geographically-limited area. Such codes require\nlower bandwidth to regenerate lost data fragments compared to file replication\nor reconstruction. We investigate threshold-based repair strategies where data\nrepair is initiated after a threshold number of data fragments have been lost\ndue to node mobility. We show that at a low departure-to-repair rate regime, a\nlazy repair strategy in which repairs are initiated after several nodes have\nleft the system outperforms eager repair in which repairs are initiated after a\nsingle departure. This optimality is reversed when nodes are highly mobile. We\nfurther compare distributed and centralized repair strategies and derive the\noptimal repair threshold for minimizing the average repair cost per unit of\ntime, as a function of underlying code parameters. In addition, we examine\ncooperative repair strategies and show performance improvements compared to\nnon-cooperative codes. We investigate several models for the time needed for\nnode repair including a simple fixed time model that allows for the computation\nof closed-form expressions and a more realistic model that takes into account\nthe number of repaired nodes. We derive the conditions under which the former\nmodel approximates the latter. Finally, an extended model where additional\nfailures are allowed during the repair process is investigated. Overall, our\nresults establish the joint effect of code design and repair algorithms on the\nmaintenance cost of distributed storage systems.\n", "title": "Repair Strategies for Storage on Mobile Clouds" }
null
null
null
null
true
null
1192
null
Default
null
null
null
{ "abstract": " This paper studies a mean-variance portfolio selection problem under partial\ninformation with drift uncertainty. It is proved that all the contingent claims\nin this model are attainable in the sense of Xiong and Zhou. Further, we\npropose a numerical scheme to approximate the optimal portfolio. Malliavin\ncalculus and the strong law of large numbers play important roles in this\nscheme.\n", "title": "Mean-variance portfolio selection under partial information with drift uncertainty" }
null
null
null
null
true
null
1193
null
Default
null
null
null
{ "abstract": " We obtain estimation error rates for estimators obtained by aggregation of\nregularized median-of-means tests, following a construction of Le Cam. The\nresults hold with exponentially large probability -- as in the Gaussian\nframework with independent noise -- under only weak moment assumptions on data\nand without assuming independence between noise and design. Any norm may be\nused for regularization. When it has some sparsity inducing power we recover\nsparse rates of convergence.\nThe procedure is robust since a large part of data may be corrupted; these\noutliers have nothing to do with the oracle we want to reconstruct. Our general\nrisk bound is of order \\begin{equation*} \\max\\left(\\mbox{minimax rate in the\ni.i.d. setup}, \\frac{\\text{number of outliers}}{\\text{number of\nobservations}}\\right) \\enspace. \\end{equation*}In particular, the number of\noutliers may be as large as (number of data) $\\times$(minimax rate) without\naffecting this rate. The other data do not have to be identically distributed\nbut should only have equivalent $L^1$ and $L^2$ moments.\nFor example, the minimax rate $s \\log(ed/s)/N$ of recovery of a $s$-sparse\nvector in $\\mathbb{R}^d$ is achieved with exponentially large probability by a\nmedian-of-means version of the LASSO when the noise has $q_0$ moments for some\n$q_0>2$, the entries of the design matrix should have $C_0\\log(ed)$ moments and\nthe dataset can be corrupted up to $C_1 s \\log(ed/s)$ outliers.\n", "title": "Learning from MOM's principles: Le Cam's approach" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
1194
null
Validated
null
null
null
{ "abstract": " In this paper, we are interested in a Neumann-type series for modified Bessel\nfunctions of the first kind which arises in the study of Dunkl operators\nassociated with dihedral groups and as an instance of the Laguerre semigroup\nconstructed by Ben Said-Kobayashi-Orsted. We first revisit the particular case\ncorresponding to the group of square-preserving symmetries for which we give\ntwo new and different proofs other than the existing ones. The first proof uses\nthe expansion of powers in a Neumann series of Bessel functions while the\nsecond one is based on a quadratic transformation for the Gauss hypergeometric\nfunction and opens the way to derive further expressions when the orders of the\nunderlying dihedral groups are powers of two. More generally, we give another\nproof of the formula of De Bie et al. expressing this series as a $\\Phi_2$-Horn\nconfluent hypergeometric function. In the course of the proof, we shed light\non the occurrence of multiple angles in their formula through elementary\nsymmetric functions, and get a new representation of Gegenbauer polynomials.\n", "title": "On a Neumann-type series for modified Bessel functions of the first kind" }
null
null
null
null
true
null
1195
null
Default
null
null
null
{ "abstract": " In this paper, we investigate the integral of $x^n\\log^m(\\sin(x))$ for\nnatural numbers $m$ and $n$. In doing so, we recover some well-known results\nand remark on some relations to the log-sine integral\n$\\operatorname{Ls}_{n+m+1}^{(n)}(\\theta)$. Later, we use properties of Bell\npolynomials to find a closed expression for the derivative of the central\nbinomial and shifted central binomial coefficients in terms of polygamma\nfunctions and harmonic numbers.\n", "title": "Generalized Log-sine integrals and Bell polynomials" }
null
null
null
null
true
null
1196
null
Default
null
null
null
{ "abstract": " For the past three years we have been conducting a survey for WR stars in the\nLarge and Small Magellanic Clouds (LMC, SMC). Our previous work has resulted in\nthe discovery of a new type of WR star in the LMC, which we are calling WN3/O3.\nThese stars have the emission-line properties of a WN3 star (strong N V but no\nN IV), plus the absorption-line properties of an O3 star (Balmer hydrogen plus\nPickering He II but no He I). Yet these stars are 15x fainter than an O3 V star\nwould be by itself, ruling out these being WN3+O3 binaries. Here we report the\ndiscovery of two more members of this class, bringing the total number of these\nobjects to 10, 6.5% of the LMC's total WR population. The optical spectra of\nnine of these WN3/O3s are virtually indistinguishable from each other, but one\nof the newly found stars is significantly different, showing a lower excitation\nemission and absorption spectrum (WN4/O4-ish). In addition, we have newly\nclassified three unusual Of-type stars, including one with a strong C III 4650\nline, and two rapidly rotating \"Oef\" stars. We also \"rediscovered\" a low mass\nx-ray binary, RX J0513.9-6951, and demonstrate its spectral variability.\nFinally, we discuss the spectra of ten low priority WR candidates that turned\nout not to have He II emission. These include both a Be star and a B[e] star.\n", "title": "A Modern Search for Wolf-Rayet Stars in the Magellanic Clouds. III. A Third Year of Discoveries" }
null
null
null
null
true
null
1197
null
Default
null
null
null
{ "abstract": " We propose a new mathematical model for the spread of Zika virus. Special\nattention is paid to the transmission of microcephaly. Numerical simulations\nshow the accuracy of the model with respect to the Zika outbreak that occurred in\nBrazil.\n", "title": "Mathematical modeling of Zika disease in pregnant women and newborns with microcephaly in Brazil" }
null
null
null
null
true
null
1198
null
Default
null
null
null
{ "abstract": " In a given problem, the Bayesian statistical paradigm requires the\nspecification of a prior distribution that quantifies relevant information\nabout the unknowns of main interest external to the data. In cases where little\nsuch information is available, the problem under study may possess an\ninvariance under a transformation group that encodes a lack of information,\nleading to a unique prior---this idea was explored at length by E.T. Jaynes.\nPrevious successful examples have included location-scale invariance under\nlinear transformation, multiplicative invariance of the rate at which events in\na counting process are observed, and the derivation of the Haldane prior for a\nBernoulli success probability. In this paper we show that this method can be\nextended, by generalizing Jaynes, in two ways: (1) to yield families of\napproximately invariant priors, and (2) to the infinite-dimensional setting,\nyielding families of priors on spaces of distribution functions. Our results\ncan be used to describe conditions under which a particular Dirichlet Process\nposterior arises from an optimal Bayesian analysis, in the sense that\ninvariances in the prior and likelihood lead to one and only one posterior\ndistribution.\n", "title": "A Noninformative Prior on a Space of Distribution Functions" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
1199
null
Validated
null
null
null
{ "abstract": " The Network of Noisy Leaky Integrate and Fire (NNLIF) model describes the\nbehavior of a neural network at mesoscopic level. It is one of the simplest\nself-contained mean-field models considered for that purpose. Even so, to study\nthe mathematical properties of the model some simplifications were necessary\n(Cáceres-Carrillo-Perthame, 2011; Cáceres-Perthame, 2014;\nCáceres-Schneider, 2017), which disregard crucial phenomena. In this work we\ndeal with the general NNLIF model without simplifications. It involves a\nnetwork with two populations (excitatory and inhibitory), with transmission\ndelays between the neurons and where the neurons remain in a refractory state\nfor a certain time. We have studied the number of steady states in terms of the\nmodel parameters, the long time behaviour via the entropy method and\nPoincaré's inequality, blow-up phenomena, and the importance of transmission\ndelays between excitatory neurons to prevent blow-up and to give rise to\nsynchronous solutions. Besides analytical results, we have presented a\nnumerical solver for this model, based on high order flux-splitting WENO\nschemes and an explicit third order TVD Runge-Kutta method, in order to\ndescribe the wide range of phenomena exhibited by the network: blow-up,\nasynchronous/synchronous solutions and instability/stability of the steady\nstates; the solver also allows us to observe the time evolution of the firing\nrates, refractory states and the probability distributions of the excitatory\nand inhibitory populations.\n", "title": "Towards a realistic NNLIF model: Analysis and numerical solver for excitatory-inhibitory networks with delay and refractory periods" }
null
null
null
null
true
null
1200
null
Default
null
null