text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, length 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " We propose a novel combination of optimization tools with learning theory\nbounds in order to analyze the sample complexity of optimal kernel sum\nclassifiers. This contrasts the typical learning theoretic results which hold\nfor all (potentially suboptimal) classifiers. Our work also justifies\nassumptions made in prior work on multiple kernel learning. As a byproduct of\nour analysis, we also provide a new form of Rademacher complexity for\nhypothesis classes containing only optimal classifiers.\n",
"title": "On the Statistical Efficiency of Optimal Kernel Sum Classifiers"
}
| null | null | null | null | true | null |
4801
| null |
Default
| null | null |
null |
{
"abstract": " We present the first experimental demonstration of a multiple-radiofrequency\ndressed potential for the configurable magnetic confinement of ultracold atoms.\nWe load cold $^{87}$Rb atoms into a double well potential with an adjustable\nbarrier height, formed by three radiofrequencies applied to atoms in a static\nquadrupole magnetic field. Our multiple-radiofrequency approach gives precise\ncontrol over the double well characteristics, including the depth of individual\nwells and the height of the barrier, and enables reliable transfer of atoms\nbetween the available trapping geometries. We have characterised the\nmultiple-radiofrequency dressed system using radiofrequency spectroscopy,\nfinding good agreement with the eigenvalues numerically calculated using\nFloquet theory. This method creates trapping potentials that can be\nreconfigured by changing the amplitudes, polarizations and frequencies of the\napplied dressing fields, and easily extended with additional dressing\nfrequencies.\n",
"title": "Ultracold atoms in multiple-radiofrequency dressed adiabatic potentials"
}
| null | null | null | null | true | null |
4802
| null |
Default
| null | null |
null |
{
"abstract": " We show that Variational Autoencoders consistently fail to learn marginal\ndistributions in latent and visible space. We ask whether this is a consequence\nof matching conditional distributions, or a limitation of explicit model and\nposterior distributions. We explore alternatives provided by marginal\ndistribution matching and implicit distributions through the use of Generative\nAdversarial Networks in variational inference. We perform a large-scale\nevaluation of several VAE-GAN hybrids and explore the implications of class\nprobability estimation for learning distributions. We conclude that at present\nVAE-GAN hybrids have limited applicability: they are harder to scale, evaluate,\nand use for inference compared to VAEs; and they do not improve over the\ngeneration quality of GANs.\n",
"title": "Distribution Matching in Variational Inference"
}
| null | null | null | null | true | null |
4803
| null |
Default
| null | null |
null |
{
"abstract": " We survey the technique of constructing customized models of size continuum\nin omega steps and illustrate the method by giving new proofs of mostly old\nresults within this rubric. One new theorem, which is joint with Saharon\nShelah, is that a pseudominimal theory has an atomic model of size continuum.\n",
"title": "Henkin constructions of models with size continuum"
}
| null | null | null | null | true | null |
4804
| null |
Default
| null | null |
null |
{
"abstract": " We show that for all integers $m\\geq 2$, and all integers $k\\geq 2$, the\northogonal groups $\\Orth^{\\pm}(2m,\\Fk)$ act on abstract regular polytopes of\nrank $2m$, and the symplectic groups $\\Sp(2m,\\Fk)$ act on abstract regular\npolytopes of rank $2m+1$.\n",
"title": "Orthogonal groups in characteristic 2 acting on polytopes of high rank"
}
| null | null | null | null | true | null |
4805
| null |
Default
| null | null |
null |
{
"abstract": " Full autonomy for fixed-wing unmanned aerial vehicles (UAVs) requires the\ncapability to autonomously detect potential landing sites in unknown and\nunstructured terrain, allowing for self-governed mission completion or handling\nof emergency situations. In this work, we propose a perception system\naddressing this challenge by detecting landing sites based on their texture and\ngeometric shape without using any prior knowledge about the environment. The\nproposed method considers hazards within the landing region such as terrain\nroughness and slope, surrounding obstacles that obscure the landing approach\npath, and the local wind field that is estimated by the on-board EKF. The\nlatter enables applicability of the proposed method on small-scale autonomous\nplanes without landing gear. A safe approach path is computed based on the UAV\ndynamics, expected state estimation and actuator uncertainty, and the on-board\ncomputed elevation map. The proposed framework has been successfully tested on\nphoto-realistic synthetic datasets and in challenging real-world environments.\n",
"title": "Free LSD: Prior-Free Visual Landing Site Detection for Autonomous Planes"
}
| null | null | null | null | true | null |
4806
| null |
Default
| null | null |
null |
{
"abstract": " We address the problem of efficient acoustic-model refinement (continuous\nretraining) using semi-supervised and active learning for a low resource Indian\nlanguage, wherein the low resource constraints are having i) a small labeled\ncorpus from which to train a baseline `seed' acoustic model and ii) a large\ntraining corpus without orthographic labeling or from which to perform a data\nselection for manual labeling at low costs. The proposed semi-supervised\nlearning decodes the unlabeled large training corpus using the seed model and\nthrough various protocols, selects the decoded utterances with high reliability\nusing confidence levels (that correlate to the WER of the decoded utterances)\nand iterative bootstrapping. The proposed active learning protocol uses\nconfidence level based metric to select the decoded utterances from the large\nunlabeled corpus for further labeling. The semi-supervised learning protocols\ncan offer a WER reduction, from a poorly trained seed model, by as much as 50%\nof the best WER-reduction realizable from the seed model's WER, if the large\ncorpus were labeled and used for acoustic-model training. The active learning\nprotocols allow that only 60% of the entire training corpus be manually\nlabeled, to reach the same performance as the entire data.\n",
"title": "Semi-supervised and Active-learning Scenarios: Efficient Acoustic Model Refinement for a Low Resource Indian Language"
}
| null | null | null | null | true | null |
4807
| null |
Default
| null | null |
null |
{
"abstract": " The spatial distribution of Cherenkov radiation from cascade showers\ngenerated by muons in water has been measured with Cherenkov water calorimeter\n(CWC) NEVOD. This result allowed to improve the techniques of treating cascade\nshowers with unknown axes by means of CWC response analysis. The techniques of\nselecting the events with high energy cascade showers and reconstructing their\nparameters are discussed. Preliminary results of measurements of the spectrum\nof cascade showers in the energy range 100 GeV - 20 TeV generated by cosmic ray\nmuons at large zenith angles and their comparison with expectation are\npresented.\n",
"title": "Energy spectrum of cascade showers generated by cosmic ray muons in water"
}
| null | null | null | null | true | null |
4808
| null |
Default
| null | null |
null |
{
"abstract": " The pentagram map is a discrete dynamical system defined on the space of\npolygons in the plane. In the first paper on the subject, R. Schwartz proved\nthat the pentagram map produces from each convex polygon a sequence of\nsuccessively smaller polygons that converges exponentially to a point. We\ninvestigate the limit point itself, giving an explicit description of its\nCartesian coordinates as roots of certain degree three polynomials.\n",
"title": "The limit point of the pentagram map"
}
| null | null | null | null | true | null |
4809
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we study the problem of photoacoustic inversion in a weakly\nattenuating medium. We present explicit reconstruction formulas in such media\nand show that the inversion based on such formulas is moderately ill--posed.\nMoreover, we present a numerical algorithm for imaging and demonstrate in\nnumerical experiments the feasibility of this approach.\n",
"title": "Reconstruction formulas for Photoacoustic Imaging in Attenuating Media"
}
| null | null | null | null | true | null |
4810
| null |
Default
| null | null |
null |
{
"abstract": " Recently, fundamental conditions on the sampling patterns have been obtained\nfor finite completability of low-rank matrices or tensors given the\ncorresponding ranks. In this paper, we consider the scenario where the rank is\nnot given and we aim to approximate the unknown rank based on the location of\nsampled entries and some given completion. We consider a number of data models,\nincluding single-view matrix, multi-view matrix, CP tensor, tensor-train tensor\nand Tucker tensor. For each of these data models, we provide an upper bound on\nthe rank when an arbitrary low-rank completion is given. We characterize these\nbounds both deterministically, i.e., with probability one given that the\nsampling pattern satisfies certain combinatorial properties, and\nprobabilistically, i.e., with high probability given that the sampling\nprobability is above some threshold. Moreover, for both single-view matrix and\nCP tensor, we are able to show that the obtained upper bound is exactly equal\nto the unknown rank if the lowest-rank completion is given. Furthermore, we\nprovide numerical experiments for the case of single-view matrix, where we use\nnuclear norm minimization to find a low-rank completion of the sampled data and\nwe observe that in most of the cases the proposed upper bound on the rank is\nequal to the true rank.\n",
"title": "Rank Determination for Low-Rank Data Completion"
}
| null | null | null | null | true | null |
4811
| null |
Default
| null | null |
null |
{
"abstract": " Driven by growing interest in the sciences, industry, and among the broader\npublic, a large number of empirical studies have been conducted in recent years\nof the structure of networks ranging from the internet and the world wide web\nto biological networks and social networks. The data produced by these\nexperiments are often rich and multimodal, yet at the same time they may\ncontain substantial measurement error. In practice, this means that the true\nnetwork structure can differ greatly from naive estimates made from the raw\ndata, and hence that conclusions drawn from those naive estimates may be\nsignificantly in error. In this paper we describe a technique that circumvents\nthis problem and allows us to make optimal estimates of the true structure of\nnetworks in the presence of both richly textured data and significant\nmeasurement uncertainty. We give example applications to two different social\nnetworks, one derived from face-to-face interactions and one from self-reported\nfriendships.\n",
"title": "Network structure from rich but noisy data"
}
| null | null | null | null | true | null |
4812
| null |
Default
| null | null |
null |
{
"abstract": " We contribute a general apparatus for dependent tactic-based proof refinement\nin the LCF tradition, in which the statements of subgoals may express a\ndependency on the proofs of other subgoals; this form of dependency is\nextremely useful and can serve as an algorithmic alternative to extensions of\nLCF based on non-local instantiation of schematic variables. Additionally, we\nintroduce a novel behavioral distinction between refinement rules and tactics\nbased on naturality. Our framework, called Dependent LCF, is already deployed\nin the nascent RedPRL proof assistant for computational cubical type theory.\n",
"title": "Algebraic Foundations of Proof Refinement"
}
| null | null | null | null | true | null |
4813
| null |
Default
| null | null |
null |
{
"abstract": " The goal of semantic parsing is to map natural language to a machine\ninterpretable meaning representation language (MRL). One of the constraints\nthat limits full exploration of deep learning technologies for semantic parsing\nis the lack of sufficient annotation training data. In this paper, we propose\nusing sequence-to-sequence in a multi-task setup for semantic parsing with a\nfocus on transfer learning. We explore three multi-task architectures for\nsequence-to-sequence modeling and compare their performance with an\nindependently trained model. Our experiments show that the multi-task setup\naids transfer learning from an auxiliary task with large labeled data to a\ntarget task with smaller labeled data. We see absolute accuracy gains ranging\nfrom 1.0% to 4.4% in our in- house data set, and we also see good gains ranging\nfrom 2.5% to 7.0% on the ATIS semantic parsing tasks with syntactic and\nsemantic auxiliary tasks.\n",
"title": "Transfer Learning for Neural Semantic Parsing"
}
| null | null | null | null | true | null |
4814
| null |
Default
| null | null |
null |
{
"abstract": " We study the algebraic implications of the non-independence property (NIP)\nand variants thereof (dp-minimality) on infinite fields, motivated by the\nconjecture that all such fields which are neither real closed nor separably\nclosed admit a definable henselian valuation. Our results mainly focus on Hahn\nfields and build up on Will Johnson's preprint \"dp-minimal fields\", arXiv:\n1507.02745v1, July 2015.\n",
"title": "Definable Valuations induced by multiplicative subgroups and NIP Fields"
}
| null | null | null | null | true | null |
4815
| null |
Default
| null | null |
null |
{
"abstract": " Layout hotpot detection is one of the main steps in modern VLSI design. A\ntypical hotspot detection flow is extremely time consuming due to the\ncomputationally expensive mask optimization and lithographic simulation. Recent\nresearches try to facilitate the procedure with a reduced flow including\nfeature extraction, training set generation and hotspot detection, where\nfeature extraction methods and hotspot detection engines are deeply studied.\nHowever, the performance of hotspot detectors relies highly on the quality of\nreference layout libraries which are costly to obtain and usually predetermined\nor randomly sampled in previous works. In this paper, we propose an active\nlearning-based layout pattern sampling and hotspot detection flow, which\nsimultaneously optimizes the machine learning model and the training set that\naims to achieve similar or better hotspot detection performance with much\nsmaller number of training instances. Experimental results show that our\nproposed method can significantly reduce lithography simulation overhead while\nattaining satisfactory detection accuracy on designs under both DUV and EUV\nlithography technologies.\n",
"title": "Bridging the Gap Between Layout Pattern Sampling and Hotspot Detection via Batch Active Learning"
}
| null | null | null | null | true | null |
4816
| null |
Default
| null | null |
null |
{
"abstract": " Computational prediction of origin of replication (ORI) has been of great\ninterest in bioinformatics and several methods including GC Skew, Z curve,\nauto-correlation etc. have been explored in the past. In this paper, we have\nextended the auto-correlation method to predict ORI location with much higher\nresolution for prokaryotes. The proposed complex correlation method (iCorr)\nconverts the genome sequence into a sequence of complex numbers by mapping the\nnucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation\nmethod (here, 'i' is square root of -1). Thus, the iCorr method uses\ninformation about the positions of all the four nucleotides unlike the earlier\nauto-correlation method which uses the positional information of only one\nnucleotide. Also, this earlier method required visual inspection of the\nobtained graphs to identify the location of origin of replication. The proposed\niCorr method does away with this need and is able to identify the origin\nlocation simply by picking the peak in the iCorr graph. The iCorr method also\nworks for a much smaller segment size compared to the earlier auto-correlation\nmethod, which can be very helpful in experimental validation of the\ncomputational predictions. We have also developed a variant of the iCorr method\nto predict ORI location in eukaryotes and have tested it with the\nexperimentally known origin locations of S. cerevisiae with an average accuracy\nof 71.76%.\n",
"title": "iCorr : Complex correlation method to detect origin of replication in prokaryotic and eukaryotic genomes"
}
| null | null | null | null | true | null |
4817
| null |
Default
| null | null |
null |
{
"abstract": " A routine task for art historians is painting diagnostics, such as dating or\nattribution. Signal processing of the X-ray image of a canvas provides useful\ninformation about its fabric. However, previous methods may fail when very old\nand deteriorated artworks or simply canvases of small size are studied. We\npresent a new framework to analyze and further characterize the paintings from\ntheir radiographs. First, we start from a general analysis of lattices and\nprovide new unifying results about the theoretical spectra of weaves. Then, we\nuse these results to infer the main structure of the fabric, like the type of\nweave and the thread densities. We propose a practical estimation of these\ntheoretical results from paintings with the averaged power spectral density\n(PSD), which provides a more robust tool. Furthermore, we found that the PSD\nprovides a fingerprint that characterizes the whole canvas. We search and\ndiscuss some distinctive features we may find in that fingerprint. We apply\nthese results to several masterpieces of the 17th and 18th centuries from the\nMuseo Nacional del Prado to show that this approach yields accurate results in\nthread counting and is very useful for paintings comparison, even in situations\nwhere previous methods fail.\n",
"title": "On the Power Spectral Density Applied to the Analysis of Old Canvases"
}
| null | null | null | null | true | null |
4818
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study a slant submanifold of a complex space form. We also\nobtain an integral formula of Simons' type for a Kaehlerian slant submanifold\nin a complex space form and apply it to prove our main result.\n",
"title": "Simons' type formula for slant submanifolds of complex space form"
}
| null | null |
[
"Mathematics"
] | null | true | null |
4819
| null |
Validated
| null | null |
null |
{
"abstract": " 1. Theoretical models pertaining to feedbacks between ecological and\nevolutionary processes are prevalent in multiple biological fields. An\nintegrative overview is currently lacking, due to little crosstalk between the\nfields and the use of different methodological approaches.\n2. Here we review a wide range of models of eco-evolutionary feedbacks and\nhighlight their underlying assumptions. We discuss models where feedbacks occur\nboth within and between hierarchical levels of ecosystems, including\npopulations, communities, and abiotic environments, and consider feedbacks\nacross spatial scales.\n3. Identifying the commonalities among feedback models, and the underlying\nassumptions, helps us better understand the mechanistic basis of\neco-evolutionary feedbacks. Eco-evolutionary feedbacks can be readily modelled\nby coupling demographic and evolutionary formalisms. We provide an overview of\nthese approaches and suggest future integrative modelling avenues.\n4. Our overview highlights that eco-evolutionary feedbacks have been\nincorporated in theoretical work for nearly a century. Yet, this work does not\nalways include the notion of rapid evolution or concurrent ecological and\nevolutionary time scales. We discuss the importance of density- and\nfrequency-dependent selection for feedbacks, as well as the importance of\ndispersal as a central linking trait between ecology and evolution in a spatial\ncontext.\n",
"title": "Eco-evolutionary feedbacks - theoretical models and perspectives"
}
| null | null | null | null | true | null |
4820
| null |
Default
| null | null |
null |
{
"abstract": " The cuprate high-temperature superconductors are among the most intensively\nstudied materials, yet essential questions regarding their principal phases and\nthe transitions between them remain unanswered. Generally thought of as doped\ncharge-transfer insulators, these complex lamellar oxides exhibit pseudogap,\nstrange-metal, superconducting and Fermi-liquid behaviour with increasing\nhole-dopant concentration. Here we propose a simple inhomogeneous Mott-like\n(de)localization model wherein exactly one hole per copper-oxygen unit is\ngradually delocalized with increasing doping and temperature. The model is\npercolative in nature, with parameters that are experimentally constrained. It\ncomprehensively captures pivotal unconventional experimental results, including\nthe temperature and doping dependence of the pseudogap phenomenon, the\nstrange-metal linear temperature dependence of the planar resistivity, and the\ndoping dependence of the superfluid density. The success and simplicity of our\nmodel greatly demystify the cuprate phase diagram and point to a local\nsuperconducting pairing mechanism involving the (de)localized hole.\n",
"title": "Unusual behavior of cuprates explained by heterogeneous charge localization"
}
| null | null |
[
"Physics"
] | null | true | null |
4821
| null |
Validated
| null | null |
null |
{
"abstract": " Cryptocurrencies and their foundation technology, the Blockchain, are\nreshaping finance and economics, allowing a decentralized approach enabling\ntrusted applications with no trusted counterpart. More recently, the Blockchain\nand the programs running on it, called Smart Contracts, are also finding more\nand more applications in all fields requiring trust and sound certifications.\nSome people have come to the point of saying that the \"Blockchain revolution\"\ncan be compared to that of the Internet and the Web in their early days. As a\nresult, all the software development revolving around the Blockchain technology\nis growing at a staggering rate. The feeling of many software engineers about\nsuch huge interest in Blockchain technologies is that of unruled and hurried\nsoftware development, a sort of competition on a first-come-first-served basis\nwhich does not assure neither software quality, nor that the basic concepts of\nsoftware engineering are taken into account. This paper tries to cope with this\nissue, proposing a software development process to gather the requirement,\nanalyze, design, develop, test and deploy Blockchain applications. The process\nis based on several Agile practices, such as User Stories and iterative and\nincremental development based on them. However, it makes also use of more\nformal notations, such as some UML diagrams describing the design of the\nsystem, with additions to represent specific concepts found in Blockchain\ndevelopment. The method is described in good detail, and an example is given to\nshow how it works.\n",
"title": "An Agile Software Engineering Method to Design Blockchain Applications"
}
| null | null | null | null | true | null |
4822
| null |
Default
| null | null |
null |
{
"abstract": " Recent studies have shown that tuning prediction models increases prediction\naccuracy and that Random Forest can be used to construct prediction intervals.\nHowever, to our best knowledge, no study has investigated the need to, and the\nmanner in which one can, tune Random Forest for optimizing prediction intervals\n{ this paper aims to fill this gap. We explore a tuning approach that combines\nan effectively exhaustive search with a validation technique on a single Random\nForest parameter. This paper investigates which, out of eight validation\ntechniques, are beneficial for tuning, i.e., which automatically choose a\nRandom Forest configuration constructing prediction intervals that are reliable\nand with a smaller width than the default configuration. Additionally, we\npresent and validate three meta-validation techniques to determine which are\nbeneficial, i.e., those which automatically chose a beneficial validation\ntechnique. This study uses data from our industrial partner (Keymind Inc.) and\nthe Tukutuku Research Project, related to post-release defect prediction and\nWeb application effort estimation, respectively. Results from our study\nindicate that: i) the default configuration is frequently unreliable, ii) most\nof the validation techniques, including previously successfully adopted ones\nsuch as 50/50 holdout and bootstrap, are counterproductive in most of the\ncases, and iii) the 75/25 holdout meta-validation technique is always\nbeneficial; i.e., it avoids the likely counterproductive effects of validation\ntechniques.\n",
"title": "Optimizing Prediction Intervals by Tuning Random Forest via Meta-Validation"
}
| null | null | null | null | true | null |
4823
| null |
Default
| null | null |
null |
{
"abstract": " Matrices $\\Phi\\in\\R^{n\\times p}$ satisfying the Restricted Isometry Property\n(RIP) are an important ingredient of the compressive sensing methods. While it\nis known that random matrices satisfy the RIP with high probability even for\n$n=\\log^{O(1)}p$, the explicit construction of such matrices defied the\nrepeated efforts, and the most known approaches hit the so-called $\\sqrt{n}$\nsparsity bottleneck. The notable exception is the work by Bourgain et al\n\\cite{bourgain2011explicit} constructing an $n\\times p$ RIP matrix with\nsparsity $s=\\Theta(n^{{1\\over 2}+\\epsilon})$, but in the regime\n$n=\\Omega(p^{1-\\delta})$.\nIn this short note we resolve this open question in a sense by showing that\nan explicit construction of a matrix satisfying the RIP in the regime\n$n=O(\\log^2 p)$ and $s=\\Theta(n^{1\\over 2})$ implies an explicit construction\nof a three-colored Ramsey graph on $p$ nodes with clique sizes bounded by\n$O(\\log^2 p)$ -- a question in the extremal combinatorics which has been open\nfor decades.\n",
"title": "Explicit construction of RIP matrices is Ramsey-hard"
}
| null | null |
[
"Statistics"
] | null | true | null |
4824
| null |
Validated
| null | null |
null |
{
"abstract": " Models applied on real time response task, like click-through rate (CTR)\nprediction model, require high accuracy and rigorous response time. Therefore,\ntop-performing deep models of high depth and complexity are not well suited for\nthese applications with the limitations on the inference time. In order to\nfurther improve the neural networks' performance given the time and\ncomputational limitations, we propose an approach that exploits a cumbersome\nnet to help train the lightweight net for prediction. We dub the whole process\nrocket launching, where the cumbersome booster net is used to guide the\nlearning of the target light net throughout the whole training process. We\nanalyze different loss functions aiming at pushing the light net to behave\nsimilarly to the booster net, and adopt the loss with best performance in our\nexperiments. We use one technique called gradient block to improve the\nperformance of the light net and booster net further. Experiments on benchmark\ndatasets and real-life industrial advertisement data present that our light\nmodel can get performance only previously achievable with more complex models.\n",
"title": "Rocket Launching: A Universal and Efficient Framework for Training Well-performing Light Net"
}
| null | null | null | null | true | null |
4825
| null |
Default
| null | null |
null |
{
"abstract": " The High Luminosity LHC (HL-LHC) will integrate 10 times more luminosity than\nthe LHC, posing significant challenges for radiation tolerance and event pileup\non detectors, especially for forward calorimetry, and hallmarks the issue for\nfuture colliders. As part of its HL-LHC upgrade program, the CMS collaboration\nis designing a High Granularity Calorimeter to replace the existing endcap\ncalorimeters. It features unprecedented transverse and longitudinal\nsegmentation for both electromagnetic (ECAL) and hadronic (HCAL) compartments.\nThis will facilitate particle-flow calorimetry, where the fine structure of\nshowers can be measured and used to enhance pileup rejection and particle\nidentification, whilst still achieving good energy resolution. The ECAL and a\nlarge fraction of HCAL will be based on hexagonal silicon sensors of\n0.5-1cm$^{2}$ cell size, with the remainder of the HCAL based on\nhighly-segmented scintillators with SiPM readout. The intrinsic high-precision\ntiming capabilities of the silicon sensors will add an extra dimension to event\nreconstruction, especially in terms of pileup rejection. An overview of the\nHGCAL project is presented, covering motivation, engineering design, readout\nand trigger concepts, and performance (simulated and from beam tests).\n",
"title": "The CMS HGCAL detector for HL-LHC upgrade"
}
| null | null | null | null | true | null |
4826
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\mathcal{F}$ be a finite alphabet and $\\mathcal{D}$ be a finite set of\ndistributions over $\\mathcal{F}$. A Generalized Santha-Vazirani (GSV) source of\ntype $(\\mathcal{F}, \\mathcal{D})$, introduced by Beigi, Etesami and Gohari\n(ICALP 2015, SICOMP 2017), is a random sequence $(F_1, \\dots, F_n)$ in\n$\\mathcal{F}^n$, where $F_i$ is a sample from some distribution $d \\in\n\\mathcal{D}$ whose choice may depend on $F_1, \\dots, F_{i-1}$.\nWe show that all GSV source types $(\\mathcal{F}, \\mathcal{D})$ fall into one\nof three categories: (1) non-extractable; (2) extractable with error\n$n^{-\\Theta(1)}$; (3) extractable with error $2^{-\\Omega(n)}$. This rules out\nother error rates like $1/\\log n$ or $2^{-\\sqrt{n}}$.\nWe provide essentially randomness-optimal extraction algorithms for\nextractable sources. Our algorithm for category (2) sources extracts with error\n$\\varepsilon$ from $n = \\mathrm{poly}(1/\\varepsilon)$ samples in time linear in\n$n$. Our algorithm for category (3) sources extracts $m$ bits with error\n$\\varepsilon$ from $n = O(m + \\log 1/\\varepsilon)$ samples in time\n$\\min\\{O(nm2^m),n^{O(\\lvert\\mathcal{F}\\rvert)}\\}$.\nWe also give algorithms for classifying a GSV source type $(\\mathcal{F},\n\\mathcal{D})$: Membership in category (1) can be decided in $\\mathrm{NP}$,\nwhile membership in category (3) is polynomial-time decidable.\n",
"title": "Complete Classification of Generalized Santha-Vazirani Sources"
}
| null | null | null | null | true | null |
4827
| null |
Default
| null | null |
null |
{
"abstract": " The present is a companion paper to \"A contemporary look at Hermann Hankel's\n1861 pioneering work on Lagrangian fluid dynamics\" by Frisch, Grimberg and\nVillone (2017). Here we present the English translation of the 1861 prize\nmanuscript from Göttingen University \"Zur allgemeinen Theorie der Bewegung\nder Flüssigkeiten\" (On the general theory of the motion of the fluids) of\nHermann Hankel (1839-1873), which was originally submitted in Latin and then\ntranslated into German by the Author for publication. We also provide the\nEnglish translation of two important reports on the manuscript, one written by\nBernhard Riemann and the other by Wilhelm Eduard Weber, during the assessment\nprocess for the prize. Finally we give a short biography of Hermann Hankel with\nhis complete bibliography.\n",
"title": "Hermann Hankel's \"On the general theory of motion of fluids\", an essay including an English translation of the complete Preisschrift from 1861"
}
| null | null |
[
"Physics",
"Mathematics"
] | null | true | null |
4828
| null |
Validated
| null | null |
null |
{
"abstract": " The giant mutually connected component (GMCC) of an interdependent or\nmultiplex network collapses with a discontinuous hybrid transition under random\ndamage to the network. If the nodes to be damaged are selected in a targeted\nway, the collapse of the GMCC may occur significantly sooner. Finding the\nminimal damage set which destroys the largest mutually connected component of a\ngiven interdependent network is a computationally prohibitive simultaneous\noptimization problem. We introduce a simple heuristic strategy -- Effective\nMultiplex Degree -- for targeted attack on interdependent networks that\nleverages the indirect damage inherent in multiplex networks to achieve a\ndamage set smaller than that found by any other non computationally intensive\nalgorithm. We show that the intuition from single layer networks that decycling\n(damage of the $2$-core) is the most effective way to destroy the giant\ncomponent, does not carry over to interdependent networks, and in fact such\napproaches are worse than simply removing the highest degree nodes.\n",
"title": "Targeted Damage to Interdependent Networks"
}
| null | null | null | null | true | null |
4829
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the asymptotic behavior of the Hermitian-Yang-Mills\nflow on a reflexive sheaf. We prove that the limiting reflexive sheaf is\nisomorphic to the double dual of the graded sheaf associated to the\nHarder-Narasimhan-Seshadri filtration, this answers a question by Bando and\nSiu.\n",
"title": "The limit of the Hermitian-Yang-Mills flow on reflexive sheaves"
}
| null | null | null | null | true | null |
4830
| null |
Default
| null | null |
null |
{
"abstract": " Phase-field approaches to fracture based on energy minimization principles\nhave been rapidly gaining popularity in recent years, and are particularly\nwell-suited for simulating crack initiation and growth in complex fracture\nnetworks. In the phase-field framework, the surface energy associated with\ncrack formation is calculated by evaluating a functional defined in terms of a\nscalar order parameter and its gradients, which in turn describe the fractures\nin a diffuse sense following a prescribed regularization length scale. Imposing\nstationarity of the total energy leads to a coupled system of partial\ndifferential equations, one enforcing stress equilibrium and another governing\nphase-field evolution. The two equations are coupled through an energy\ndegradation function that models the loss of stiffness in the bulk material as\nit undergoes damage. In the present work, we introduce a new parametric family\nof degradation functions aimed at increasing the accuracy of phase-field models\nin predicting critical loads associated with crack nucleation as well as the\npropagation of existing fractures. An additional goal is the preservation of\nlinear elastic response in the bulk material prior to fracture. Through the\nanalysis of several numerical examples, we demonstrate the superiority of the\nproposed family of functions to the classical quadratic degradation function\nthat is used most often in the literature.\n",
"title": "High-accuracy phase-field models for brittle fracture based on a new family of degradation functions"
}
| null | null | null | null | true | null |
4831
| null |
Default
| null | null |
null |
{
"abstract": " Slow running or straggler tasks can significantly reduce computation speed in\ndistributed computation. Recently, coding-theory-inspired approaches have been\napplied to mitigate the effect of straggling, through embedding redundancy in\ncertain linear computational steps of the optimization algorithm, thus\ncompleting the computation without waiting for the stragglers. In this paper,\nwe propose an alternate approach where we embed the redundancy directly in the\ndata itself, and allow the computation to proceed completely oblivious to\nencoding. We propose several encoding schemes, and demonstrate that popular\nbatch algorithms, such as gradient descent and L-BFGS, applied in a\ncoding-oblivious manner, deterministically achieve sample path linear\nconvergence to an approximate solution of the original problem, using an\narbitrarily varying subset of the nodes at each iteration. Moreover, this\napproximation can be controlled by the amount of redundancy and the number of\nnodes used in each iteration. We provide experimental results demonstrating the\nadvantage of the approach over uncoded and data replication strategies.\n",
"title": "Straggler Mitigation in Distributed Optimization Through Data Encoding"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
4832
| null |
Validated
| null | null |
null |
{
"abstract": " We introduce inference trees (ITs), a new class of inference methods that\nbuild on ideas from Monte Carlo tree search to perform adaptive sampling in a\nmanner that balances exploration with exploitation, ensures consistency, and\nalleviates pathologies in existing adaptive methods. ITs adaptively sample from\nhierarchical partitions of the parameter space, while simultaneously learning\nthese partitions in an online manner. This enables ITs to not only identify\nregions of high posterior mass, but also maintain uncertainty estimates to\ntrack regions where significant posterior mass may have been missed. ITs can be\nbased on any inference method that provides a consistent estimate of the\nmarginal likelihood. They are particularly effective when combined with\nsequential Monte Carlo, where they capture long-range dependencies and yield\nimprovements beyond proposal adaptation alone.\n",
"title": "Inference Trees: Adaptive Inference with Exploration"
}
| null | null | null | null | true | null |
4833
| null |
Default
| null | null |
null |
{
"abstract": " We propose a scheme to employ backpropagation neural networks (BPNNs) for\nboth stages of fingerprinting-based indoor positioning using WLAN/WiFi signal\nstrengths (FWIPS): radio map construction during the offline stage, and\nlocalization during the online stage. Given a training radio map (TRM), i.e., a\nset of coordinate vectors and associated WLAN/WiFi signal strengths of the\navailable access points, a BPNN can be trained to output the expected signal\nstrengths for any input position within the region of interest (BPNN-RM). This\ncan be used to provide a continuous representation of the radio map and to\nfilter, densify or decimate a discrete radio map. Correspondingly, the TRM can\nalso be used to train another BPNN to output the expected position within the\nregion of interest for any input vector of recorded signal strengths and thus\ncarry out localization (BPNN-LA).Key aspects of the design of such artificial\nneural networks for a specific application are the selection of design\nparameters like the number of hidden layers and nodes within the network, and\nthe training procedure. Summarizing extensive numerical simulations, based on\nreal measurements in a testbed, we analyze the impact of these design choices\non the performance of the BPNN and compare the results in particular to those\nobtained using the $k$ nearest neighbors ($k$NN) and weighted $k$ nearest\nneighbors approaches to FWIPS.\n",
"title": "Application of backpropagation neural networks to both stages of fingerprinting based WIPS"
}
| null | null | null | null | true | null |
4834
| null |
Default
| null | null |
null |
{
"abstract": " Recently, two scalable adaptations of the bootstrap have been proposed: the\nbag of little bootstraps (BLB; Kleiner et al., 2014) and the subsampled double\nbootstrap (SDB; Sengupta et al., 2016). In this paper, we introduce Bayesian\nbootstrap analogues to the BLB and SDB that have similar theoretical and\ncomputational properties, a strategy to perform lossless inference for a class\nof functionals of the Bayesian bootstrap, and briefly discuss extensions for\nDirichlet Processes.\n",
"title": "Bayesian Bootstraps for Massive Data"
}
| null | null | null | null | true | null |
4835
| null |
Default
| null | null |
null |
{
"abstract": " Impressive image captioning results are achieved in domains with plenty of\ntraining image and sentence pairs (e.g., MSCOCO). However, transferring to a\ntarget domain with significant domain shifts but no paired training data\n(referred to as cross-domain image captioning) remains largely unexplored. We\npropose a novel adversarial training procedure to leverage unpaired data in the\ntarget domain. Two critic networks are introduced to guide the captioner,\nnamely domain critic and multi-modal critic. The domain critic assesses whether\nthe generated sentences are indistinguishable from sentences in the target\ndomain. The multi-modal critic assesses whether an image and its generated\nsentence are a valid pair. During training, the critics and captioner act as\nadversaries -- captioner aims to generate indistinguishable sentences, whereas\ncritics aim at distinguishing them. The assessment improves the captioner\nthrough policy gradient updates. During inference, we further propose a novel\ncritic-based planning method to select high-quality sentences without\nadditional supervision (e.g., tags). To evaluate, we use MSCOCO as the source\ndomain and four other datasets (CUB-200-2011, Oxford-102, TGIF, and Flickr30k)\nas the target domains. Our method consistently performs well on all datasets.\nIn particular, on CUB-200-2011, we achieve 21.8% CIDEr-D improvement after\nadaptation. Utilizing critics during inference further gives another 4.5%\nboost.\n",
"title": "Show, Adapt and Tell: Adversarial Training of Cross-domain Image Captioner"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4836
| null |
Validated
| null | null |
null |
{
"abstract": " We improve the performance of the American Fuzzy Lop (AFL) fuzz testing\nframework by using Generative Adversarial Network (GAN) models to reinitialize\nthe system with novel seed files. We assess performance based on the temporal\nrate at which we produce novel and unseen code paths. We compare this approach\nto seed file generation from a random draw of bytes observed in the training\nseed files. The code path lengths and variations were not sufficiently diverse\nto fully replace AFL input generation. However, augmenting native AFL with\nthese additional code paths demonstrated improvements over AFL alone.\nSpecifically, experiments showed the GAN was faster and more effective than the\nLSTM and out-performed a random augmentation strategy, as measured by the\nnumber of unique code paths discovered. GAN helps AFL discover 14.23% more code\npaths than the random strategy in the same amount of CPU time, finds 6.16% more\nunique code paths, and finds paths that are on average 13.84% longer. Using GAN\nshows promise as a reinitialization strategy for AFL to help the fuzzer\nexercise deep paths in software.\n",
"title": "Faster Fuzzing: Reinitialization with Deep Neural Models"
}
| null | null | null | null | true | null |
4837
| null |
Default
| null | null |
null |
{
"abstract": " Embedded real-time systems (RTS) are pervasive. Many modern RTS are exposed\nto unknown security flaws, and threats to RTS are growing in both number and\nsophistication. However, until recently, cyber-security considerations were an\nafterthought in the design of such systems. Any security mechanisms integrated\ninto RTS must (a) co-exist with the real- time tasks in the system and (b)\noperate without impacting the timing and safety constraints of the control\nlogic. We introduce Contego, an approach to integrating security tasks into RTS\nwithout affecting temporal requirements. Contego is specifically designed for\nlegacy systems, viz., the real-time control systems in which major alterations\nof the system parameters for constituent tasks is not always feasible. Contego\ncombines the concept of opportunistic execution with hierarchical scheduling to\nmaintain compatibility with legacy systems while still providing flexibility by\nallowing security tasks to operate in different modes. We also define a metric\nto measure the effectiveness of such integration. We evaluate Contego using\nsynthetic workloads as well as with an implementation on a realistic embedded\nplatform (an open- source ARM CPU running real-time Linux).\n",
"title": "Contego: An Adaptive Framework for Integrating Security Tasks in Real-Time Systems"
}
| null | null | null | null | true | null |
4838
| null |
Default
| null | null |
null |
{
"abstract": " We derive the second order rates of joint source-channel coding, whose source\nobeys an irreducible and ergodic Markov process when the channel is a discrete\nmemoryless, while a previous study solved it only in a special case. We also\ncompare the joint source-channel scheme with the separation scheme in the\nsecond order regime while a previous study made a notable comparison only with\nnumerical calculation. To make these two notable progress, we introduce two\nkinds of new distribution families, switched Gaussian convolution distribution\nand *-product distribution, which are defined by modifying the Gaussian\ndistribution.\n",
"title": "Second Order Analysis for Joint Source-Channel Coding with Markovian Source"
}
| null | null | null | null | true | null |
4839
| null |
Default
| null | null |
null |
{
"abstract": " Chiral and helical domain walls are generic defects of topological\nspin-triplet superconductors. We study theoretically the magnetic and transport\nproperties of superconducting singlet-triplet-singlet heterostructure as a\nfunction of the phase difference between the singlet leads in the presence of\nchiral and helical domains inside the spin-triplet region. The local inversion\nsymmetry breaking at the singlet-triplet interface allows the emergence of a\nstatic phase-controlled magnetization, and generally yields both spin and\ncharge currents flowing along the edges. The parity of the domain wall number\naffects the relative orientation of the interface moments and currents, while\nin some cases the domain walls themselves contribute to spin and charge\ntransport. We demonstrate that singlet-triplet heterostructures are a generic\nprototype to generate and control non-dissipative spin and charge effects,\nputting them in a broader class of systems exhibiting spin-Hall, anomalous Hall\neffects and similar phenomena. Features of the electron transport and magnetic\neffects at the interfaces can be employed to assess the presence of domains in\nchiral/helical superconductors.\n",
"title": "Interface currents and magnetization in singlet-triplet superconducting heterostructures: Role of chiral and helical domains"
}
| null | null | null | null | true | null |
4840
| null |
Default
| null | null |
null |
{
"abstract": " Modern neural networks tend to be overconfident on unseen, noisy or\nincorrectly labelled data and do not produce meaningful uncertainty measures.\nBayesian deep learning aims to address this shortcoming with variational\napproximations (such as Bayes by Backprop or Multiplicative Normalising Flows).\nHowever, current approaches have limitations regarding flexibility and\nscalability. We introduce Bayes by Hypernet (BbH), a new method of variational\napproximation that interprets hypernetworks as implicit distributions. It\nnaturally uses neural networks to model arbitrarily complex distributions and\nscales to modern deep learning architectures. In our experiments, we\ndemonstrate that our method achieves competitive accuracies and predictive\nuncertainties on MNIST and a CIFAR5 task, while being the most robust against\nadversarial attacks.\n",
"title": "Implicit Weight Uncertainty in Neural Networks"
}
| null | null | null | null | true | null |
4841
| null |
Default
| null | null |
null |
{
"abstract": " A detailed characterization of the particle induced background is fundamental\nfor many of the scientific objectives of the Athena X-ray telescope, thus an\nadequate knowledge of the background that will be encountered by Athena is\ndesirable. Current X-ray telescopes have shown that the intensity of the\nparticle induced background can be highly variable. Different regions of the\nmagnetosphere can have very different environmental conditions, which can, in\nprinciple, differently affect the particle induced background detected by the\ninstruments. We present results concerning the influence of the magnetospheric\nenvironment on the background detected by EPIC instrument onboard XMM-Newton\nthrough the estimate of the variation of the in-Field-of-View background excess\nalong the XMM-Newton orbit. An important contribution to the XMM background,\nwhich may affect the Athena background as well, comes from soft proton flares.\nAlong with the flaring component a low-intensity component is also present. We\nfind that both show modest variations in the different magnetozones and that\nthe soft proton component shows a strong trend with the distance from Earth.\n",
"title": "A systematic analysis of the XMM-Newton background: III. Impact of the magnetospheric environment"
}
| null | null |
[
"Physics"
] | null | true | null |
4842
| null |
Validated
| null | null |
null |
{
"abstract": " We present a principled approach to uncover the structure of visual data by\nsolving a novel deep learning task coined visual permutation learning. The goal\nof this task is to find the permutation that recovers the structure of data\nfrom shuffled versions of it. In the case of natural images, this task boils\ndown to recovering the original image from patches shuffled by an unknown\npermutation matrix. Unfortunately, permutation matrices are discrete, thereby\nposing difficulties for gradient-based methods. To this end, we resort to a\ncontinuous approximation of these matrices using doubly-stochastic matrices\nwhich we generate from standard CNN predictions using Sinkhorn iterations.\nUnrolling these iterations in a Sinkhorn network layer, we propose DeepPermNet,\nan end-to-end CNN model for this task. The utility of DeepPermNet is\ndemonstrated on two challenging computer vision problems, namely, (i) relative\nattributes learning and (ii) self-supervised representation learning. Our\nresults show state-of-the-art performance on the Public Figures and OSR\nbenchmarks for (i) and on the classification and segmentation tasks on the\nPASCAL VOC dataset for (ii).\n",
"title": "DeepPermNet: Visual Permutation Learning"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4843
| null |
Validated
| null | null |
null |
{
"abstract": " 6d superconformal field theories (SCFTs) are the SCFTs in the highest\npossible dimension. They can be geometrically engineered in F-theory by\ncompactifying on non-compact elliptic Calabi-Yau manifolds. In this paper we\nfocus on the class of SCFTs whose base geometry is determined by $-2$ curves\nintersecting according to ADE Dynkin diagrams and derive the corresponding\nmirror Calabi-Yau manifold. The mirror geometry is uniquely determined in terms\nof the mirror curve which has also an interpretation in terms of the\nSeiberg-Witten curve of the four-dimensional theory arising from torus\ncompactification. Adding the affine node of the ADE quiver to the base\ngeometry, we connect to recent results on SYZ mirror symmetry for the $A$ case\nand provide a physical interpretation in terms of little string theory. Our\nresults, however, go beyond this case as our construction naturally covers the\n$D$ and $E$ cases as well.\n",
"title": "ADE String Chains and Mirror Symmetry"
}
| null | null |
[
"Mathematics"
] | null | true | null |
4844
| null |
Validated
| null | null |
null |
{
"abstract": " In this article we consider the completely multiplicative sequences $(a_n)_{n\n\\in \\mathbf{N}}$ defined on a field $\\mathbf{K}$ and satisfying $$\\sum_{p| p\n\\leq n, a_p \\neq 1, p \\in \\mathbf{P}}\\frac{1}{p}<\\infty,$$ where $\\mathbf{P}$\nis the set of prime numbers. We prove that if such sequences are automatic then\nthey cannot have infinitely many prime numbers $p$ such that $a_{p}\\neq 1$.\nUsing this fact, we prove that if a completely multiplicative sequence\n$(a_n)_{n \\in \\mathbf{N}}$, vanishing or not, can be written in the form\n$a_n=b_n\\chi_n$ such that $(b_n)_{n \\in \\mathbf{N}}$ is a non ultimately\nperiodic, completely multiplicative automatic sequence satisfying the above\ncondition, and $(\\chi_n)_{n \\in \\mathbf{N}}$ is a Dirichlet character or a\nconstant sequence, then there exists only one prime number $p$ such that $b_p\n\\neq 1$ or $0$.\n",
"title": "(non)-automaticity of completely multiplicative sequences having negligible many non-trivial prime factors"
}
| null | null | null | null | true | null |
4845
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we analyzed parasitic coupling capacitance coming from dummy\nmetal fill and its impact on timing. Based on the modeling, we proposed two\napproaches to minimize the timing impact from dummy metal fill. The first\napproach applies more spacing between critical nets and metal fill, while the\nsecond approach leverages the shielding effects of reference nets. Experimental\nresults show consistent improvement compared to traditional metal fill method.\n",
"title": "Timing Aware Dummy Metal Fill Methodology"
}
| null | null | null | null | true | null |
4846
| null |
Default
| null | null |
null |
{
"abstract": " We consider a manager, who allocates some fixed total payment amount between\n$N$ rational agents in order to maximize the aggregate production. The profit\nof $i$-th agent is the difference between the compensation (reward) obtained\nfrom the manager and the production cost. We compare (i) the \\emph{normative}\ncompensation scheme, where the manager enforces the agents to follow an optimal\ncooperative strategy; (ii) the \\emph{linear piece rates} compensation scheme,\nwhere the manager announces an optimal reward per unit good; (iii) the\n\\emph{proportional} compensation scheme, where agent's reward is proportional\nto his contribution to the total output. Denoting the correspondent total\nproduction levels by $s^*$, $\\hat s$ and $\\overline s$ respectively, where the\nlast one is related to the unique Nash equilibrium, we examine the limits of\nthe prices of anarchy $\\mathscr A_N=s^*/\\overline s$, $\\mathscr A_N'=\\hat\ns/\\overline s$ as $N\\to\\infty$. These limits are calculated for the cases of\nidentical convex costs with power asymptotics at the origin, and for power\ncosts, corresponding to the Coob-Douglas and generalized CES production\nfunctions with decreasing returns to scale. Our results show that\nasymptotically no performance is lost in terms of $\\mathscr A'_N$, and in terms\nof $\\mathscr A_N$ the loss does not exceed $31\\%$.\n",
"title": "Asymptotic efficiency of the proportional compensation scheme for a large number of producers"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4847
| null |
Validated
| null | null |
null |
{
"abstract": " Continuous attractors have been used to understand recent neuroscience\nexperiments where persistent activity patterns encode internal representations\nof external attributes like head direction or spatial location. However, the\nconditions under which the emergent bump of neural activity in such networks\ncan be manipulated by space and time-dependent external sensory or motor\nsignals are not understood. Here, we find fundamental limits on how rapidly\ninternal representations encoded along continuous attractors can be updated by\nan external signal. We apply these results to place cell networks to derive a\nvelocity-dependent non-equilibrium memory capacity in neural networks.\n",
"title": "Non-equilibrium statistical mechanics of continuous attractors"
}
| null | null | null | null | true | null |
4848
| null |
Default
| null | null |
null |
{
"abstract": " A $(t, s, v)$-all-or-nothing transform is a bijective mapping defined on\n$s$-tuples over an alphabet of size $v$, which satisfies the condition that the\nvalues of any $t$ input co-ordinates are completely undetermined, given only\nthe values of any $s-t$ output co-ordinates. The main question we address in\nthis paper is: for which choices of parameters does a $(t, s,\nv)$-all-or-nothing transform (AONT) exist? More specifically, if we fix $t$ and\n$v$, we want to determine the maximum integer $s$ such that a $(t, s, v)$-AONT\nexists. We mainly concentrate on the case $t=2$ for arbitrary values of $v$,\nwhere we obtain various necessary as well as sufficient conditions for\nexistence of these objects. We consider both linear and general (linear or\nnonlinear) AONT. We also show some connections between AONT, orthogonal arrays\nand resilient functions.\n",
"title": "Some results on the existence of t-all-or-nothing transforms over arbitrary alphabets"
}
| null | null | null | null | true | null |
4849
| null |
Default
| null | null |
null |
{
"abstract": " High-availability of software systems requires automated handling of crashes\nin presence of errors. Failure-oblivious computing is one technique that aims\nto achieve high availability. We note that failure-obliviousness has not been\nstudied in depth yet, and there is very few study that helps understand why\nfailure-oblivious techniques work. In order to make failure-oblivious computing\nto have an impact in practice, we need to deeply understand failure-oblivious\nbehaviors in software. In this paper, we study, design and perform an\nexperiment that analyzes the size and the diversity of the failure-oblivious\nbehaviors. Our experiment consists of exhaustively computing the search space\nof 16 field failures of large-scale open-source Java software. The outcome of\nthis experiment is a much better understanding of what really happens when\nfailure-oblivious computing is used, and this opens new promising research\ndirections.\n",
"title": "Exhaustive Exploration of the Failure-oblivious Computing Search Space"
}
| null | null | null | null | true | null |
4850
| null |
Default
| null | null |
null |
{
"abstract": " We elucidate the importance of the consistent treatment of gravity-model\nspecific non-linearities when estimating the growth of cosmological structures\nfrom redshift space distortions (RSD). Within the context of standard\nperturbation theory (SPT), we compare the predictions of two theoretical\ntemplates with redshift space data from COLA (COmoving Lagrangian Acceleration)\nsimulations in the normal branch of DGP gravity (nDGP) and General Relativity\n(GR). Using COLA for these comparisons is validated using a suite of full\nN-body simulations for the same theories. The two theoretical templates\ncorrespond to the standard general relativistic perturbation equations and\nthose same equations modelled within nDGP. Gravitational clustering non-linear\neffects are accounted for by modelling the power spectrum up to one loop order\nand redshift space clustering anisotropy is modelled using the Taruya,\nNishimichi and Saito (TNS) RSD model. Using this approach, we attempt to\nrecover the simulation's fiducial logarithmic growth parameter $f$. By\nassigning the simulation data with errors representing an idealised survey with\na volume of $10\\mbox{Gpc}^3/h^3$, we find the GR template is unable to recover\nfiducial $f$ to within 1$\\sigma$ at $z=1$ when we match the data up to $k_{\\rm\nmax}=0.195h$/Mpc. On the other hand, the DGP template recovers the fiducial\nvalue within $1\\sigma$. Further, we conduct the same analysis for sets of mock\ndata generated for generalised models of modified gravity using SPT, where\nagain we analyse the GR template's ability to recover the fiducial value. We\nfind that for models with enhanced gravitational non-linearity, the theoretical\nbias of the GR template becomes significant for stage IV surveys. Thus, we show\nthat for the future large data volume galaxy surveys, the self-consistent\nmodelling of non-GR gravity scenarios will be crucial in constraining theory\nparameters.\n",
"title": "Theoretical Accuracy in Cosmological Growth Estimation"
}
| null | null | null | null | true | null |
4851
| null |
Default
| null | null |
null |
{
"abstract": " We develop a novel method for counterfactual analysis based on observational\ndata using prediction intervals for units under different exposures. Unlike\nmethods that target heterogeneous or conditional average treatment effects of\nan exposure, the proposed approach aims to take into account the irreducible\ndispersions of counterfactual outcomes so as to quantify the relative impact of\ndifferent exposures. The prediction intervals are constructed in a\ndistribution-free and model-robust manner based on the conformal prediction\napproach. The computational obstacles to this approach are circumvented by\nleveraging properties of a tuning-free method that learns sparse additive\npredictor models for counterfactual outcomes. The method is illustrated using\nboth real and synthetic data.\n",
"title": "Model-Robust Counterfactual Prediction Method"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
4852
| null |
Validated
| null | null |
null |
{
"abstract": " The Generalized Pareto Distribution (GPD) plays a central role in modelling\nheavy tail phenomena in many applications. Applying the GPD to actual datasets\nhowever is a non-trivial task. One common way suggested in the literature to\ninvestigate the tail behaviour is to take logarithm to the original dataset in\norder to reduce the sample variability. Inspired by this, we propose and study\nthe Exponentiated Generalized Pareto Distribution (exGPD), which is created via\nlog-transform of the GPD variable. After introducing the exGPD we derive\nvarious distributional quantities, including the moment generating function,\ntail risk measures. As an application we also develop a plot as an alternative\nto the Hill plot to identify the tail index of heavy tailed datasets, based on\nthe moment matching for the exGPD. Various numerical analyses with both\nsimulated and actual datasets show that the proposed plot works well.\n",
"title": "Exponentiated Generalized Pareto Distribution: Properties and applications towards Extreme Value Theory"
}
| null | null | null | null | true | null |
4853
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we introduce the {\\em average top-$k$} (\\atk) loss as a new\naggregate loss for supervised learning, which is the average over the $k$\nlargest individual losses over a training dataset. We show that the \\atk loss\nis a natural generalization of the two widely used aggregate losses, namely the\naverage loss and the maximum loss, but can combine their advantages and\nmitigate their drawbacks to better adapt to different data distributions.\nFurthermore, it remains a convex function over all individual losses, which can\nlead to convex optimization problems that can be solved effectively with\nconventional gradient-based methods. We provide an intuitive interpretation of\nthe \\atk loss based on its equivalent effect on the continuous individual loss\nfunctions, suggesting that it can reduce the penalty on correctly classified\ndata. We further give a learning theory analysis of \\matk learning on the\nclassification calibration of the \\atk loss and the error bounds of \\atk-SVM.\nWe demonstrate the applicability of minimum average top-$k$ learning for binary\nclassification and regression using synthetic and real datasets.\n",
"title": "Learning with Average Top-k Loss"
}
| null | null | null | null | true | null |
4854
| null |
Default
| null | null |
null |
{
"abstract": " Reflexive polytopes form one of the distinguished classes of lattice\npolytopes. Especially reflexive polytopes which possess the integer\ndecomposition property are of interest. In the present paper, by virtue of the\nalgebraic technique on Gröbner bases, a new class of reflexive polytopes\nwhich possess the integer decomposition property and which arise from perfect\ngraphs will be presented. Furthermore, the Ehrhart $\\delta$-polynomials of\nthese polytopes will be studied.\n",
"title": "Reflexive polytopes arising from perfect graphs"
}
| null | null | null | null | true | null |
4855
| null |
Default
| null | null |
null |
{
"abstract": " Neural networks have been successfully applied in applications with a large\namount of labeled data. However, the task of rapid generalization on new\nconcepts with small training data while preserving performances on previously\nlearned ones still presents a significant challenge to neural network models.\nIn this work, we introduce a novel meta learning method, Meta Networks\n(MetaNet), that learns a meta-level knowledge across tasks and shifts its\ninductive biases via fast parameterization for rapid generalization. When\nevaluated on Omniglot and Mini-ImageNet benchmarks, our MetaNet models achieve\na near human-level performance and outperform the baseline approaches by up to\n6% accuracy. We demonstrate several appealing properties of MetaNet relating to\ngeneralization and continual learning.\n",
"title": "Meta Networks"
}
| null | null | null | null | true | null |
4856
| null |
Default
| null | null |
null |
{
"abstract": " Mixture models have become widely used in clustering, given the\nprobabilistic framework on which they are based. However, for modern databases\nthat are characterized by their large size, these models behave disappointingly\nin fitting the model, making the selection of relevant variables essential for\nthis type of clustering. After recalling the basics of model-based clustering,\nthis article examines variable selection methods for model-based clustering, as\nwell as presenting opportunities for improvement of these methods.\n",
"title": "Variable selection for clustering with Gaussian mixture models: state of the art"
}
| null | null | null | null | true | null |
4857
| null |
Default
| null | null |
null |
{
"abstract": " Scanning superconducting quantum interference device microscopy (SSM) is a\nscanning probe technique that images local magnetic flux, which allows for\nmapping of magnetic fields with high field and spatial accuracy. Many studies\ninvolving SSM have been published in the last decades, using SSM to make\nqualitative statements about magnetism. However, quantitative analysis using\nSSM has received less attention. In this work, we discuss several aspects of\ninterpreting SSM images and methods to improve quantitative analysis. First, we\nanalyse the spatial resolution and how it depends on several factors. Second,\nwe discuss the analysis of SSM scans and the information obtained from the SSM\ndata. Using simulations, we show how signals evolve as a function of changing\nscan height, SQUID loop size, magnetization strength and orientation. We also\ninvestigate 2-dimensional autocorrelation analysis to extract information\nabout the size, shape and symmetry of magnetic features. Finally, we provide an\noutlook on possible future applications and improvements.\n",
"title": "Analysing Magnetism Using Scanning SQUID Microscopy"
}
| null | null | null | null | true | null |
4858
| null |
Default
| null | null |
null |
{
"abstract": " Machine Learning models incorporating multiple layered learning networks have\nbeen seen to provide effective models for various classification problems. The\nresulting optimization problem to solve for the optimal vector minimizing the\nempirical risk is, however, highly nonconvex. This alone presents a challenge\nto application and development of appropriate optimization algorithms for\nsolving the problem. However, in addition, there are a number of interesting\nproblems for which the objective function is nonsmooth and nonseparable. In\nthis paper, we summarize the primary challenges involved, the state of the art,\nand present some numerical results on an interesting and representative class\nof problems.\n",
"title": "Algorithms for solving optimization problems arising from deep neural net models: nonsmooth problems"
}
| null | null | null | null | true | null |
4859
| null |
Default
| null | null |
null |
{
"abstract": " We prove a general essential self-adjointness criterion for sub-Laplacians on\ncomplete sub-Riemannian manifolds, defined with respect to singular measures.\nAs a consequence, we show that the intrinsic sub-Laplacian (i.e. defined w.r.t.\nPopp's measure) is essentially self-adjoint on the equiregular connected\ncomponents of a sub-Riemannian manifold. This result holds under mild\nregularity assumptions on the singular region, and when the latter does not\ncontain characteristic points.\n",
"title": "On the essential self-adjointness of singular sub-Laplacians"
}
| null | null | null | null | true | null |
4860
| null |
Default
| null | null |
null |
{
"abstract": " Recent years have seen a growing interest in understanding deep neural\nnetworks from an optimization perspective. It is understood now that converging\nto low-cost local minima is sufficient for such models to become effective in\npractice. However, in this work, we propose a new hypothesis based on recent\ntheoretical findings and empirical studies that deep neural network models\nactually converge to saddle points with high degeneracy. Our findings from this\nwork are new, and can have a significant impact on the development of gradient\ndescent based methods for training deep networks. We validated our hypotheses\nusing an extensive experimental evaluation on standard datasets such as MNIST\nand CIFAR-10, and also showed that recent efforts that attempt to escape\nsaddles finally converge to saddles with high degeneracy, which we define as\n`good saddles'. We also verified the famous Wigner's Semicircle Law in our\nexperimental results.\n",
"title": "Are Saddles Good Enough for Deep Learning?"
}
| null | null | null | null | true | null |
4861
| null |
Default
| null | null |
null |
{
"abstract": " We show that the convex hull of a monotone perturbation of a homogeneous\nbackground conductivity in the $p$-conductivity equation is determined by\nknowledge of the nonlinear Dirichlet-Neumann operator. We give two independent\nproofs, one of which is based on the monotonicity method and the other on the\nenclosure method. Our results are constructive and require no jump or\nsmoothness properties on the conductivity perturbation or its support.\n",
"title": "Monotonicity and enclosure methods for the p-Laplace equation"
}
| null | null | null | null | true | null |
4862
| null |
Default
| null | null |
null |
{
"abstract": " Recent experiments demonstrate that molecular motors from the Myosin II\nfamily serve as cross-links inducing active tension in the cytoskeletal\nnetwork. Here we revise the Brownian ratchet model, previously studied in the\ncontext of active transport along polymer tracks, in setups resembling a motor\nin a polymer network, also taking into account the effect of electrostatic\nchanges in the motor heads. We explore important mechanical quantities and show\nthat such a model is also capable of mechanosensing. Finally, we introduce a\nnovel efficiency based on excess heat production by the chemical cycle which is\ndirectly related to the active tension the motor exerts. The chemical\nefficiencies differ considerably for motors with a different number of heads,\nwhile their mechanical properties remain qualitatively similar. For motors with\na small number of heads, the chemical efficiency is maximal when they are\nfrustrated, a trait that is not found in larger motors.\n",
"title": "Tension and chemical efficiency of Myosin-II motors"
}
| null | null | null | null | true | null |
4863
| null |
Default
| null | null |
null |
{
"abstract": " In distributed function computation, each node has an initial value and the\ngoal is to compute a function of these values in a distributed manner. In this\npaper, we propose a novel token-based approach to compute a wide class of\ntarget functions to which we refer as \"Token-based function Computation with\nMemory\" (TCM) algorithm. In this approach, node values are attached to tokens\nand travel across the network. Each pair of travelling tokens would coalesce\nwhen they meet, forming a token with a new value as a function of the original\ntoken values. In contrast to the Coalescing Random Walk (CRW) algorithm, where\ntoken movement is governed by random walk, meeting of tokens in our scheme is\naccelerated by adopting a novel chasing mechanism. We prove that, compared to\nthe CRW algorithm, the TCM algorithm results in a reduction of time complexity\nby a factor of at least $\\sqrt{n/\\log(n)}$ in Erdős-Rényi and complete\ngraphs, and by a factor of $\\log(n)/\\log(\\log(n))$ in torus networks.\nSimulation results show that there is at least a constant factor improvement in\nthe message complexity of the TCM algorithm in all considered topologies.\nRobustness of the CRW and TCM algorithms in the presence of node failure is\nanalyzed. We show that their robustness can be improved by running multiple\ninstances of the algorithms in parallel.\n",
"title": "Token-based Function Computation with Memory"
}
| null | null | null | null | true | null |
4864
| null |
Default
| null | null |
null |
{
"abstract": " How individuals adapt their behavior in cultural evolution remains elusive.\nTheoretical studies have shown that the update rules chosen to model individual\ndecision making can dramatically modify the evolutionary outcome of the\npopulation as a whole. This hints at the complexities of considering the\npersonality of individuals in a population, where each one uses its own rule.\nHere, we investigate whether and how heterogeneity in the rules of behavior\nupdate alters the evolutionary outcome. We assume that individuals update\nbehaviors by aspiration-based self-evaluation and they do so in their own ways.\nUnder weak selection, we analytically reveal a simple property that holds for\nany two-strategy multi-player games in well-mixed populations and on regular\ngraphs: the evolutionary outcome in a population with heterogeneous update\nrules is the weighted average of the outcomes in the corresponding homogeneous\npopulations, and the associated weights are the frequencies of each update rule\nin the heterogeneous population. Beyond weak selection, we show that this\nproperty holds for public goods games. Our finding implies that heterogeneous\naspiration dynamics is additive. This additivity greatly reduces the complexity\ninduced by the underlying individual heterogeneity. Our work thus provides an\nefficient method to calculate evolutionary outcomes under heterogeneous update\nrules.\n",
"title": "Simple property of heterogeneous aspiration dynamics: Beyond weak selection"
}
| null | null | null | null | true | null |
4865
| null |
Default
| null | null |
null |
{
"abstract": " In warm dark matter scenarios structure formation is suppressed on small\nscales with respect to the cold dark matter case, reducing the number of\nlow-mass halos and the fraction of ionized gas at high redshifts and thus,\ndelaying reionization. This has an impact on the ionization history of the\nUniverse and measurements of the optical depth to reionization, of the\nevolution of the global fraction of ionized gas and of the thermal history of\nthe intergalactic medium, can be used to set constraints on the mass of the\ndark matter particle. However, the suppression of the fraction of ionized\nmedium in these scenarios can be partly compensated by varying other\nparameters, as the ionization efficiency or the minimum mass for which halos\ncan host star-forming galaxies. Here we use different data sets regarding the\nionization and thermal histories of the Universe and, taking into account the\ndegeneracies from several astrophysical parameters, we obtain a lower bound on\nthe mass of thermal warm dark matter candidates of $m_X > 1.3$ keV, or $m_s >\n5.5$ keV for the case of sterile neutrinos non-resonantly produced in the early\nUniverse, both at 90\\% confidence level.\n",
"title": "Warm dark matter and the ionization history of the Universe"
}
| null | null | null | null | true | null |
4866
| null |
Default
| null | null |
null |
{
"abstract": " Aluminum lumped-element kinetic inductance detectors (LEKIDs) sensitive to\nmillimeter-wave photons have been shown to exhibit high quality factors, making\nthem highly sensitive and multiplexable. The superconducting gap of aluminum\nlimits aluminum LEKIDs to photon frequencies above 100 GHz. Manganese-doped\naluminum (Al-Mn) has a tunable critical temperature and could therefore be an\nattractive material for LEKIDs sensitive to frequencies below 100 GHz if the\ninternal quality factor remains sufficiently high when manganese is added to\nthe film. To investigate, we measured some of the key properties of Al-Mn\nLEKIDs. A prototype eight-element LEKID array was fabricated using a 40 nm\nthick film of Al-Mn deposited on a 500 {\\mu}m thick high-resistivity,\nfloat-zone silicon substrate. The manganese content was 900 ppm, the measured\n$T_c = 694\\pm1$ mK, and the resonance frequencies were near 150 MHz. Using\nmeasurements of the forward scattering parameter $S_{21}$ at various bath\ntemperatures between 65 and 250 mK, we determined that the Al-Mn LEKIDs we\nfabricated have internal quality factors greater than $2 \\times 10^5$, which is\nhigh enough for millimeter-wave astrophysical observations. In the dark\nconditions under which these devices were measured, the fractional frequency\nnoise spectrum shows a shallow slope that depends on bath temperature and probe\ntone amplitude, which could be two-level system noise. The anticipated white\nphoton noise should dominate this level of low-frequency noise when the\ndetectors are illuminated with millimeter-waves in future measurements. The\nLEKIDs responded to light pulses from a 1550 nm light-emitting diode, and we\nused these light pulses to determine that the quasiparticle lifetime is 60\n{\\mu}s.\n",
"title": "High quality factor manganese-doped aluminum lumped-element kinetic inductance detectors sensitive to frequencies below 100 GHz"
}
| null | null | null | null | true | null |
4867
| null |
Default
| null | null |
null |
{
"abstract": " We calculate the universal spectrum of trimer and tetramer states in\nheteronuclear mixtures of ultracold atoms with different masses in the vicinity\nof the heavy-light dimer threshold. To extract the energies, we solve the\nthree- and four-body problem for simple two- and three-body potentials tuned to\nthe universal region using the Gaussian expansion method. We focus on the case\nof one light particle of mass $m$ and two or three heavy bosons of mass $M$\nwith resonant heavy-light interactions. We find that trimer and tetramer cross\ninto the heavy-light dimer threshold at almost the same point and that as the\nmass ratio $M/m$ decreases, the distance between the thresholds for trimer and\ntetramer states becomes smaller. We also comment on the possibility of\nobserving exotic three-body states consisting of a dimer and two atoms in this\nregion and compare with previous work.\n",
"title": "Tetramer Bound States in Heteronuclear Systems"
}
| null | null | null | null | true | null |
4868
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we have constructed dark energy models in an anisotropic\nBianchi-V space-time and studied the role of anisotropy in the evolution of\ndark energy. We have considered anisotropic dark energy fluid with different\npressure gradients along different spatial directions. In order to obtain a\ndeterministic solution, we have considered three general forms of scale factor.\nThe different forms of scale factor considered here produce time-varying\ndeceleration parameters in all cases, simulating the cosmic transition. The\nvariable equation of state (EoS) parameter and skewness parameters for all the\nmodels are obtained and analyzed. The physical properties of the models are\nalso discussed.\n",
"title": "Dark Energy Cosmological Models with General forms of Scale Factor"
}
| null | null | null | null | true | null |
4869
| null |
Default
| null | null |
null |
{
"abstract": " With the huge influx of various data nowadays, extracting knowledge from them\nhas become an interesting but tedious task among data scientists, particularly\nwhen the data come in heterogeneous form and have missing information. Many\ndata completion techniques have been introduced, especially in the advent of\nkernel methods. However, among the many data completion techniques available in\nthe literature, studies about mutually completing several incomplete kernel\nmatrices have not been given much attention yet. In this paper, we present a\nnew method, called the Mutual Kernel Matrix Completion (MKMC) algorithm, that\ntackles this problem of mutually inferring the missing entries of multiple\nkernel matrices by combining the notions of data fusion and kernel matrix\ncompletion, applied to biological data sets to be used for a classification\ntask. We first introduce an objective function that is minimized by exploiting\nthe EM algorithm, which in turn results in an estimate of the missing entries\nof the kernel matrices involved. The completed kernel matrices are then\ncombined to produce a model matrix that can be used to further improve the\nobtained estimates. An interesting result of our study is that the E-step and\nthe M-step are given in closed form, which makes our algorithm efficient in\nterms of time and memory. After completion, the (completed) kernel matrices are\nthen used to train an SVM classifier to test how well the relationships among\nthe entries are preserved. Our empirical results show that the proposed\nalgorithm bested the traditional completion techniques in preserving the\nrelationships among the data points, and in accurately recovering the missing\nkernel matrix entries. By far, MKMC offers a promising solution to the problem\nof mutual estimation of a number of relevant incomplete kernel matrices.\n",
"title": "Mutual Kernel Matrix Completion"
}
| null | null | null | null | true | null |
4870
| null |
Default
| null | null |
null |
{
"abstract": " We give an algebraic quantization, in the sense of quantum groups, of the\ncomplex Minkowski space, and we examine the real forms corresponding to the\nsignatures $(3,1)$, $(2,2)$, $(4,0)$, constructing the corresponding quantum\nmetrics and providing an explicit presentation of the quantized coordinate\nalgebras. In particular, we focus on the Kleinian signature $(2,2)$. The\nquantizations of the complex and real spaces come together with a coaction of\nthe quantizations of the respective symmetry groups. We also extend such\nquantizations to the $\\mathcal{N}=1$ supersetting.\n",
"title": "Quantum Klein Space and Superspace"
}
| null | null |
[
"Mathematics"
] | null | true | null |
4871
| null |
Validated
| null | null |
null |
{
"abstract": " It is well known that the Lasso can be interpreted as a Bayesian posterior\nmode estimate with a Laplacian prior. Obtaining samples from the full posterior\ndistribution, the Bayesian Lasso, confers major advantages in performance as\ncompared to having only the Lasso point estimate. Traditionally, the Bayesian\nLasso is implemented via Gibbs sampling methods which suffer from lack of\nscalability, unknown convergence rates, and generation of samples that are\nnecessarily correlated. We provide a measure transport approach to generate\ni.i.d. samples from the posterior by constructing a transport map that\ntransforms a sample from the Laplacian prior into a sample from the posterior.\nWe show how the construction of this transport map can be parallelized into\nmodules that iteratively solve Lasso problems and perform closed-form linear\nalgebra updates. With this posterior sampling method, we perform maximum\nlikelihood estimation of the Lasso regularization parameter via the EM\nalgorithm. We provide comparisons to traditional Gibbs samplers using the\ndiabetes dataset of Efron et al. Lastly, we give an example implementation on a\ncomputing system that leverages parallelization, a graphics processing unit,\nwhose execution time has much less dependence on dimension as compared to a\nstandard implementation.\n",
"title": "Bayesian Lasso Posterior Sampling via Parallelized Measure Transport"
}
| null | null | null | null | true | null |
4872
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we investigate the endogenous information contained in four\nliquidity variables at a five minutes time scale on equity markets around the\nworld: the traded volume, the bid-ask spread, the volatility and the volume at\nfirst limits of the orderbook. In the spirit of Granger causality, we measure\nthe level of information by the level of accuracy of linear autoregressive\nmodels. This empirical study is carried out on a dataset of more than 300\nstocks from four different markets (US, UK, Japan and Hong Kong) over a period\nof more than five years. We discuss the obtained performances of autoregressive\n(AR) models on stationarized versions of the variables, focusing on explaining\nthe observed differences between stocks.\nSince empirical studies are often conducted at this time scale, we believe it\nis of paramount importance to document endogenous dynamics in a simple\nframework with no addition of supplemental information. Our study can hence be\nused as a benchmark to identify exogenous effects. On the other hand, most\noptimal trading frameworks (like the celebrated Almgren and Chriss one), focus\non computing an optimal trading speed at a frequency close to the one we\nconsider. Such frameworks very often take i.i.d. assumptions on liquidity\nvariables; this paper documents the auto-correlations emerging from real data,\nopening the door to new developments in optimal trading.\n",
"title": "Endogeneous Dynamics of Intraday Liquidity"
}
| null | null | null | null | true | null |
4873
| null |
Default
| null | null |
null |
{
"abstract": " Data diversity is critical to success when training deep learning models.\nMedical imaging data sets are often imbalanced as pathologic findings are\ngenerally rare, which introduces significant challenges when training deep\nlearning models. In this work, we propose a method to generate synthetic\nabnormal MRI images with brain tumors by training a generative adversarial\nnetwork using two publicly available data sets of brain MRI. We demonstrate two\nunique benefits that the synthetic images provide. First, we illustrate\nimproved performance on tumor segmentation by leveraging the synthetic images\nas a form of data augmentation. Second, we demonstrate the value of generative\nmodels as an anonymization tool, achieving comparable tumor segmentation\nresults when trained on the synthetic data versus when trained on real subject\ndata. Together, these results offer a potential solution to two of the largest\nchallenges facing machine learning in medical imaging, namely the small\nincidence of pathological findings, and the restrictions around sharing of\npatient data.\n",
"title": "Medical Image Synthesis for Data Augmentation and Anonymization using Generative Adversarial Networks"
}
| null | null | null | null | true | null |
4874
| null |
Default
| null | null |
null |
{
"abstract": " Robust feature representation plays a significant role in visual tracking.\nHowever, it remains a challenging issue, since many factors may affect the\nexperimental performance. Existing methods that combine different features\nequally with fixed weights can hardly solve this issue, due to the different\nstatistical properties of different features across various scenarios and\nattributes. In this paper, by exploiting the internal relationship among these\nfeatures, we develop a robust method to construct a more stable feature\nrepresentation. More specifically, we utilize a co-training paradigm to\nformulate the intrinsic complementary information of the multi-feature template\ninto an efficient correlation filter framework. We test our approach on\nchallenging sequences with illumination variation, scale variation, deformation,\netc. Experimental results demonstrate that the proposed method outperforms\nstate-of-the-art methods favorably.\n",
"title": "Adaptive Feature Representation for Visual Tracking"
}
| null | null | null | null | true | null |
4875
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we develop an upper bound for the SPARSEVA (SPARSe Estimation\nbased on a VAlidation criterion) estimation error in a general scheme, i.e.,\nwhen the cost function is strongly convex and the regularized norm is\ndecomposable for a pair of subspaces. We show how this general bound can be\napplied to a sparse regression problem to obtain an upper bound for the\ntraditional SPARSEVA problem. Numerical results are used to illustrate the\neffectiveness of the suggested bound.\n",
"title": "An analysis of the SPARSEVA estimate for the finite sample data case"
}
| null | null | null | null | true | null |
4876
| null |
Default
| null | null |
null |
{
"abstract": " We present the results of a Chandra X-ray survey of the 8 most massive galaxy\nclusters at z>1.2 in the South Pole Telescope 2500 deg^2 survey. We combine\nthis sample with previously-published Chandra observations of 49 massive\nX-ray-selected clusters at 0<z<0.1 and 90 SZ-selected clusters at 0.25<z<1.2 to\nconstrain the evolution of the intracluster medium (ICM) over the past ~10 Gyr.\nWe find that the bulk of the ICM has evolved self similarly over the full\nredshift range probed here, with the ICM density at r>0.2R500 scaling like\nE(z)^2. In the centers of clusters (r<0.1R500), we find significant deviations\nfrom self similarity (n_e ~ E(z)^{0.1+/-0.5}), consistent with no redshift\ndependence. When we isolate clusters with over-dense cores (i.e., cool cores),\nwe find that the average over-density profile has not evolved with redshift --\nthat is, cool cores have not changed in size, density, or total mass over the\npast ~9-10 Gyr. We show that the evolving \"cuspiness\" of clusters in the X-ray,\nreported by several previous studies, can be understood in the context of a\ncool core with fixed properties embedded in a self similarly-evolving cluster.\nWe find no measurable evolution in the X-ray morphology of massive clusters,\nseemingly in tension with the rapidly-rising (with redshift) rate of major\nmergers predicted by cosmological simulations. We show that these two results\ncan be brought into agreement if we assume that the relaxation time after a\nmerger is proportional to the crossing time, since the latter is proportional\nto H(z)^(-1).\n",
"title": "The Remarkable Similarity of Massive Galaxy Clusters From z~0 to z~1.9"
}
| null | null |
[
"Physics"
] | null | true | null |
4877
| null |
Validated
| null | null |
null |
{
"abstract": " We revisit the relegation algorithm by Deprit et al. (Celest. Mech. Dyn.\nAstron. 79:157-182, 2001) in the light of rigorous Nekhoroshev-like theory.\nThis relatively recent algorithm is nowadays widely used for implementing\nclosed form analytic perturbation theories, as it generalises the classical\nBirkhoff normalisation algorithm. The algorithm, here briefly explained by\nmeans of Lie transformations, has so far been introduced and used in a formal\nway, i.e. without providing any rigorous convergence or asymptotic estimates.\nThe overall aim of this paper is to find such quantitative estimates and to\nshow how the results about stability over exponentially long times can be\nrecovered in a simple and effective way, at least in the non-resonant case.\n",
"title": "Rigorous estimates for the relegation algorithm"
}
| null | null | null | null | true | null |
4878
| null |
Default
| null | null |
null |
{
"abstract": " There exists a bijection between the configuration space of a linear pentapod\nand all points $(u,v,w,p_x,p_y,p_z)\\in\\mathbb{R}^{6}$ located on the singular\nquadric $\\Gamma: u^2+v^2+w^2=1$, where $(u,v,w)$ determines the orientation of\nthe linear platform and $(p_x,p_y,p_z)$ its position. Then the set of all\nsingular robot configurations is obtained by intersecting $\\Gamma$ with a cubic\nhypersurface $\\Sigma$ in $\\mathbb{R}^{6}$, which is only quadratic in the\norientation variables and position variables, respectively. This article\ninvestigates the restrictions to be imposed on the design of this mechanism in\norder to obtain a reduction in degree. In detail we study the cases where\n$\\Sigma$ is (1) linear in position variables, (2) linear in orientation\nvariables and (3) quadratic in total. The resulting designs of linear pentapods\nhave the advantage of considerably simplified computation of singularity-free\nspheres in the configuration space. Finally we propose three kinematically\nredundant designs of linear pentapods with a simple singularity surface.\n",
"title": "Linear Pentapods with a Simple Singularity Variety"
}
| null | null | null | null | true | null |
4879
| null |
Default
| null | null |
null |
{
"abstract": " Neural networks, a central tool in machine learning, have demonstrated\nremarkable, high fidelity performance on image recognition and classification\ntasks. These successes evince an ability to accurately represent high\ndimensional functions, potentially of great use in computational and applied\nmathematics. That said, there are few rigorous results about the representation\nerror and trainability of neural networks. Here we characterize both the error\nand the scaling of the error with the size of the network by reinterpreting the\nstandard optimization algorithm used in machine learning applications,\nstochastic gradient descent, as the evolution of a particle system with\ninteractions governed by a potential related to the objective or \"loss\"\nfunction used to train the network. We show that, when the number $n$ of\nparameters is large, the empirical distribution of the particles descends on a\nconvex landscape towards a minimizer at a rate independent of $n$. We establish\na Law of Large Numbers and a Central Limit Theorem for the empirical\ndistribution, which together show that the approximation error of the network\nuniversally scales as $O(n^{-1})$. Remarkably, these properties do not depend\non the dimensionality of the domain of the function that we seek to represent.\nOur analysis also quantifies the scale and nature of the noise introduced by\nstochastic gradient descent and provides guidelines for the step size and batch\nsize to use when training a neural network. We illustrate our findings on\nexamples in which we train a neural network to learn the energy function of the\ncontinuous 3-spin model on the sphere. The approximation error scales as our\nanalysis predicts in as high a dimension as $d=25$.\n",
"title": "Neural Networks as Interacting Particle Systems: Asymptotic Convexity of the Loss Landscape and Universal Scaling of the Approximation Error"
}
| null | null |
[
"Statistics"
] | null | true | null |
4880
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we are motivated by two important applications:\nentropy-regularized optimal transport problem and road or IP traffic demand\nmatrix estimation by entropy model. Both of them include solving a special type\nof optimization problem with linear equality constraints and objective given as\na sum of an entropy regularizer and a linear function. It is known that the\nstate-of-the-art solvers for this problem, which are based on Sinkhorn's method\n(also known as RSA or balancing method), can fail to work, when the\nentropy-regularization parameter is small. We consider the above optimization\nproblem as a particular instance of a general strongly convex optimization\nproblem with linear constraints. We propose a new algorithm to solve this\ngeneral class of problems. Our approach is based on the transition to the dual\nproblem. First, we introduce a new accelerated gradient method with adaptive\nchoice of gradient's Lipschitz constant. Then, we apply this method to the dual\nproblem and show, how to reconstruct an approximate solution to the primal\nproblem with provable convergence rate. We prove the rate $O(1/k^2)$, $k$ being\nthe iteration counter, both for the absolute value of the primal objective\nresidual and constraints infeasibility. Our method has per-iteration complexity\nsimilar to that of Sinkhorn's method, but is faster and more stable numerically\nwhen the regularization parameter is small. We illustrate the advantage of our\nmethod by numerical experiments for the two mentioned applications. We show\nthat there exists a threshold, such that, when the regularization parameter is\nsmaller than this threshold, our method outperforms the Sinkhorn's method in\nterms of computation time.\n",
"title": "Adaptive Similar Triangles Method: a Stable Alternative to Sinkhorn's Algorithm for Regularized Optimal Transport"
}
| null | null | null | null | true | null |
4881
| null |
Default
| null | null |
null |
{
"abstract": " Let $K$ be a field of characteristic zero and $x$ a free variable. A\n$K$-$\\mathcal E$-derivation of $K[x]$ is a $K$-linear map of the form\n$\\operatorname{I}-\\phi$ for some $K$-algebra endomorphism $\\phi$ of $K[x]$,\nwhere $\\operatorname{I}$ denotes the identity map of $K[x]$. In this paper we\nstudy the image of an ideal of $K[x]$ under some $K$-derivations and\n$K$-$\\mathcal E$-derivations of $K[x]$. We show that the LFED conjecture\nproposed in [Z4] holds for all $K$-$\\mathcal E$-derivations and all locally\nfinite $K$-derivations of $K[x]$. We also show that the LNED conjecture\nproposed in [Z4] holds for all locally nilpotent $K$-derivations of $K[x]$, and\nalso for all locally nilpotent $K$-$\\mathcal E$-derivations of $K[x]$ and the\nideals $uK[x]$ such that either $u=0$, or $\\operatorname{deg}\\, u\\le 1$, or $u$\nhas at least one repeated root in the algebraic closure of $K$. As a\nbi-product, the homogeneous Mathieu subspaces (Mathieu-Zhao spaces) of the\nunivariate polynomial algebra over an arbitrary field have also been\nclassified.\n",
"title": "Images of Ideals under Derivations and $\\mathcal E$-Derivations of Univariate Polynomial Algebras over a Field of Characteristic Zero"
}
| null | null | null | null | true | null |
4882
| null |
Default
| null | null |
null |
{
"abstract": " We treat the boundary of the union of blocks in the Jenga game as a surface\nwith a polyhedral structure and consider its genus. We generalize the game and\ndetermine the maximum genus of the generalized game.\n",
"title": "Maximum genus of the Jenga like configurations"
}
| null | null | null | null | true | null |
4883
| null |
Default
| null | null |
null |
{
"abstract": " We introduce $\\mathcal{DLR}^+$, an extension of the n-ary propositionally\nclosed description logic $\\mathcal{DLR}$ to deal with attribute-labelled tuples\n(generalising the positional notation), projections of relations, and global\nand local objectification of relations, able to express inclusion, functional,\nkey, and external uniqueness dependencies. The logic is equipped with both TBox\nand ABox axioms. We show how a simple syntactic restriction on the appearance\nof projections sharing common attributes in a $\\mathcal{DLR}^+$ knowledge base\nmakes reasoning in the language decidable with the same computational\ncomplexity as $\\mathcal{DLR}$. The obtained $\\mathcal{DLR}^\\pm$ n-ary\ndescription logic is able to encode more thoroughly conceptual data models such\nas EER, UML, and ORM.\n",
"title": "A Decidable Very Expressive Description Logic for Databases (Extended Version)"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4884
| null |
Validated
| null | null |
null |
{
"abstract": " Developing a Brain-Computer Interface~(BCI) for seizure prediction can help\nepileptic patients have a better quality of life. However, there are many\ndifficulties and challenges in developing such a system as a real-life support\nfor patients. Because of the nonstationary nature of EEG signals, normal and\nseizure patterns vary across different patients. Thus, finding a group of\nmanually extracted features for the prediction task is not practical. Moreover,\nwhen using implanted electrodes for brain recording massive amounts of data are\nproduced. This big data calls for the need for safe storage and high\ncomputational resources for real-time processing. To address these challenges,\na cloud-based BCI system for the analysis of this big EEG data is presented.\nFirst, a dimensionality-reduction technique is developed to increase\nclassification accuracy as well as to decrease the communication bandwidth and\ncomputation time. Second, following a deep-learning approach, a stacked\nautoencoder is trained in two steps for unsupervised feature extraction and\nclassification. Third, a cloud-computing solution is proposed for real-time\nanalysis of big EEG data. The results on a benchmark clinical dataset\nillustrate the superiority of the proposed patient-specific BCI as an\nalternative method and its expected usefulness in real-life support of epilepsy\npatients.\n",
"title": "Cloud-based Deep Learning of Big EEG Data for Epileptic Seizure Prediction"
}
| null | null | null | null | true | null |
4885
| null |
Default
| null | null |
null |
{
"abstract": " As relational datasets modeled as graphs keep increasing in size and their\ndata-acquisition is permeated by uncertainty, graph-based analysis techniques\ncan become computationally and conceptually challenging. In particular, node\ncentrality measures rely on the assumption that the graph is perfectly known --\na premise not necessarily fulfilled for large, uncertain networks. Accordingly,\ncentrality measures may fail to faithfully extract the importance of nodes in\nthe presence of uncertainty. To mitigate these problems, we suggest a\nstatistical approach based on graphon theory: we introduce formal definitions\nof centrality measures for graphons and establish their connections to\nclassical graph centrality measures. A key advantage of this approach is that\ncentrality measures defined at the modeling level of graphons are inherently\nrobust to stochastic variations of specific graph realizations. Using the\ntheory of linear integral operators, we define degree, eigenvector, Katz and\nPageRank centrality functions for graphons and establish concentration\ninequalities demonstrating that graphon centrality functions arise naturally as\nlimits of their counterparts defined on sequences of graphs of increasing size.\nThe same concentration inequalities also provide high-probability bounds\nbetween the graphon centrality functions and the centrality measures on any\nsampled graph, thereby establishing a measure of uncertainty of the measured\ncentrality score. The same concentration inequalities also provide\nhigh-probability bounds between the graphon centrality functions and the\ncentrality measures on any sampled graph, thereby establishing a measure of\nuncertainty of the measured centrality score.\n",
"title": "Centrality measures for graphons: Accounting for uncertainty in networks"
}
| null | null | null | null | true | null |
4886
| null |
Default
| null | null |
null |
{
"abstract": " We theoretically investigate the stability and linear oscillatory behavior of\na naturally unstable particle whose potential energy is harmonically modulated.\nWe find this fundamental dynamical system is analogous in time to a quantum\nharmonic oscillator. In a certain modulation limit, a.k.a. the Kapitza regime,\nthe modulated oscillator can behave like an effective classic harmonic\noscillator. But in the overlooked opposite limit, the stable modes of\nvibrations are quantized in the modulation parameter space. By analogy with the\nstatistical interpretation of quantum physics, those modes can be characterized\nby the time-energy uncertainty relation of a quantum harmonic oscillator.\nReducing the almost-periodic vibrational modes of the particle to their\nperiodic eigenfunctions, one can transform the original equation of motion to a\ndimensionless Schrödinger stationary wave equation with a harmonic potential.\nThis reduction process introduces two features reminiscent of the quantum\nrealm: a wave-particle duality and a loss of causality that could legitimate a\nstatistical interpretation of the computed eigenfunctions. These results shed\nnew light on periodically time-varying linear dynamical systems and open an\noriginal path in the recently revived field of quantum mechanical analogs.\n",
"title": "A time-periodic mechanical analog of the quantum harmonic oscillator"
}
| null | null | null | null | true | null |
4887
| null |
Default
| null | null |
null |
{
"abstract": " Recent experiments [Schaeffer 2015] have shown that lithium presents an\nextremely anomalous isotope effect in the 15-25 GPa pressure range. In this\narticle we have calculated the anharmonic phonon dispersion of $\\mathrm{^7Li}$\nand $\\mathrm{^6Li}$ under pressure, their superconducting transition\ntemperatures, and the associated isotope effect. We have found a huge\nanharmonic renormalization of a transverse acoustic soft mode along $\\Gamma$K\nin the fcc phase, the expected structure at the pressure range of interest. In\nfact, the anharmonic correction dynamically stabilizes the fcc phase above 25\nGPa. However, we have not found any anomalous scaling of the superconducting\ntemperature with the isotopic mass. Additionally, we have also analyzed whether\nthe two lithium isotopes adopting different structures could explain the\nobserved anomalous behavior. According to our enthalpy calculations including\nzero-point motion and anharmonicity it would not be possible in a stable\nregime.\n",
"title": "Anharmonicity and the isotope effect in superconducting lithium at high pressures: a first-principles approach"
}
| null | null | null | null | true | null |
4888
| null |
Default
| null | null |
null |
{
"abstract": " We present imaging polarimetry of the superluminous supernova SN 2015bn,\nobtained over nine epochs between $-$20 and $+$46 days with the Nordic Optical\nTelescope. This was a nearby, slowly-evolving Type I superluminous supernova\nthat has been studied extensively and for which two epochs of\nspectropolarimetry are also available. Based on field stars, we determine the\ninterstellar polarisation in the Galaxy to be negligible. The polarisation of\nSN 2015bn shows a statistically significant increase during the last epochs,\nconfirming previous findings. Our well-sampled imaging polarimetry series\nallows us to determine that this increase (from $\\sim 0.54\\%$ to $\\gtrsim\n1.10\\%$) coincides in time with rapid changes that took place in the optical\nspectrum. We conclude that the supernova underwent a `phase transition' at\naround $+$20 days, when the photospheric emission shifted from an outer layer,\ndominated by natal C and O, to a more aspherical inner core, dominated by\nfreshly nucleosynthesized material. This two-layered model might account for\nthe characteristic appearance and properties of Type I superluminous\nsupernovae.\n",
"title": "Time-resolved polarimetry of the superluminous SN 2015bn with the Nordic Optical Telescope"
}
| null | null | null | null | true | null |
4889
| null |
Default
| null | null |
null |
{
"abstract": " Even though active learning forms an important pillar of machine learning,\ndeep learning tools are not prevalent within it. Deep learning poses several\ndifficulties when used in an active learning setting. First, active learning\n(AL) methods generally rely on being able to learn and update models from small\namounts of data. Recent advances in deep learning, on the other hand, are\nnotorious for their dependence on large amounts of data. Second, many AL\nacquisition functions rely on model uncertainty, yet deep learning methods\nrarely represent such model uncertainty. In this paper we combine recent\nadvances in Bayesian deep learning into the active learning framework in a\npractical way. We develop an active learning framework for high dimensional\ndata, a task which has been extremely challenging so far, with very sparse\nexisting literature. Taking advantage of specialised models such as Bayesian\nconvolutional neural networks, we demonstrate our active learning techniques\nwith image data, obtaining a significant improvement on existing active\nlearning approaches. We demonstrate this on both the MNIST dataset, as well as\nfor skin cancer diagnosis from lesion images (ISIC2016 task).\n",
"title": "Deep Bayesian Active Learning with Image Data"
}
| null | null | null | null | true | null |
4890
| null |
Default
| null | null |
null |
{
"abstract": " Optical flow estimation in the rainy scenes is challenging due to background\ndegradation introduced by rain streaks and rain accumulation effects in the\nscene. Rain accumulation effect refers to poor visibility of remote objects due\nto the intense rainfall. Most existing optical flow methods are erroneous when\napplied to rain sequences because the conventional brightness constancy\nconstraint (BCC) and gradient constancy constraint (GCC) generally break down\nin this situation. Based on the observation that the RGB color channels receive\nraindrop radiance equally, we introduce a residue channel as a new data\nconstraint to reduce the effect of rain streaks. To handle rain accumulation,\nour method decomposes the image into a piecewise-smooth background layer and a\nhigh-frequency detail layer. It also enforces the BCC on the background layer\nonly. Results on both synthetic dataset and real images show that our algorithm\noutperforms existing methods on different types of rain sequences. To our\nknowledge, this is the first optical flow method specifically dealing with\nrain.\n",
"title": "Robust Optical Flow Estimation in Rainy Scenes"
}
| null | null | null | null | true | null |
4891
| null |
Default
| null | null |
null |
{
"abstract": " Among the many additive manufacturing (AM) processes for metallic materials,\nselective laser melting (SLM) is arguably the most versatile in terms of its\npotential to realize complex geometries along with tailored microstructure.\nHowever, the complexity of the SLM process, and the need for predictive\nrelation of powder and process parameters to the part properties, demands\nfurther development of computational and experimental methods. This review\naddresses the fundamental physical phenomena of SLM, with a special emphasis on\nthe associated thermal behavior. Simulation and experimental methods are\ndiscussed according to three primary categories. First, macroscopic approaches\naim to answer questions at the component level and consider for example the\ndetermination of residual stresses or dimensional distortion effects prevalent\nin SLM. Second, mesoscopic approaches focus on the detection of defects such as\nexcessive surface roughness, residual porosity or inclusions that occur at the\nmesoscopic length scale of individual powder particles. Third, microscopic\napproaches investigate the metallurgical microstructure evolution resulting\nfrom the high temperature gradients and extreme heating and cooling rates\ninduced by the SLM process. Consideration of physical phenomena on all of these\nthree length scales is mandatory to establish the understanding needed to\nrealize high part quality in many applications, and to fully exploit the\npotential of SLM and related metal AM processes.\n",
"title": "Thermophysical Phenomena in Metal Additive Manufacturing by Selective Laser Melting: Fundamentals, Modeling, Simulation and Experimentation"
}
| null | null | null | null | true | null |
4892
| null |
Default
| null | null |
null |
{
"abstract": " Due to complexity and invisibility of human organs, diagnosticians need to\nanalyze medical images to determine where the lesion region is, and which kind\nof disease is, in order to make precise diagnoses. For satisfying clinical\npurposes through analyzing medical images, registration plays an essential\nrole. For instance, in Image-Guided Interventions (IGI) and computer-aided\nsurgeries, patient anatomy is registered to preoperative images to guide\nsurgeons complete procedures. Medical image registration is also very useful in\nsurgical planning, monitoring disease progression and for atlas construction.\nDue to the significance, the theories, methods, and implementation method of\nimage registration constitute fundamental knowledge in educational training for\nmedical specialists. In this chapter, we focus on image registration of a\nspecific human organ, i.e. the lung, which is prone to be lesioned. For\npulmonary image registration, the improvement of the accuracy and how to obtain\nit in order to achieve clinical purposes represents an important problem which\nshould seriously be addressed. In this chapter, we provide a survey which\nfocuses on the role of image registration in educational training together with\nthe state-of-the-art of pulmonary image registration. In the first part, we\ndescribe clinical applications of image registration introducing artificial\norgans in Simulation-based Education. In the second part, we summarize the\ncommon methods used in pulmonary image registration and analyze popular papers\nto obtain a survey of pulmonary image registration.\n",
"title": "Numerical Methods for Pulmonary Image Registration"
}
| null | null | null | null | true | null |
4893
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we address the inverse problem, or the statistical machine\nlearning problem, in Markov random fields with a non-parametric pair-wise\nenergy function with continuous variables. The inverse problem is formulated by\nmaximum likelihood estimation. The exact treatment of maximum likelihood\nestimation is intractable because of two problems: (1) it includes the\nevaluation of the partition function and (2) it is formulated in the form of\nfunctional optimization. We avoid Problem (1) by using Bethe approximation.\nBethe approximation is an approximation technique equivalent to the loopy\nbelief propagation. Problem (2) can be solved by using orthonormal function\nexpansion. Orthonormal function expansion can reduce a functional optimization\nproblem to a function optimization problem. Our method can provide an analytic\nform of the solution of the inverse problem within the framework of Bethe\napproximation.\n",
"title": "Solving Non-parametric Inverse Problem in Continuous Markov Random Field using Loopy Belief Propagation"
}
| null | null |
[
"Computer Science",
"Physics",
"Statistics"
] | null | true | null |
4894
| null |
Validated
| null | null |
null |
{
"abstract": " Spectral graph convolutional neural networks (CNNs) require approximation to\nthe convolution to alleviate the computational complexity, resulting in\nperformance loss. This paper proposes the topology adaptive graph convolutional\nnetwork (TAGCN), a novel graph convolutional network defined in the vertex\ndomain. We provide a systematic way to design a set of fixed-size learnable\nfilters to perform convolutions on graphs. The topologies of these filters are\nadaptive to the topology of the graph when they scan the graph to perform\nconvolution. The TAGCN not only inherits the properties of convolutions in CNN\nfor grid-structured data, but it is also consistent with convolution as defined\nin graph signal processing. Since no approximation to the convolution is\nneeded, TAGCN exhibits better performance than existing spectral CNNs on a\nnumber of data sets and is also computationally simpler than other recent\nmethods.\n",
"title": "Topology Adaptive Graph Convolutional Networks"
}
| null | null | null | null | true | null |
4895
| null |
Default
| null | null |
null |
{
"abstract": " We study the turbulent square duct flow of dense suspensions of\nneutrally-buoyant spherical particles. Direct numerical simulations (DNS) are\nperformed in the range of volume fractions $\\phi=0-0.2$, using the immersed\nboundary method (IBM) to account for the dispersed phase. Based on the\nhydraulic diameter a Reynolds number of $5600$ is considered. We report flow\nfeatures and particle statistics specific to this geometry, and compare the\nresults to the case of two-dimensional channel flows. In particular, we observe\nthat for $\\phi=0.05$ and $0.1$, particles preferentially accumulate on the\ncorner bisectors, close to the duct corners as also observed for laminar square\nduct flows of same duct-to-particle size ratios. At the highest volume\nfraction, particles preferentially accumulate in the core region. For channel\nflows, in the absence of lateral confinement particles are found instead to be\nuniformily distributed across the channel. We also observe that the intensity\nof the cross-stream secondary flows increases (with respect to the unladen\ncase) with the volume fraction up to $\\phi=0.1$, as a consequence of the high\nconcentration of particles along the corner bisector. For $\\phi=0.2$ the\nturbulence activity is strongly reduced and the intensity of the secondary\nflows reduces below that of the unladen case. The friction Reynolds number\nincreases with $\\phi$ in dilute conditions, as observed for channel flows.\nHowever, for $\\phi=0.2$ the mean friction Reynolds number decreases below the\nvalue for $\\phi=0.1$.\n",
"title": "Suspensions of finite-size neutrally-buoyant spheres in turbulent duct flow"
}
| null | null |
[
"Physics"
] | null | true | null |
4896
| null |
Validated
| null | null |
null |
{
"abstract": " Transport of charged carriers in regimes of strong non-equilibrium is\ncritical in a wide array of applications ranging from solar energy conversion\nand semiconductor devices to quantum information. Plasmonic hot-carrier science\nbrings this regime of transport physics to the forefront since photo-excited\ncarriers must be extracted far from equilibrium to harvest their energy\nefficiently. Here, we present a theoretical and computational framework,\nNon-Equilibrium Scattering in Space and Energy (NESSE), to predict the spatial\nevolution of carrier energy distributions that combines the best features of\nphase-space (Boltzmann) and particle-based (Monte Carlo) methods. Within the\nNESSE framework, we bridge first-principles electronic structure predictions of\nplasmon decay and carrier collision integrals at the atomic scale, with\nelectromagnetic field simulations at the nano- to mesoscale. Finally, we apply\nNESSE to predict spatially-resolved energy distributions of photo-excited\ncarriers that impact the surface of experimentally realizable plasmonic\nnanostructures, enabling first-principles design of hot carrier devices.\n",
"title": "Far-from-equilibrium transport of excited carriers in nanostructures"
}
| null | null | null | null | true | null |
4897
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\frak g$ be a semisimple Lie algebra and $\\frak k\\subset\\frak g$ be a\nreductive subalgebra. We say that a $\\frak g$-module $M$ is a bounded $(\\frak\ng, \\frak k)$-module if $M$ is a direct sum of simple finite-dimensional $\\frak\nk$-modules and the multiplicities of all simple $\\frak k$-modules in that\ndirect sum are universally bounded.\nThe goal of this article is to show that the \"boundedness\" property for a\nsimple $(\\frak g, \\frak k)$-module $M$ is equivalent to a property of the\nassociated variety of the annihilator of $M$ (this is the closure of a\nnilpotent coadjoint orbit inside $\\frak g^*$) under the assumption that the\nmain field is algebraically closed and of characteristic 0. In particular this\nimplies that if $M_1, M_2$ are simple $(\\frak g, \\frak k)$-modules such that\n$M_1$ is bounded and the associated varieties of the annihilators of $M_1$ and\n$M_2$ coincide then $M_2$ is also bounded. This statement is a geometric\nanalogue of a purely algebraic fact due to I. Penkov and V. Serganova and it\nwas posed as a conjecture in my Ph.D. thesis.\n",
"title": "On annihilators of bounded $(\\frak g, \\frak k)$-modules"
}
| null | null | null | null | true | null |
4898
| null |
Default
| null | null |
null |
{
"abstract": " Of the roughly 3000 neutron stars known, only a handful have sub-stellar\ncompanions. The most famous of these are the low-mass planets around the\nmillisecond pulsar B1257+12. New evidence indicates that observational biases\ncould still hide a wide variety of planetary systems around most neutron stars.\nWe consider the environment and physical processes relevant to neutron star\nplanets, in particular the effect of X-ray irradiation and the relativistic\npulsar wind on the planetary atmosphere. We discuss the survival time of planet\natmospheres and the planetary surface conditions around different classes of\nneutron stars, and define a neutron star habitable zone. Depending on as-yet\npoorly constrained aspects of the pulsar wind, both Super-Earths around\nB1257+12 could lie within its habitable zone.\n",
"title": "Neutron Star Planets: Atmospheric processes and habitability"
}
| null | null | null | null | true | null |
4899
| null |
Default
| null | null |
null |
{
"abstract": " Discrete time crystals are a recently proposed and experimentally observed\nout-of-equilibrium dynamical phase of Floquet systems, where the stroboscopic\nevolution of a local observable repeats itself at an integer multiple of the\ndriving period. We address this issue in a driven-dissipative setup, focusing\non the modulated open Dicke model, which can be implemented by cavity or\ncircuit QED systems. In the thermodynamic limit, we employ semiclassical\napproaches and find rich dynamical phases on top of the discrete\ntime-crystalline order. In a deep quantum regime with few qubits, we find clear\nsignatures of a transient discrete time-crystalline behavior, which is absent\nin the isolated counterpart. We establish a phenomenology of dissipative\ndiscrete time crystals by generalizing the Landau theory of phase transitions\nto Floquet open systems.\n",
"title": "Discrete Time-Crystalline Order in Cavity and Circuit QED Systems"
}
| null | null | null | null | true | null |
4900
| null |
Default
| null | null |