text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " Track-before-detect (TBD) is a powerful approach that consists in providing\nthe tracker with sensor measurements directly without pre-detection. Due to the\nmeasurement model non-linearities, online state estimation in TBD is most\ncommonly solved via particle filtering. Existing particle filters for TBD do\nnot incorporate measurement information in their proposal distribution. The\nLangevin Monte Carlo (LMC) is a sampling method whose proposal is able to\nexploit all available knowledge of the posterior (that is, both prior and\nmeasurement information). This letter synthesizes recent advances in LMC-based\nfiltering to describe the Riemann-Langevin particle filter and introduces its\nnovel application to TBD. The benefits of our approach are illustrated in a\nchallenging low-noise scenario.\n",
"title": "Riemann-Langevin Particle Filtering in Track-Before-Detect"
}
| null | null | null | null | true | null |
12501
| null |
Default
| null | null |
null |
{
"abstract": " Given a holomorphic principal bundle $Q\\, \\longrightarrow\\, X$, the universal\nspace of holomorphic connections is a torsor $C_1(Q)$ for $\\text{ad} Q \\otimes\nT^*X$ such that the pullback of $Q$ to $C_1(Q)$ has a tautological holomorphic\nconnection. When $X\\,=\\, G/P$, where $P$ is a parabolic subgroup of a complex\nsimple group $G$, and $Q$ is the frame bundle of an ample line bundle, we show\nthat $C_1(Q)$ may be identified with $G/L$, where $L\\, \\subset\\, P$ is a Levi\nfactor. We use this identification to construct the twistor space associated to\na natural hyper-Kähler metric on $T^*(G/P)$, recovering Biquard's description\nof this twistor space, but employing only finite-dimensional, Lie-theoretic\nmeans.\n",
"title": "The universal connection for principal bundles over homogeneous spaces and twistor space of coadjoint orbits"
}
| null | null | null | null | true | null |
12502
| null |
Default
| null | null |
null |
{
"abstract": " We propose to study equivariance in deep neural networks through parameter\nsymmetries. In particular, given a group $\\mathcal{G}$ that acts discretely on\nthe input and output of a standard neural network layer $\\phi_{W}: \\Re^{M} \\to\n\\Re^{N}$, we show that $\\phi_{W}$ is equivariant with respect to\n$\\mathcal{G}$-action iff $\\mathcal{G}$ explains the symmetries of the network\nparameters $W$. Inspired by this observation, we then propose two\nparameter-sharing schemes to induce the desirable symmetry on $W$. Our\nprocedures for tying the parameters achieve $\\mathcal{G}$-equivariance and,\nunder some conditions on the action of $\\mathcal{G}$, they guarantee\nsensitivity to all other permutation groups outside $\\mathcal{G}$.\n",
"title": "Equivariance Through Parameter-Sharing"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12503
| null |
Validated
| null | null |
null |
{
"abstract": " Low-dimensional wide bandgap semiconductors open a new playing field in\nquantum optics using sub-bandgap excitation. In this field, hexagonal boron\nnitride (h-BN) has been reported to host single quantum emitters (QEs), linking\nQE density to perimeters. Furthermore, curvature/perimeters in transition metal\ndichalcogenides (TMDCs) have demonstrated a key role in QE formation. We\ninvestigate a curvature-abundant BN system - quasi one-dimensional BN nanotubes\n(BNNTs) fabricated via a catalyst-free method. We find that non-treated BNNT is\nan abundant source of stable QEs and analyze their emission features down to\nsingle nanotubes, comparing dispersed/suspended material. Combining high\nspatial resolution of a scanning electron microscope, we categorize and\npin-point emission origin to a scale of less than 20 nm, giving us a one-to-one\nvalidation of emission source with dimensions smaller than the laser excitation\nwavelength, elucidating nano-antenna effects. Two emission origins emerge:\nhybrid/entwined BNNT. By artificially curving h-BN flakes, similar QE spectral\nfeatures are observed. The impact on emission of solvents used in commercial\nproducts and curved regions is also demonstrated. The 'out of the box'\navailability of QEs in BNNT, lacking processing contamination, is a milestone\nfor unraveling their atomic features. These findings open possibilities for\nprecision engineering of QEs, puts h-BN under a similar 'umbrella' of TMDC's\nQEs and provides a model explaining QEs spatial localization/formation using\nelectron/ion irradiation and chemical etching.\n",
"title": "Quantum light in curved low dimensional hexagonal boron nitride systems"
}
| null | null | null | null | true | null |
12504
| null |
Default
| null | null |
null |
{
"abstract": " While recent developments in autonomous vehicle (AV) technology highlight\nsubstantial progress, we lack tools for rigorous and scalable testing.\nReal-world testing, the $\\textit{de facto}$ evaluation environment, places the\npublic in danger, and, due to the rare nature of accidents, will require\nbillions of miles in order to statistically validate performance claims. We\nimplement a simulation framework that can test an entire modern autonomous\ndriving system, including, in particular, systems that employ deep-learning\nperception and control algorithms. Using adaptive importance-sampling methods\nto accelerate rare-event probability evaluation, we estimate the probability of\nan accident under a base distribution governing standard traffic behavior. We\ndemonstrate our framework on a highway scenario, accelerating system evaluation\nby $2$-$20$ times over naive Monte Carlo sampling methods and $10$-$300\n\\mathsf{P}$ times (where $\\mathsf{P}$ is the number of processors) over\nreal-world testing.\n",
"title": "Scalable End-to-End Autonomous Vehicle Testing via Rare-event Simulation"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12505
| null |
Validated
| null | null |
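
The speed-up quoted in the record above comes from adaptive importance sampling of rare failure events. As a hedged illustration of the underlying estimator only (a toy Gaussian tail probability, not the authors' driving simulator), the sketch below samples from a mean-shifted proposal and reweights by the likelihood ratio:

```python
import numpy as np

def rare_event_prob_is(threshold=4.0, shift=4.0, n=100_000, seed=0):
    """Estimate P(X > threshold), X ~ N(0, 1), by importance sampling.

    Draws come from the shifted proposal N(shift, 1); each sample is
    reweighted by the likelihood ratio p(x)/q(x) = exp(-shift*x + shift^2/2).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(loc=shift, scale=1.0, size=n)        # proposal draws
    w = np.exp(-shift * x + 0.5 * shift ** 2)           # likelihood ratios
    return float(np.mean((x > threshold) * w))

def rare_event_prob_naive(threshold=4.0, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    return float(np.mean(rng.normal(size=n) > threshold))

print(rare_event_prob_is())     # stable estimate near 1 - Phi(4), ~3.2e-5
print(rare_event_prob_naive())  # high variance: often exactly 0 at this n
```

The same variance-reduction principle applies when the "event" is a simulated traffic accident rather than a Gaussian tail, with the proposal adapted online instead of fixed.
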
null |
{
"abstract": " We identify peak and valley structures in the exact exchange-correlation\npotential of time-dependent density functional theory that are crucial for\ntime-resolved electron scattering in a model one-dimensional system. These\nstructures are completely missed by adiabatic approximations which consequently\nsignificantly underestimate the scattering probability. A recently-proposed\nnon-adiabatic approximation is shown to correctly capture the approach of the\nelectron to the target when the initial Kohn-Sham state is chosen judiciously,\nand is more accurate than standard adiabatic functionals, but it ultimately\nfails to accurately capture reflection. These results may explain the\nunderestimate of scattering probabilities in some recent studies on molecules\nand surfaces.\n",
"title": "Exact time-dependent exchange-correlation potential in electron scattering processes"
}
| null | null | null | null | true | null |
12506
| null |
Default
| null | null |
null |
{
"abstract": " Given two independent sets $I, J$ of a graph $G$, and imagine that a token\n(coin) is placed at each vertex of $I$. The Sliding Token problem asks if one\ncould transform $I$ to $J$ via a sequence of elementary steps, where each step\nrequires sliding a token from one vertex to one of its neighbors so that the\nresulting set of vertices where tokens are placed remains independent. This\nproblem is $\\mathsf{PSPACE}$-complete even for planar graphs of maximum degree\n$3$ and bounded-treewidth. In this paper, we show that Sliding Token can be\nsolved efficiently for cactus graphs and block graphs, and give upper bounds on\nthe length of a transformation sequence between any two independent sets of\nthese graph classes. Our algorithms are designed based on two main\nobservations. First, all structures that forbid the existence of a sequence of\ntoken slidings between $I$ and $J$, if exist, can be found in polynomial time.\nA sufficient condition for determining no-instances can be easily derived using\nthis characterization. Second, without such forbidden structures, a sequence of\ntoken slidings between $I$ and $J$ does exist. In this case, one can indeed\ntransform $I$ to $J$ (and vice versa) using a polynomial number of\ntoken-slides.\n",
"title": "Polynomial-Time Algorithms for Sliding Tokens on Cactus Graphs and Block Graphs"
}
| null | null | null | null | true | null |
12507
| null |
Default
| null | null |
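
The reconfiguration question in the record above can be made concrete with a brute-force breadth-first search over token configurations. This sketch is ours and is exponential in general (the problem is PSPACE-complete), which is precisely why the paper's polynomial-time algorithms for cactus and block graphs matter; the adjacency encoding and the path example are illustrative assumptions:

```python
from collections import deque
from itertools import combinations

def sliding_token_reachable(adj, I, J):
    """Brute-force BFS over token configurations (frozensets of vertices).

    A move slides one token along an edge; the resulting vertex set must
    remain independent. Exponential in general; only usable on tiny graphs.
    """
    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    start, goal = frozenset(I), frozenset(J)
    assert independent(start) and independent(goal)
    seen, queue = {start}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return True
        for u in cur:
            for v in adj[u]:                  # try sliding token u -> v
                if v in cur:
                    continue
                nxt = (cur - {u}) | {v}
                if independent(nxt) and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

# Path 0-1-2-3: tokens on {0, 2} can reach {1, 3} in two slides.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sliding_token_reachable(adj, {0, 2}, {1, 3}))   # True
```
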
null |
{
"abstract": " This work studies the entity-wise topical behavior from massive network logs.\nBoth the temporal and the spatial relationships of the behavior are explored\nwith the learning architectures combing the recurrent neural network (RNN) and\nthe convolutional neural network (CNN). To make the behavioral data appropriate\nfor the spatial learning in CNN, several reduction steps are taken to form the\ntopical metrics and place them homogeneously like pixels in the images. The\nexperimental result shows both the temporal- and the spatial- gains when\ncompared to a multilayer perceptron (MLP) network. A new learning framework\ncalled spatially connected convolutional networks (SCCN) is introduced to more\nefficiently predict the behavior.\n",
"title": "Summarized Network Behavior Prediction"
}
| null | null | null | null | true | null |
12508
| null |
Default
| null | null |
null |
{
"abstract": " The exponential scaling of the wave function is a fundamental property of\nquantum systems with far reaching implications in our ability to process\nquantum information. A problem where these are particularly relevant is quantum\nstate tomography. State tomography, whose objective is to obtain a full\ndescription of a quantum system, can be analysed in the framework of\ncomputational learning theory. In this model, quantum states have been shown to\nbe Probably Approximately Correct (PAC)-learnable with sample complexity linear\nin the number of qubits. However, it is conjectured that in general quantum\nstates require an exponential amount of computation to be learned. Here, using\nresults from the literature on the efficient classical simulation of quantum\nsystems, we show that stabiliser states are efficiently PAC-learnable. Our\nresults solve an open problem formulated by Aaronson [Proc. R. Soc. A, 2088,\n(2007)] and propose learning theory as a tool for exploring the power of\nquantum computation.\n",
"title": "Stabiliser states are efficiently PAC-learnable"
}
| null | null | null | null | true | null |
12509
| null |
Default
| null | null |
null |
{
"abstract": " We show that if a semisimple synchronizing automaton with $n$ states has a\nminimal reachable non-unary subset of cardinality $r\\ge 2$, then there is a\nreset word of length at most $(n-1)D(2,r,n)$, where $D(2,r,n)$ is the\n$2$-packing number for families of $r$-subsets of $[1,n]$.\n",
"title": "A bound for the shortest reset words for semisimple synchronizing automata via the packing number"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
12510
| null |
Validated
| null | null |
null |
{
"abstract": " Recent developments within memory-augmented neural networks have solved\nsequential problems requiring long-term memory, which are intractable for\ntraditional neural networks. However, current approaches still struggle to\nscale to large memory sizes and sequence lengths. In this paper we show how\naccess to memory can be encoded geometrically through a HyperNEAT-based Neural\nTuring Machine (HyperENTM). We demonstrate that using the indirect HyperNEAT\nencoding allows for training on small memory vectors in a bit-vector copy task\nand then applying the knowledge gained from such training to speed up training\non larger size memory vectors. Additionally, we demonstrate that in some\ninstances, networks trained to copy bit-vectors of size 9 can be scaled to\nsizes of 1,000 without further training. While the task in this paper is\nsimple, these results could open up the problems amendable to networks with\nexternal memories to problems with larger memory vectors and theoretically\nunbounded memory sizes.\n",
"title": "HyperENTM: Evolving Scalable Neural Turing Machines through HyperNEAT"
}
| null | null | null | null | true | null |
12511
| null |
Default
| null | null |
null |
{
"abstract": " We present analytical and numerical studies of models of supernova-remnant\n(SNR) blast waves expanding into uniform media and interacting with a denser\ncavity wall, in one spatial dimension. We predict the nonthermal emission from\nsuch blast waves: synchrotron emission at radio and X-ray energies, and\nbremsstrahlung, inverse-Compton emission (from cosmic-microwave-background seed\nphotons, ICCMB), and emission from the decay of $\\pi^0$ mesons produced in\ninelastic collisions between accelerated ions and thermal gas, at GeV and TeV\nenergies. Accelerated particle spectra are assumed to be power-laws with\nexponential cutoffs at energies limited by the remnant age or (for electrons,\nif lower) by radiative losses. We compare the results with those from\nhomogeneous (\"one-zone\") models. Such models give fair representations of the\n1-D results for uniform media, but cavity-wall interactions produce effects for\nwhich one-zone models are inadequate. We study the time evolution of SNR\nmorphology and emission with time. Strong morphological differences exist\nbetween ICCMB and $\\pi^0$-decay emission, at some stages, the TeV emission can\nbe dominated by the former and the GeV by the latter, resulting in strong\nenergy-dependence of morphology. Integrated gamma-ray spectra show apparent\npower-laws of slopes that vary with time, but do not indicate the energy\ndistribution of a single population of particles. As observational capabilities\nat GeV and TeV energies improve, spatial inhomogeneity in SNRs will need to be\naccounted for.\n",
"title": "X-Ray and Gamma-Ray Emission from Middle-aged Supernova Remnants in Cavities. I. Spherical Symmetry"
}
| null | null | null | null | true | null |
12512
| null |
Default
| null | null |
null |
{
"abstract": " We study the optimal design of electricity contracts among a population of\nconsumers with different needs. This question is tackled within the framework\nof Principal-Agent problems in presence of adverse selection. The particular\nfeatures of electricity induce an unusual structure on the production cost,\nwith no decreasing return to scale. We are nevertheless able to provide an\nexplicit solution for the problem at hand. The optimal contracts are either\nlinear or polynomial with respect to the consumption. Whenever the outside\noptions offered by competitors are not uniform among the different type of\nconsumers, we exhibit situations where the electricity provider should contract\nwith consumers with either low or high appetite for electricity.\n",
"title": "An adverse selection approach to power pricing"
}
| null | null | null | null | true | null |
12513
| null |
Default
| null | null |
null |
{
"abstract": " We consider the minimization of an objective function given access to\nunbiased estimates of its gradient through stochastic gradient descent (SGD)\nwith constant step-size. While the detailed analysis was only performed for\nquadratic functions, we provide an explicit asymptotic expansion of the moments\nof the averaged SGD iterates that outlines the dependence on initial\nconditions, the effect of noise and the step-size, as well as the lack of\nconvergence in the general (non-quadratic) case. For this analysis, we bring\ntools from Markov chain theory into the analysis of stochastic gradient. We\nthen show that Richardson-Romberg extrapolation may be used to get closer to\nthe global optimum and we show empirical improvements of the new extrapolation\nscheme.\n",
"title": "Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
12514
| null |
Validated
| null | null |
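
The extrapolation scheme in the record above can be sketched directly: run averaged constant-step SGD at step sizes gamma and 2*gamma and form 2*x_bar(gamma) - x_bar(2*gamma), cancelling the leading O(gamma) bias of the averaged iterate. The one-dimensional objective f(x) = exp(x) - x with additive gradient noise is our toy choice, not the paper's:

```python
import numpy as np

def averaged_sgd(gamma, n_iter, seed):
    """Constant-step SGD on f(x) = exp(x) - x with Polyak-Ruppert averaging."""
    rng = np.random.default_rng(seed)
    x, avg = 0.0, 0.0
    for t in range(n_iter):
        g = np.exp(x) - 1.0 + rng.normal()    # unbiased noisy gradient
        x -= gamma * g
        avg += (x - avg) / (t + 1)            # running average of iterates
    return avg

gamma, n = 0.05, 500_000
x_g = averaged_sgd(gamma, n, seed=1)          # biased by roughly C * gamma
x_2g = averaged_sgd(2 * gamma, n, seed=2)     # biased by roughly 2 * C * gamma
x_rr = 2 * x_g - x_2g                         # extrapolation cancels C * gamma
print(x_g, x_2g, x_rr)                        # the true minimizer is x* = 0
```

The bias term is nonzero here because the objective is not quadratic (its third derivative does not vanish), which is exactly the regime the abstract's Markov-chain analysis targets.
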
null |
{
"abstract": " We study the problem of learning overcomplete HMMs---those that have many\nhidden states but a small output alphabet. Despite having significant practical\nimportance, such HMMs are poorly understood with no known positive or negative\nresults for efficient learning. In this paper, we present several new\nresults---both positive and negative---which help define the boundaries between\nthe tractable and intractable settings. Specifically, we show positive results\nfor a large subclass of HMMs whose transition matrices are sparse,\nwell-conditioned, and have small probability mass on short cycles. On the other\nhand, we show that learning is impossible given only a polynomial number of\nsamples for HMMs with a small output alphabet and whose transition matrices are\nrandom regular graphs with large degree. We also discuss these results in the\ncontext of learning HMMs which can capture long-term dependencies.\n",
"title": "Learning Overcomplete HMMs"
}
| null | null | null | null | true | null |
12515
| null |
Default
| null | null |
null |
{
"abstract": " Atomistic rigid lattice Kinetic Monte Carlo is an efficient method for\nsimulating nano-objects and surfaces at timescales much longer than those\naccessible by molecular dynamics. A laborious part of constructing any Kinetic\nMonte Carlo model is, however, to calculate all migration barriers that are\nneeded to give the probabilities for any atom jump event to occur in the\nsimulations. One of the common methods of barrier calculations is Nudged\nElastic Band. The number of barriers needed to fully describe simulated systems\nis typically between hundreds of thousands and millions. Calculations of such a\nlarge number of barriers of various processes is far from trivial. In this\npaper, we will discuss the challenges arising during barriers calculations on a\nsurface and present a systematic and reliable tethering force approach to\nconstruct a rigid lattice barrier parameterization of face-centred and\nbody-centred cubic metal lattices. We have produced several different barrier\nsets for Cu and for Fe that can be used for KMC simulations of processes on\narbitrarily rough surfaces. The sets are published as Data in Brief articles\nand available for the use.\n",
"title": "Migration barriers for surface diffusion on a rigid lattice: challenges and solutions"
}
| null | null | null | null | true | null |
12516
| null |
Default
| null | null |
null |
{
"abstract": " The paper aims at finding acyclic graphs under a given set of constraints.\nMore specifically, given a propositional formula {\\phi} over edges of a\nfixed-size graph, the objective is to find a model of {\\phi} that corresponds\nto a graph that is acyclic. The paper proposes several encodings of the problem\nand compares them in an experimental evaluation using stateof-the-art SAT\nsolvers.\n",
"title": "On the Quest for an Acyclic Graph"
}
| null | null | null | null | true | null |
12517
| null |
Default
| null | null |
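
One standard way to encode acyclicity in CNF, plausibly among the family the record above compares, introduces precedence variables ord(u, v) and forces every selected edge to respect a strict order; a directed cycle then contradicts transitivity plus antisymmetry. The generator below (our illustration, DIMACS-style integer clauses) is a sketch of that idea, not necessarily one of the paper's exact encodings:

```python
from itertools import permutations

def acyclicity_clauses(n, edge_var):
    """CNF clauses forcing the selected edges of an n-vertex digraph acyclic.

    edge_var[(u, v)] is the DIMACS variable meaning "edge u->v is present".
    Auxiliary variables ord(u, v) ("u precedes v") are numbered after the
    edge variables; clauses encode antisymmetry, transitivity, and
    edge-implies-precedence.
    """
    next_var = max(edge_var.values())
    order = {}
    for u in range(n):
        for v in range(n):
            if u != v:
                next_var += 1
                order[(u, v)] = next_var
    clauses = []
    for u, v in permutations(range(n), 2):
        if u < v:  # antisymmetry: not (ord(u,v) and ord(v,u))
            clauses.append([-order[(u, v)], -order[(v, u)]])
        if (u, v) in edge_var:  # edge implies precedence
            clauses.append([-edge_var[(u, v)], order[(u, v)]])
    for u, v, w in permutations(range(n), 3):  # transitivity
        clauses.append([-order[(u, v)], -order[(v, w)], order[(u, w)]])
    return clauses, next_var

# Three candidate edges forming a triangle; no model may select all three.
edge_var = {(0, 1): 1, (1, 2): 2, (2, 0): 3}
clauses, top = acyclicity_clauses(3, edge_var)
print(len(clauses), "clauses,", top, "variables")
```

If edges 0->1, 1->2, 2->0 were all selected, transitivity would force ord(0, 2) while the third edge forces ord(2, 0), violating antisymmetry, so any SAT model breaks the cycle.
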
null |
{
"abstract": " This paper presents a robust matrix elastic net based canonical correlation\nanalysis (RMEN-CCA) for multiple view unsupervised learning problems, which\nemphasizes the combination of CCA and the robust matrix elastic net (RMEN) used\nas coupled feature selection. The RMEN-CCA leverages the strength of the RMEN\nto distill naturally meaningful features without any prior assumption and to\nmeasure effectively correlations between different 'views'. We can further\nemploy directly the kernel trick to extend the RMEN-CCA to the kernel scenario\nwith theoretical guarantees, which takes advantage of the kernel trick for\nhighly complicated nonlinear feature learning. Rather than simply incorporating\nexisting regularization minimization terms into CCA, this paper provides a new\nlearning paradigm for CCA and is the first to derive a coupled feature\nselection based CCA algorithm that guarantees convergence. More significantly,\nfor CCA, the newly-derived RMEN-CCA bridges the gap between measurement of\nrelevance and coupled feature selection. Moreover, it is nontrivial to tackle\ndirectly the RMEN-CCA by previous optimization approaches derived from its\nsophisticated model architecture. Therefore, this paper further offers a bridge\nbetween a new optimization problem and an existing efficient iterative\napproach. As a consequence, the RMEN-CCA can overcome the limitation of CCA and\naddress large-scale and streaming data problems. Experimental results on four\npopular competing datasets illustrate that the RMEN-CCA performs more\neffectively and efficiently than do state-of-the-art approaches.\n",
"title": "Robust Matrix Elastic Net based Canonical Correlation Analysis: An Effective Algorithm for Multi-View Unsupervised Learning"
}
| null | null | null | null | true | null |
12518
| null |
Default
| null | null |
null |
{
"abstract": " Visual Question Answering (VQA) has received a lot of attention over the past\ncouple of years. A number of deep learning models have been proposed for this\ntask. However, it has been shown that these models are heavily driven by\nsuperficial correlations in the training data and lack compositionality -- the\nability to answer questions about unseen compositions of seen concepts. This\ncompositionality is desirable and central to intelligence. In this paper, we\npropose a new setting for Visual Question Answering where the test\nquestion-answer pairs are compositionally novel compared to training\nquestion-answer pairs. To facilitate developing models under this setting, we\npresent a new compositional split of the VQA v1.0 dataset, which we call\nCompositional VQA (C-VQA). We analyze the distribution of questions and answers\nin the C-VQA splits. Finally, we evaluate several existing VQA models under\nthis new setting and show that the performances of these models degrade by a\nsignificant amount compared to the original VQA setting.\n",
"title": "C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset"
}
| null | null | null | null | true | null |
12519
| null |
Default
| null | null |
null |
{
"abstract": " Wheeled planetary rovers such as the Mars Exploration Rovers (MERs) and Mars\nScience Laboratory (MSL) have provided unprecedented, detailed images of the\nMars surface. However, these rovers are large and are of high-cost as they need\nto carry sophisticated instruments and science laboratories. We propose the\ndevelopment of low-cost planetary rovers that are the size and shape of\ncantaloupes and that can be deployed from a larger rover. The rover named\nSphereX is 2 kg in mass, is spherical, holonomic and contains a hopping\nmechanism to jump over rugged terrain. A small low-cost rover complements a\nlarger rover, particularly to traverse rugged terrain or roll down a canyon,\ncliff or crater to obtain images and science data. While it may be a one-way\njourney for these small robots, they could be used tactically to obtain\nhigh-reward science data. The robot is equipped with a pair of stereo cameras\nto perform visual navigation and has room for a science payload. In this paper,\nwe analyze the design and development of a laboratory prototype. The results\nshow a promising pathway towards development of a field system.\n",
"title": "Spherical Planetary Robot for Rugged Terrain Traversal"
}
| null | null | null | null | true | null |
12520
| null |
Default
| null | null |
null |
{
"abstract": " We explore the sequential decision making problem where the goal is to\nestimate uniformly well a number of linear models, given a shared budget of\nrandom contexts independently sampled from a known distribution. The decision\nmaker must query one of the linear models for each incoming context, and\nreceives an observation corrupted by noise levels that are unknown, and depend\non the model instance. We present Trace-UCB, an adaptive allocation algorithm\nthat learns the noise levels while balancing contexts accordingly across the\ndifferent linear functions, and derive guarantees for simple regret in both\nexpectation and high-probability. Finally, we extend the algorithm and its\nguarantees to high dimensional settings, where the number of linear models\ntimes the dimension of the contextual space is higher than the total budget of\nsamples. Simulations with real data suggest that Trace-UCB is remarkably\nrobust, outperforming a number of baselines even when its assumptions are\nviolated.\n",
"title": "Active Learning for Accurate Estimation of Linear Models"
}
| null | null | null | null | true | null |
12521
| null |
Default
| null | null |
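
A simplified allocation rule in the spirit of Trace-UCB from the record above: estimate each instance's noise variance from least-squares residuals and route each incoming context to the instance whose variance estimate, inflated by an exploration bonus, is largest. The bonus form and constants here are our assumptions, not the paper's exact confidence terms:

```python
import numpy as np

def allocate_contexts(problems, d, budget, seed=0):
    """Adaptive allocation sketch in the spirit of Trace-UCB.

    `problems` is a list of (theta, sigma): querying instance i on context x
    returns y = x @ theta + sigma * noise. Contexts go to the instance whose
    residual-based noise-variance estimate plus an exploration bonus is
    largest, so noisier models receive more of the shared budget.
    """
    rng = np.random.default_rng(seed)
    m = len(problems)
    X = [[] for _ in range(m)]
    Y = [[] for _ in range(m)]
    counts = np.zeros(m)
    var_est = np.full(m, np.inf)              # optimistic until enough data
    for _ in range(budget):
        x = rng.normal(size=d)
        bonus = np.sqrt(2 * np.log(budget) / np.maximum(counts, 1))
        i = int(np.argmax(var_est + bonus))
        theta, sigma = problems[i]
        X[i].append(x)
        Y[i].append(x @ theta + sigma * rng.normal())
        counts[i] += 1
        if counts[i] > d + 1:                 # residual variance of least squares
            A, b = np.array(X[i]), np.array(Y[i])
            r = b - A @ np.linalg.lstsq(A, b, rcond=None)[0]
            var_est[i] = r @ r / (counts[i] - d)
    return counts

problems = [(np.ones(3), 0.1), (np.ones(3), 1.0)]
print(allocate_contexts(problems, d=3, budget=500))   # noisier model gets more
```
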
null |
{
"abstract": " One of the goals of 5G wireless systems stated by the NGMN alliance is to\nprovide moderate rates (50+ Mbps) everywhere and with very high reliability. We\nterm this service Ultra-Reliable Ubiquitous-Rate Communication (UR2C). This\npaper investigates the role of frequency reuse in supporting UR2C in the\ndownlink. To this end, two frequency reuse schemes are considered:\nuser-specific frequency reuse (FRu) and BS-specific frequency reuse (FRb). For\na given unit frequency channel, FRu reduces the number of serving user\nequipments (UEs), whereas FRb directly decreases the number of interfering base\nstations (BSs). This increases the distance from the interfering BSs and the\nsignal-to-interference ratio (SIR) attains ultra-reliability, e.g. 99% SIR\ncoverage at a randomly picked UE. The ultra-reliability is, however, achieved\nat the cost of the reduced frequency allocation, which may degrade overall\ndownlink rate. To fairly capture this reliability-rate tradeoff, we propose\nubiquitous rate defined as the maximum downlink rate whose required SIR can be\nachieved with ultra-reliability. By using stochastic geometry, we derive\nclosed-form ubiquitous rate as well as the optimal frequency reuse rules for\nUR2C.\n",
"title": "Revisiting Frequency Reuse towards Supporting Ultra-Reliable Ubiquitous-Rate Communication"
}
| null | null | null | null | true | null |
12522
| null |
Default
| null | null |
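
The coverage quantity in the record above, P(SIR > theta), can be estimated by Monte Carlo even without the paper's closed forms. In this sketch (our toy setup: nearest-BS association in a Poisson field, path-loss exponent alpha, no fading), a BS-specific reuse factor keeps each interferer on the user's channel with probability 1/reuse, lifting coverage at the cost of spectrum:

```python
import numpy as np

def sir_coverage(theta_db, reuse=1, lam=1.0, radius=10.0, alpha=4.0,
                 n_trials=20_000, seed=0):
    """Monte Carlo P(SIR > theta) for a typical user in a Poisson network.

    BSs form a Poisson point process of intensity lam in a disc; the user
    at the origin attaches to the nearest BS. Each interferer stays on the
    user's channel with probability 1/reuse (BS-specific frequency reuse).
    """
    rng = np.random.default_rng(seed)
    theta = 10 ** (theta_db / 10)
    covered = 0
    for _ in range(n_trials):
        n = rng.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue                                   # no BS: not covered
        d = np.sort(radius * np.sqrt(rng.random(n)))   # uniform in the disc
        signal = d[0] ** -alpha                        # nearest BS serves
        on = rng.random(n - 1) < 1 / reuse             # thinned interferers
        interference = np.sum(d[1:][on] ** -alpha)
        if interference == 0 or signal / interference > theta:
            covered += 1
    return covered / n_trials

for reuse in (1, 3):
    print(reuse, sir_coverage(theta_db=0.0, reuse=reuse))
```

The reliability-rate tradeoff in the abstract then amounts to multiplying the per-channel rate by the 1/reuse spectrum share while coverage rises with reuse.
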
null |
{
"abstract": " We consider the class of evolution equations that describe pseudo-spherical\nsurfaces of the form u\\_t = F (u, $\\partial$u/$\\partial$x, ..., $\\partial$^k\nu/$\\partial$x^k), k $\\ge$ 2 classified by Chern-Tenenblat. This class of\nequations is characterized by the property that to each solution of a\ndifferential equation within this class, there corresponds a 2-dimensional\nRiemannian metric of curvature-1. We investigate the following problem: given\nsuch a metric, is there a local isometric immersion in R 3 such that the\ncoefficients of the second fundamental form of the surface depend on a jet of\nfinite order of u? By extending our previous result for second order evolution\nequation to k-th order equations, we prove that there is only one type of\nequations that admit such an isometric immersion. We prove that the\ncoefficients of the second fundamental forms of the local isometric immersion\ndetermined by the solutions u are universal, i.e., they are independent of u.\nMoreover, we show that there exists a foliation of the domain of the parameters\nof the surface by straight lines with the property that the mean curvature of\nthe surface is constant along the images of these straight lines under the\nisometric immersion.\n",
"title": "Local isometric immersions of pseudo-spherical surfaces and k-th order evolution equations"
}
| null | null | null | null | true | null |
12523
| null |
Default
| null | null |
null |
{
"abstract": " Deep convolutional neural networks (CNNs) are becoming increasingly popular\nmodels to predict neural responses in visual cortex. However, contextual\neffects, which are prevalent in neural processing and in perception, are not\nexplicitly handled by current CNNs, including those used for neural prediction.\nIn primary visual cortex, neural responses are modulated by stimuli spatially\nsurrounding the classical receptive field in rich ways. These effects have been\nmodeled with divisive normalization approaches, including flexible models,\nwhere spatial normalization is recruited only to the degree responses from\ncenter and surround locations are deemed statistically dependent. We propose a\nflexible normalization model applied to mid-level representations of deep CNNs\nas a tractable way to study contextual normalization mechanisms in mid-level\ncortical areas. This approach captures non-trivial spatial dependencies among\nmid-level features in CNNs, such as those present in textures and other visual\nstimuli, that arise from tiling high order features, geometrically. We expect\nthat the proposed approach can make predictions about when spatial\nnormalization might be recruited in mid-level cortical areas. We also expect\nthis approach to be useful as part of the CNN toolkit, therefore going beyond\nmore restrictive fixed forms of normalization.\n",
"title": "Integrating Flexible Normalization into Mid-Level Representations of Deep Convolutional Neural Networks"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
12524
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we study the moments of central values of Hecke $L$-functions\nassociated with quadratic characters in $\\mq(i)$, and establish quantitative\nnon-vanishing result for the $L$-values.\n",
"title": "Moments and non-vanishing of Hecke $L$-functions with quadratic characters in $\\mathbb{Q}(i)$ at the central point"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12525
| null |
Validated
| null | null |
null |
{
"abstract": " Graphs are a commonly used construct for representing relationships between\nelements in complex high dimensional datasets. Many real-world phenomenon are\ndynamic in nature, meaning that any graph used to represent them is inherently\ntemporal. However, many of the machine learning models designed to capture\nknowledge about the structure of these graphs ignore this rich temporal\ninformation when creating representations of the graph. This results in models\nwhich do not perform well when used to make predictions about the future state\nof the graph -- especially when the delta between time stamps is not small. In\nthis work, we explore a novel training procedure and an associated unsupervised\nmodel which creates graph representations optimised to predict the future state\nof the graph. We make use of graph convolutional neural networks to encode the\ngraph into a latent representation, which we then use to train our temporal\noffset reconstruction method, inspired by auto-encoders, to predict a later\ntime point -- multiple time steps into the future. Using our method, we\ndemonstrate superior performance for the task of future link prediction\ncompared with none-temporal state-of-the-art baselines. We show our approach to\nbe capable of outperforming non-temporal baselines by 38% on a real world\ndataset.\n",
"title": "Temporal Graph Offset Reconstruction: Towards Temporally Robust Graph Representation Learning"
}
| null | null | null | null | true | null |
12526
| null |
Default
| null | null |
null |
{
"abstract": " This paper addresses the problem of decentralized tube-based nonlinear Model\nPredictive Control (NMPC) for a class of uncertain nonlinear continuous-time\nmulti-agent systems with additive and bounded disturbance. In particular, the\nproblem of robust navigation of a multi-agent system to predefined states of\nthe workspace while using only local information is addressed, under certain\ndistance and control input constraints. We propose a decentralized feedback\ncontrol protocol that consists of two terms: a nominal control input, which is\ncomputed online and is the outcome of a Decentralized Finite Horizon Optimal\nControl Problem (DFHOCP) that each agent solves at every sampling time, for its\nnominal system dynamics; and an additive state feedback law which is computed\noffline and guarantees that the real trajectories of each agent will belong to\na hyper-tube centered along the nominal trajectory, for all times. The volume\nof the hyper-tube depends on the upper bound of the disturbances as well as the\nbounds of the derivatives of the dynamics. In addition, by introducing certain\ndistance constraints, the proposed scheme guarantees that the initially\nconnected agents remain connected for all times. Under standard assumptions\nthat arise in nominal NMPC schemes, controllability assumptions as well as\ncommunication capabilities between the agents, we guarantee that the\nmulti-agent system is ISS (Input to State Stable) with respect to the\ndisturbances, for all initial conditions satisfying the state constraints.\nSimulation results verify the correctness of the proposed framework.\n",
"title": "Decentralized Tube-based Model Predictive Control of Uncertain Nonlinear Multi-Agent Systems"
}
| null | null | null | null | true | null |
12527
| null |
Default
| null | null |
null |
{
"abstract": " We discuss the understanding of geometry of the circle in ancient India, in\nterms of enunciation of various principles, constructions, applications etc.\nduring various phases of history and cultural contexts.\n",
"title": "Cognition of the circle in ancient India"
}
| null | null | null | null | true | null |
12528
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we use refined approximations for Chebyshev's\n$\\vartheta$-function to establish new explicit estimates for the prime counting\nfunction $\\pi(x)$, which improve the current best estimates for large values of\n$x$. As an application we find an upper bound for the number $H_0$ which is\ndefined to be the smallest positive integer so that Ramanujan's prime counting\ninequality holds for every $x \\geq H_0$.\n",
"title": "Estimates for $π(x)$ for large values of $x$ and Ramanujan's prime counting inequality"
}
| null | null | null | null | true | null |
12529
| null |
Default
| null | null |
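
Ramanujan's prime counting inequality referenced in the record above is $\pi(x)^2 < \frac{e x}{\log x} \pi(x/e)$, and $H_0$ is the smallest integer from which it always holds. A direct sieve-based check at small $x$ (far below the regime the paper's estimates address) can be sketched as follows:

```python
import bisect
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, v in enumerate(sieve) if v]

PRIMES = primes_up_to(1_000_000)

def pi(x):
    """Prime counting function pi(x) for real x up to the sieve limit."""
    return bisect.bisect_right(PRIMES, int(x))

def ramanujan_holds(x):
    """Check Ramanujan's inequality pi(x)^2 < (e*x / log x) * pi(x/e)."""
    return pi(x) ** 2 < (math.e * x / math.log(x)) * pi(x / math.e)

for x in (100, 1_000, 10_000, 100_000, 1_000_000):
    print(x, ramanujan_holds(x))
```
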
null |
{
"abstract": " Electrical forces are the background of all the interactions occurring in\nbiochemical systems. From here and by using a combination of ab-initio and\nad-hoc models, we introduce the first description of electric field profiles\nwith intrabond resolution to support a characterization of single bond forces\nattending to its electrical origin. This fundamental issue has eluded a\nphysical description so far. Our method is applied to describe hydrogen bonds\n(HB) in DNA base pairs. Numerical results reveal that base pairs in DNA could\nbe equivalent considering HB strength contributions, which challenges previous\ninterpretations of thermodynamic properties of DNA based on the assumption that\nAdenine/Thymine pairs are weaker than Guanine/Cytosine pairs due to the sole\ndifference in the number of HB. Thus, our methodology provides solid\nfoundations to support the development of extended models intended to go deeper\ninto the molecular mechanisms of DNA functioning.\n",
"title": "Unveiled electric profiles within hydrogen bonds suggest DNA base pairs with similar bond strengths"
}
| null | null | null | null | true | null |
12530
| null |
Default
| null | null |
null |
{
"abstract": " Bone tissue mechanical properties and trabecular microarchitecture are the\nmain factors that determine the biomechanical properties of cancellous bone.\nArtificial cancellous microstructures, typically described by a reduced number\nof geometrical parameters, can be designed to obtain a mechanical behavior\nmimicking that of natural bone. In this work, we assess the ability of the\nparameterized microstructure introduced by Kowalczyk (2006) to mimic the\nelastic response of cancellous bone. Artificial microstructures are compared\nwith actual bone samples in terms of elasticity matrices and their symmetry\nclasses. The capability of the parameterized microstructure to combine the\ndominant isotropic, hexagonal, tetragonal and orthorhombic symmetry classes in\nthe proportions present in the cancellous bone is shown. Based on this finding,\ntwo optimization approaches are devised to find the geometrical parameters of\nthe artificial microstructure that better mimics the elastic response of a\ntarget natural bone specimen: a Sequential Quadratic Programming algorithm that\nminimizes the norm of the difference between the elasticity matrices, and a\nPattern Search algorithm that minimizes the difference between the symmetry\nclass decompositions. The pattern search approach is found to produce the best\nresults. The performance of the method is demonstrated via analyses for 146\nbone samples.\n",
"title": "Mimetization of the elastic properties of cancellous bone via a parameterized cellular material"
}
| null | null |
[
"Physics"
] | null | true | null |
12531
| null |
Validated
| null | null |
null |
{
"abstract": " Temporal Pattern Mining (TPM) is the problem of mining predictive complex\ntemporal patterns from multivariate time series in a supervised setting. We\ndevelop a new method called the Fast Temporal Pattern Mining with Extended\nVertical Lists. This method utilizes an extension of the Apriori property which\nrequires a more complex pattern to appear within records only at places where\nall of its subpatterns are detected as well. The approach is based on a novel\ndata structure called the Extended Vertical List that tracks positions of the\nfirst state of the pattern inside records. Extensive computational results\nindicate that the new method performs significantly faster than the previous\nversion of the algorithm for TMP. However, the speed-up comes at the expense of\nmemory usage.\n",
"title": "Extended Vertical Lists for Temporal Pattern Mining from Multivariate Time Series"
}
| null | null | null | null | true | null |
12532
| null |
Default
| null | null |
null |
{
"abstract": " We propose PowerAlert, an efficient external integrity checker for untrusted\nhosts. Current attestation systems suffer from shortcomings in requiring\ncomplete checksum of the code segment, being static, use of timing information\nsourced from the untrusted machine, or use of timing information with high\nerror (network round trip time). We address those shortcomings by (1) using\npower measurements from the host to ensure that the checking code is executed\nand (2) checking a subset of the kernel space over a long period of time. We\ncompare the power measurement against a learned power model of the execution of\nthe machine and validate that the execution was not tampered. Finally, power\ndiversifies the integrity checking program to prevent the attacker from\nadapting. We implement a prototype of PowerAlert using Raspberry pi and\nevaluate the performance of the integrity checking program generation. We model\nthe interaction between PowerAlert and an attacker as a game. We study the\neffectiveness of the random initiation strategy in deterring the attacker. The\nstudy shows that \\power forces the attacker to trade-off stealthiness for the\nrisk of detection, while still maintaining an acceptable probability of\ndetection given the long lifespan of stealthy attacks.\n",
"title": "PowerAlert: An Integrity Checker using Power Measurement"
}
| null | null | null | null | true | null |
12533
| null |
Default
| null | null |
null |
{
"abstract": " It is an open question whether the linear extension complexity of the\nCartesian product of two polytopes P, Q is the sum of the extension\ncomplexities of P and Q. We give an affirmative answer to this question for the\ncase that one of the two polytopes is a pyramid.\n",
"title": "Extension complexities of Cartesian products involving a pyramid"
}
| null | null | null | null | true | null |
12534
| null |
Default
| null | null |
null |
{
"abstract": " In this work we explored building automatic speech recognition models for\ntranscribing doctor patient conversation. We collected a large scale dataset of\nclinical conversations ($14,000$ hr), designed the task to represent the real\nword scenario, and explored several alignment approaches to iteratively improve\ndata quality. We explored both CTC and LAS systems for building speech\nrecognition models. The LAS was more resilient to noisy data and CTC required\nmore data clean up. A detailed analysis is provided for understanding the\nperformance for clinical tasks. Our analysis showed the speech recognition\nmodels performed well on important medical utterances, while errors occurred in\ncausal conversations. Overall we believe the resulting models can provide\nreasonable quality in practice.\n",
"title": "Speech recognition for medical conversations"
}
| null | null | null | null | true | null |
12535
| null |
Default
| null | null |
null |
{
"abstract": " Fallback authentication is used to retrieve forgotten passwords. Security\nquestions are one of the main techniques used to conduct fallback\nauthentication. In this paper, we propose a serious game design that uses\nsystem-generated security questions with the aim of improving the usability of\nfallback authentication. For this purpose, we adopted the popular picture-based\n\"4 Pics 1 word\" mobile game. This game was selected because of its use of\npictures and cues, which previous psychology research found to be crucial to\naid memorability. This game asks users to pick the word that relates to the\ngiven pictures. We then customized this game by adding features which help\nmaximize the following memory retrieval skills: (a) verbal cues - by providing\nhints with verbal descriptions, (b) spatial cues - by maintaining the same\norder of pictures, (c) graphical cues - by showing 4 images for each challenge,\n(d) interactivity/engaging nature of the game.\n",
"title": "Changing users' security behaviour towards security questions: A game based learning approach"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12536
| null |
Validated
| null | null |
null |
{
"abstract": " Web archiving services play an increasingly important role in today's\ninformation ecosystem, by ensuring the continuing availability of information,\nor by deliberately caching content that might get deleted or removed. Among\nthese, the Wayback Machine has been proactively archiving, since 2001, versions\nof a large number of Web pages, while newer services like archive.is allow\nusers to create on-demand snapshots of specific Web pages, which serve as time\ncapsules that can be shared across the Web. In this paper, we present a\nlarge-scale analysis of Web archiving services and their use on social media,\nshedding light on the actors involved in this ecosystem, the content that gets\narchived, and how it is shared. We crawl and study: 1) 21M URLs from\narchive.is, spanning almost two years, and 2) 356K archive.is plus 391K Wayback\nMachine URLs that were shared on four social networks: Reddit, Twitter, Gab,\nand 4chan's Politically Incorrect board (/pol/) over 14 months. We observe that\nnews and social media posts are the most common types of content archived,\nlikely due to their perceived ephemeral and/or controversial nature. Moreover,\nURLs of archiving services are extensively shared on \"fringe\" communities\nwithin Reddit and 4chan to preserve possibly contentious content. Lastly, we\nfind evidence of moderators nudging or even forcing users to use archives,\ninstead of direct links, for news sources with opposing ideologies, potentially\ndepriving them of ad revenue.\n",
"title": "Understanding Web Archiving Services and Their (Mis)Use on Social Media"
}
| null | null | null | null | true | null |
12537
| null |
Default
| null | null |
null |
{
"abstract": " Biclustering techniques have been widely used to identify homogeneous\nsubgroups within large data matrices, such as subsets of genes similarly\nexpressed across subsets of patients. Mining a max-sum sub-matrix is a related\nbut distinct problem for which one looks for a (non-necessarily contiguous)\nrectangular sub-matrix with a maximal sum of its entries. Le Van et al. (Ranked\nTiling, 2014) already illustrated its applicability to gene expression analysis\nand addressed it with a constraint programming (CP) approach combined with\nlarge neighborhood search (CP-LNS). In this work, we exhibit some key\nproperties of this NP-hard problem and define a bounding function such that\nlarger problems can be solved in reasonable time. Two different algorithms are\nproposed in order to exploit the highlighted characteristics of the problem: a\nCP approach with a global constraint (CPGC) and mixed integer linear\nprogramming (MILP). Practical experiments conducted both on synthetic and real\ngene expression data exhibit the characteristics of these approaches and their\nrelative benefits over the original CP-LNS method. Overall, the CPGC approach\ntends to be the fastest to produce a good solution. Yet, the MILP formulation\nis arguably the easiest to formulate and can also be competitive.\n",
"title": "Mining a Sub-Matrix of Maximal Sum"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12538
| null |
Validated
| null | null |
null |
{
"abstract": " Deep learning methods achieve state-of-the-art performance in many\napplication scenarios. Yet, these methods require a significant amount of\nhyperparameters tuning in order to achieve the best results. In particular,\ntuning the learning rates in the stochastic optimization process is still one\nof the main bottlenecks. In this paper, we propose a new stochastic gradient\ndescent procedure for deep networks that does not require any learning rate\nsetting. Contrary to previous methods, we do not adapt the learning rates nor\nwe make use of the assumed curvature of the objective function. Instead, we\nreduce the optimization process to a game of betting on a coin and propose a\nlearning-rate-free optimal algorithm for this scenario. Theoretical convergence\nis proven for convex and quasi-convex functions and empirical evidence shows\nthe advantage of our algorithm over popular stochastic gradient algorithms.\n",
"title": "Training Deep Networks without Learning Rates Through Coin Betting"
}
| null | null | null | null | true | null |
12539
| null |
Default
| null | null |
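
The coin-betting procedure in the record above can be sketched per coordinate in the style of the paper's COCOB-Backprop update, simplified by us: each coordinate bets a fraction of its accumulated "wealth" on the sign of the negative gradient, and no learning rate is ever set. The constant alpha and the toy objective are our assumptions:

```python
import numpy as np

class CoinBettingSGD:
    """Per-coordinate coin-betting optimizer (simplified COCOB-style sketch).

    L tracks the largest observed |gradient|, G their absolute sum, S the
    signed sum, and `reward` the wealth won so far; alpha stabilizes early
    updates. No step size appears anywhere.
    """
    def __init__(self, w0, alpha=100.0):
        self.w0 = w0.astype(float).copy()
        self.w = self.w0.copy()
        self.alpha = alpha
        self.L = np.zeros_like(self.w)
        self.G = np.zeros_like(self.w)
        self.S = np.zeros_like(self.w)
        self.reward = np.zeros_like(self.w)

    def step(self, grad):
        self.L = np.maximum(self.L, np.abs(grad))
        self.G += np.abs(grad)
        self.reward = np.maximum(self.reward - (self.w - self.w0) * grad, 0.0)
        self.S += grad
        denom = self.L * np.maximum(self.G + self.L, self.alpha * self.L)
        safe = np.where(denom > 0, denom, 1.0)   # avoid 0/0 before any update
        self.w = self.w0 - self.S / safe * (self.L + self.reward)
        return self.w

# Noisy quadratic: minimize 0.5 * ||w - 3||^2 from noisy gradients.
rng = np.random.default_rng(0)
opt = CoinBettingSGD(np.zeros(4))
for _ in range(10_000):
    opt.step((opt.w - 3.0) + 0.1 * rng.normal(size=4))
print(opt.w)   # should end up close to [3, 3, 3, 3]
```
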
null |
{
"abstract": " This letter studies joint transmit beamforming and antenna selection at a\nsecondary base station (BS) with multiple primary users (PUs) in an underlay\ncognitive radio multiple-input single-output broadcast channel. The objective\nis to maximize the sum rate subject to the secondary BS transmit power, minimum\nrequired rates for secondary users, and PUs' interference power constraints.\nThe utility function of interest is nonconcave and the involved constraints are\nnonconvex, so this problem is hard to solve. Nevertheless, we propose a new\niterative algorithm that finds local optima at the least. We use an inner\napproximation method to construct and solve a simple convex quadratic program\nof moderate dimension at each iteration of the proposed algorithm. Simulation\nresults indicate that the proposed algorithm converges quickly and outperforms\nexisting approaches.\n",
"title": "Joint Beamforming and Antenna Selection for Sum Rate Maximization in Cognitive Radio Networks"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12540
| null |
Validated
| null | null |
null |
{
"abstract": " We propose local segmentation of multiple sequences sharing a common time- or\nlocation-index, building upon the single sequence local segmentation methods of\nNiu and Zhang (2012) and Fang, Li and Siegmund (2016). We also propose reverse\nsegmentation of multiple sequences that is new even in the single sequence\ncontext. We show that local segmentation estimates change-points consistently\nfor both single and multiple sequences, and that both methods proposed here\ndetect signals well, with the reverse segmentation method outperforming a large\nnumber of known segmentation methods on a commonly used single sequence test\nscenario. We show that on a recent allele-specific copy number study involving\nmultiple cancer patients, the simultaneous segmentations of the DNA sequences\nof all the patients provide information beyond that obtained by segmentation of\nthe sequences one at a time.\n",
"title": "Multi-sequence segmentation via score and higher-criticism tests"
}
| null | null | null | null | true | null |
12541
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the effect of annealing temperature on the crystalline\nstructure and physical properties of tantalum-pentoxide films grown by radio\nfrequency magnetron sputtering. For this purpose, several tantalum films were\ndeposited and the Ta$_2$O$_5$ crystalline phase was induced by exposing the\nsamples to heat treatments in air in the temperature range from (575 to\n1000)$^\\circ$C. Coating characterization was performed using X-ray diffraction,\nscanning electron microscopy, Raman spectroscopy and UV-VIS spectroscopy. By\nX-ray diffraction analysis we found that a hexagonal Ta$_2$O$_5$ phase\ngenerates at temperatures above $675^\\circ$C. As the annealing temperature\nraises, we observe peak sharpening and new peaks in the corresponding\ndiffraction patterns indicating a possible structural transition from hexagonal\nto orthorhombic. The microstructure of the films starts with flake-like\nstructures formed on the surface and evolves, as the temperature is further\nincreased, to round grains. We found out that, according to the features\nexhibited in the corresponding spectra, Raman spectroscopy can be sensitive\nenough to discriminate between the orthorhombic and hexagonal phases of\nTa$_2$O$_5$. Finally, as the films crystallize the magnitude of the optical\nband gap increases from 2.4 eV to the typical reported value of 3.8 eV.\n",
"title": "Evidence for structural transition in crystalline tantalum pentoxide films grown by RF magnetron sputtering"
}
| null | null | null | null | true | null |
12542
| null |
Default
| null | null |
null |
{
"abstract": " This article discusses the relationship between emergence and reductionism\nfrom the perspective of a condensed matter physicist. Reductionism and\nemergence play an intertwined role in the everyday life of the physicist, yet\nwe rarely stop to contemplate their relationship: indeed, the two are often\nregarded as conflicting world-views of science. I argue that in practice, they\ncompliment one-another, forming an awkward alliance in a fashion envisioned by\nthe Renaissance scientist, Francis Bacon. Looking at the historical record in\nclassical and quantum physics, I discuss how emergence fits into a reductionist\nview of nature. Often, a deep understanding of reductionist physics depends on\nthe understanding of its emergent consequences. Thus the concept of energy was\nunknown to Newton, Leibnitz, Lagrange or Hamilton, because they did not\nunderstand heat. Similarly, the understanding of the weak force awaited an\nunderstanding of the Meissner effect in superconductivity. Emergence can thus\nbe likened to an encrypted consequence of reductionism. Taking examples from\ncurrent research, including topological insulators and strange metals, I show\nthat the convection between emergence and reductionism continues to provide a\npowerful driver for frontier scientific research, linking the lab with the\ncosmos.\n",
"title": "Emergence and Reductionism: an awkward Baconian alliance"
}
| null | null |
[
"Physics"
] | null | true | null |
12543
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, an optimized efficient VLSI architecture of a pipeline Fast\nFourier transform (FFT) processor capable of producing the reverse output order\nsequence is presented. Paper presents Radix-2 multipath delay architecture for\nFFT calculation. The implementation of FFT in hardware is very critical because\nfor calculation of FFT number of butterfly operations i.e. number of\nmultipliers requires due to which hardware gets increased means indirectly cost\nof hardware is automatically gets increased. Also multiplier operations are\nslow that's why it limits the speed of operation of architecture. The optimized\nVLSI implementation of FFT algorithm is presented in this paper. Here\narchitecture is pipelined to optimize it and to increase the speed of\noperation. Also to increase the speed of operation 2 levels parallel processing\nis used.\n",
"title": "Pipelined Parallel FFT Architecture"
}
| null | null | null | null | true | null |
12544
| null |
Default
| null | null |
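
The butterfly operations whose multiplier count drives the hardware cost discussed in the record above are easiest to see in a recursive radix-2 decimation-in-time FFT. This software sketch only illustrates the dataflow that a pipelined architecture would map onto stages; it is not the paper's VLSI design:

```python
import cmath

def fft_radix2(x):
    """Recursive radix-2 decimation-in-time FFT (len(x) a power of 2).

    Each level combines even/odd half-transforms through butterflies:
    out[k] = even[k] + tw*odd[k], out[k + n/2] = even[k] - tw*odd[k].
    A pipelined design maps one such level onto each pipeline stage.
    """
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle multiply
        out[k] = even[k] + tw             # butterfly: upper output
        out[k + n // 2] = even[k] - tw    # butterfly: lower output
    return out

x = [1, 2, 3, 4, 0, 0, 0, 0]
print([round(abs(v), 3) for v in fft_radix2(x)])
```

Each of the log2(n) levels needs n/2 complex multiplications, which is exactly the multiplier budget a hardware implementation tries to share or pipeline.
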
null |
{
"abstract": " We give rather simple answers to two long-standing questions in real-analytic\ngeometry, on global smoothing of a subanalytic set, and on transformation of a\nproper real-analytic mapping to a mapping with equidimensional fibres by global\nblowings-up of the target. These questions are related: a positive answer to\nthe second can be used to reduce the first to the simpler semianalytic case. We\nshow that the second question has a negative answer, in general, and that the\nfirst problem nevertheless has a positive solution.\n",
"title": "Global smoothing of a subanalytic set"
}
| null | null | null | null | true | null |
12545
| null |
Default
| null | null |
null |
{
"abstract": " We present a general framework, the coupled compound Poisson factorization\n(CCPF), to capture the missing-data mechanism in extremely sparse data sets by\ncoupling a hierarchical Poisson factorization with an arbitrary data-generating\nmodel. We derive a stochastic variational inference algorithm for the resulting\nmodel and, as examples of our framework, implement three different\ndata-generating models---a mixture model, linear regression, and factor\nanalysis---to robustly model non-random missing data in the context of\nclustering, prediction, and matrix factorization. In all three cases, we test\nour framework against models that ignore the missing-data mechanism on large\nscale studies with non-random missing data, and we show that explicitly\nmodeling the missing-data mechanism substantially improves the quality of the\nresults, as measured using data log likelihood on a held-out test set.\n",
"title": "Coupled Compound Poisson Factorization"
}
| null | null | null | null | true | null |
12546
| null |
Default
| null | null |
null |
{
"abstract": " Global and partial synchronization are the two distinctive forms of\nsynchronization in coupled oscillators and have been well studied in the past\ndecades. Recent attention on synchronization is focused on the chimera state\n(CS) and explosive synchronization (ES), but little attention has been paid to\ntheir relationship. We here study this topic by presenting a model to bridge\nthese two phenomena, which consists of two groups of coupled oscillators and\nits coupling strength is adaptively controlled by a local order parameter. We\nfind that this model displays either CS or ES in two limits. In between the two\nlimits, this model exhibits both CS and ES, where CS can be observed for a\nfixed coupling strength and ES appears when the coupling is increased\nadiabatically. Moreover, we show both theoretically and numerically that there\nare a variety of CS basin patterns for the case of identical oscillators,\ndepending on the distributions of both the initial order parameters and the\ninitial average phases. This model suggests a way to easily observe CS, in\ncontrast to others models having some (weak or strong) dependence on initial\nconditions.\n",
"title": "A model bridging chimera state and explosive synchronization"
}
| null | null | null | null | true | null |
12547
| null |
Default
| null | null |
null |
{
"abstract": " Goals are results of pin-point shots and it is a pivotal decision in soccer\nwhen, how and where to shoot. The main contribution of this study is two-fold.\nAt first, after showing that there exists high spatial correlation in the data\nof shots across games, we introduce a spatial process in the error structure to\nmodel the probability of conversion from a shot depending on positional and\nsituational covariates. The model is developed using a full Bayesian framework.\nSecondly, based on the proposed model, we define two new measures that can\nappropriately quantify the impact of an individual in soccer, by evaluating the\npositioning senses and shooting abilities of the players. As a practical\napplication, the method is implemented on Major League Soccer data from 2016/17\nseason.\n",
"title": "Spatial modeling of shot conversion in soccer to single out goalscoring ability"
}
| null | null | null | null | true | null |
12548
| null |
Default
| null | null |
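The row above describes a Bayesian model with a spatial process in the error structure for shot conversion. One plausible form of such a model, written here as an assumption for illustration rather than the paper's exact specification, is a binary-response regression with a Gaussian-process spatial term:

```latex
\[
  \Pr(\text{goal}_i = 1 \mid x_i, s_i)
    = \Phi\!\left(x_i^{\top}\beta + w(s_i)\right),
  \qquad
  w(\cdot) \sim \mathcal{GP}\!\left(0,\; \sigma^2
    \exp\!\left(-\phi \,\lVert s - s' \rVert\right)\right),
\]
```

where $x_i$ collects the positional and situational covariates of shot $i$ and $s_i$ is its pitch location; the spatial term $w$ captures the correlation of shots across nearby locations.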
null |
{
"abstract": " This report introduces and investigates a family of metrics on sets of\npointed Kripke models. The metrics are generalizations of the Hamming distance\napplicable to countably infinite binary strings and, by extension, logical\ntheories or semantic structures. We first study the topological properties of\nthe resulting metric spaces. A key result provides sufficient conditions for\nspaces having the Stone property, i.e., being compact, totally disconnected and\nHausdorff. Second, we turn to mappings, where it is shown that a widely used\ntype of model transformations, product updates, give rise to continuous maps in\nthe induced topology.\n",
"title": "Metrics for Formal Structures, with an Application to Kripke Models and their Dynamics"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12549
| null |
Validated
| null | null |
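The report above generalizes the Hamming distance to countably infinite binary strings. A natural member of such a metric family (the specific weights are an assumption for illustration) is

```latex
\[
  d_w(x, y) \;=\; \sum_{n=1}^{\infty} w_n \,\lvert x_n - y_n \rvert,
  \qquad w_n > 0, \quad \sum_{n=1}^{\infty} w_n < \infty,
\]
```

for instance with $w_n = 2^{-n}$; summability of the weights keeps the distance finite on all pairs of strings, while the choice of weights tunes the induced topology.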
null |
{
"abstract": " Surface plasmon polariton, hyberbolic dispersion of energy and momentum, and\nemission interference provide opportunities to control photoluminescence\nproperties. However, the interplays between these regimes need to be understood\nto take advantage of them in optoelectronic applications. Here, we investigate\nbroadband variations induced by hyperbolic metamaterial (HMM) multilayer\nnanostructures on the spontaneous emission of selected organic chromophores.\nExperimental and calculated spontaneous emission lifetimes are shown to vary\nnon-monotonously near HMM interfaces. With the SPP and interference dominant\nregimes. With the HMM number of pairs used as the analysis parameter, the\nlifetime is shown to be independent of the number of pairs in the surface\nplasmon polaritons, and emission interference dominant regimes, while it\ndecreases in the Hyperbolic Dispersion dominant regime. We also show that the\nspontaneous emission lifetime is similarly affected by transverse positive and\ntransverse negative HMMs. This work has broad implications on the rational\ndesign of functional photonic surfaces to control the luminescence of\nsemiconductor chromophores.\n",
"title": "Hyperbolic Dispersion Dominant Regime Identified through Spontaneous Emission Variations near Metamaterial Interfaces"
}
| null | null | null | null | true | null |
12550
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the ramifications of the Legendrian satellite construction on\nthe relation of Lagrangian cobordism between Legendrian knots. Under a simple\nhypothesis, we construct a Lagrangian concordance between two Legendrian\nsatellites by stacking up a sequence of elementary cobordisms. This\nconstruction narrows the search for \"non-decomposable\" Lagrangian cobordisms\nand yields new families of decomposable Lagrangian slice knots. Finally, we\nshow that the maximum Thurston-Bennequin number of a smoothly slice knot\nprovides an obstruction to any Legendrian satellite of that knot being\nLagrangian slice.\n",
"title": "Legendrian Satellites and Decomposable Concordances"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12551
| null |
Validated
| null | null |
null |
{
"abstract": " Releasing full data records is one of the most challenging problems in data\nprivacy. On the one hand, many of the popular techniques such as data\nde-identification are problematic because of their dependence on the background\nknowledge of adversaries. On the other hand, rigorous methods such as the\nexponential mechanism for differential privacy are often computationally\nimpractical to use for releasing high dimensional data or cannot preserve high\nutility of original data due to their extensive data perturbation.\nThis paper presents a criterion called plausible deniability that provides a\nformal privacy guarantee, notably for releasing sensitive datasets: an output\nrecord can be released only if a certain amount of input records are\nindistinguishable, up to a privacy parameter. This notion does not depend on\nthe background knowledge of an adversary. Also, it can efficiently be checked\nby privacy tests. We present mechanisms to generate synthetic datasets with\nsimilar statistical properties to the input data and the same format. We study\nthis technique both theoretically and experimentally. A key theoretical result\nshows that, with proper randomization, the plausible deniability mechanism\ngenerates differentially private synthetic data. We demonstrate the efficiency\nof this generative technique on a large dataset; it is shown to preserve the\nutility of original data with respect to various statistical analysis and\nmachine learning measures.\n",
"title": "Plausible Deniability for Privacy-Preserving Data Synthesis"
}
| null | null | null | null | true | null |
12552
| null |
Default
| null | null |
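The plausible-deniability criterion above admits an efficient privacy test: release a synthetic record only if enough input records could plausibly have generated it. The sketch below is a minimal reading of that test; the function name, the generative density `p_y_given_x`, and the default parameters are assumptions, not the paper's mechanism.

```python
# Plausible-deniability check: y is releasable only if at least k inputs
# are indistinguishable as its plausible generators, up to a factor gamma.
def plausibly_deniable(y, dataset, p_y_given_x, k=10, gamma=2.0):
    probs = [p_y_given_x(y, x) for x in dataset]
    ref = max(probs)
    if ref == 0.0:
        return False            # no record plausibly generates y
    plausible = sum(1 for p in probs if p >= ref / gamma)
    return plausible >= k
```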
null |
{
"abstract": " Automatic testing is a widely adopted technique for improving software\nquality. Software developers add, remove and update test methods and test\nclasses as part of the software development process as well as during the\nevolution phase, following the initial release. In this work we conduct a large\nscale study of 61 popular open source projects and report the relationships we\nhave established between test maintenance, production code maintenance, and\nsemantic changes (e.g, statement added, method removed, etc.). performed in\ndevelopers' commits.\nWe build predictive models, and show that the number of tests in a software\nproject can be well predicted by employing code maintenance profiles (i.e., how\nmany commits were performed in each of the maintenance activities: corrective,\nperfective, adaptive). Our findings also reveal that more often than not,\ndevelopers perform code fixes without performing complementary test maintenance\nin the same commit (e.g., update an existing test or add a new one). When\ndevelopers do perform test maintenance, it is likely to be affected by the\nsemantic changes they perform as part of their commit.\nOur work is based on studying 61 popular open source projects, comprised of\nover 240,000 commits consisting of over 16,000,000 semantic change type\ninstances, performed by over 4,000 software engineers.\n",
"title": "The Co-Evolution of Test Maintenance and Code Maintenance through the lens of Fine-Grained Semantic Changes"
}
| null | null | null | null | true | null |
12553
| null |
Default
| null | null |
null |
{
"abstract": " Autonomous vehicles (AVs) are on the road. To safely and efficiently interact\nwith other road participants, AVs have to accurately predict the behavior of\nsurrounding vehicles and plan accordingly. Such prediction should be\nprobabilistic, to address the uncertainties in human behavior. Such prediction\nshould also be interactive, since the distribution over all possible\ntrajectories of the predicted vehicle depends not only on historical\ninformation, but also on future plans of other vehicles that interact with it.\nTo achieve such interaction-aware predictions, we propose a probabilistic\nprediction approach based on hierarchical inverse reinforcement learning (IRL).\nFirst, we explicitly consider the hierarchical trajectory-generation process of\nhuman drivers involving both discrete and continuous driving decisions. Based\non this, the distribution over all future trajectories of the predicted vehicle\nis formulated as a mixture of distributions partitioned by the discrete\ndecisions. Then we apply IRL hierarchically to learn the distributions from\nreal human demonstrations. A case study for the ramp-merging driving scenario\nis provided. The quantitative results show that the proposed approach can\naccurately predict both the discrete driving decisions such as yield or pass as\nwell as the continuous trajectories.\n",
"title": "Probabilistic Prediction of Interactive Driving Behavior via Hierarchical Inverse Reinforcement Learning"
}
| null | null | null | null | true | null |
12554
| null |
Default
| null | null |
null |
{
"abstract": " We explore the feasibility of using fast-slow asymptotic to eliminate the\ncomputational stiffness of the discrete-state, continuous-time deterministic\nMarkov chain models of ionic channels underlying cardiac excitability. We focus\non a Markov chain model of the fast sodium current, and investigate its\nasymptotic behaviour with respect to small parameters identified in different\nways.\n",
"title": "Fast-slow asymptotics for a Markov chain model of fast sodium current"
}
| null | null | null | null | true | null |
12555
| null |
Default
| null | null |
null |
{
"abstract": " The online sports gambling industry employs teams of data analysts to build\nforecast models that turn the odds at sports games in their favour. While\nseveral betting strategies have been proposed to beat bookmakers, from expert\nprediction models and arbitrage strategies to odds bias exploitation, their\nreturns have been inconsistent and it remains to be shown that a betting\nstrategy can outperform the online sports betting market. We designed a\nstrategy to beat football bookmakers with their own numbers. Instead of\nbuilding a forecasting model to compete with bookmakers predictions, we\nexploited the probability information implicit in the odds publicly available\nin the marketplace to find bets with mispriced odds. Our strategy proved\nprofitable in a 10-year historical simulation using closing odds, a 6-month\nhistorical simulation using minute to minute odds, and a 5-month period during\nwhich we staked real money with the bookmakers (we made code, data and models\npublicly available). Our results demonstrate that the football betting market\nis inefficient - bookmakers can be consistently beaten across thousands of\ngames in both simulated environments and real-life betting. We provide a\ndetailed description of our betting experience to illustrate how the sports\ngambling industry compensates these market inefficiencies with discriminatory\npractices against successful clients.\n",
"title": "Beating the bookies with their own numbers - and how the online sports betting market is rigged"
}
| null | null | null | null | true | null |
12556
| null |
Default
| null | null |
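The strategy above exploits the probability information implicit in published odds. A minimal sketch of that core idea follows: convert decimal odds to overround-free implied probabilities, form a consensus across bookmakers, and flag outcomes whose best available payout gives positive expected value. The numbers are illustrative, not the paper's calibrated thresholds.

```python
import numpy as np

def implied_probabilities(decimal_odds):
    """Convert decimal odds to probabilities, removing the bookmaker margin."""
    raw = 1.0 / np.asarray(decimal_odds, dtype=float)
    return raw / raw.sum()

# Odds from three bookmakers (rows) for one match: home / draw / away.
market_odds = np.array([[2.10, 3.40, 3.60],
                        [2.05, 3.50, 3.70],
                        [2.20, 3.30, 3.55]])
consensus = np.mean([implied_probabilities(o) for o in market_odds], axis=0)

# Candidate bets: best available payout beats the consensus probability.
offered = market_odds.max(axis=0)           # best odds per outcome
expected_value = consensus * offered - 1.0  # EV per unit stake
print("EV per outcome:", np.round(expected_value, 3))
```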
null |
{
"abstract": " This paper is based on the complete classification of evolutionary scenarios\nfor the Moran process with two strategies given by Taylor et al. (B. Math.\nBiol. 66(6): 1621--1644, 2004). Their classification is based on whether each\nstrategy is a Nash equilibrium and whether the fixation probability for a\nsingle individual of each strategy is larger or smaller than its value for\nneutral evolution. We improve on this analysis by showing that each\nevolutionary scenario is characterized by a definite graph shape for the\nfixation probability function. A second class of results deals with the\nbehavior of the fixation probability when the population size tends to\ninfinity. We develop asymptotic formulae that approximate the fixation\nprobability in this limit and conclude that some of the evolutionary scenarios\ncannot exist when the population size is large.\n",
"title": "Fixation probabilities for the Moran process in evolutionary games with two strategies: graph shapes and large population asymptotics"
}
| null | null | null | null | true | null |
12557
| null |
Default
| null | null |
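For reference, the fixation probability analyzed above is, for a single mutant in the standard birth-death formulation of the Moran process with transition rates $T_j^{\pm}$ at $j$ mutants,

```latex
\[
  \rho \;=\;
  \frac{1}{\,1 + \sum_{k=1}^{N-1} \prod_{j=1}^{k} T_j^{-}/T_j^{+}\,},
\]
```

with $\rho = 1/N$ for neutral evolution serving as the comparison baseline; the graph shapes discussed above are the shapes of the analogous fixation probability viewed as a function of the initial number of mutants.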
null |
{
"abstract": " In this paper we show how a deep-submicron FPGA can be modified to operate at\nextremely low temperatures through modifications in the supporting hardware and\nin the firmware programming it. Though FPGAs are not designed to operate at a\nfew Kelvin, it is possible to do so on virtue of the extremely high doping\nlevels found in deep-submicron CMOS technology nodes. First, any PCB component,\nthat does not conform with this requirement, is removed. Both the majority of\ndecoupling capacitor types and voltage regulators are not well behaved at\ncryogenic temperatures, asking for an ad-hoc solution to stabilize the FPGA\nsupply voltage, especially for sensitive applications. Therefore, we have\ndesigned a firmware that enforces a constant power consumption, so as to\nstabilize the supply voltage in the interior of the FPGA chip. The FPGA is\npowered with a supply at several meters distance, causing significant IR drop\nand thus fluctuations on the local supply voltage. To achieve the\nstabilization, the variation in digital logic speed, which directly corresponds\nto changes in supply voltage, is constantly measured and corrected for through\na tunable oscillator farm, implemented on the FPGA. The method is versatile and\nrobust, enabling seamless porting to other FPGA families and configurations.\n",
"title": "FPGA Design Techniques for Stable Cryogenic Operation"
}
| null | null | null | null | true | null |
12558
| null |
Default
| null | null |
null |
{
"abstract": " We propose a new algorithm for finite sum optimization which we call the\ncurvature-aided incremental aggregated gradient (CIAG) method. Motivated by the\nproblem of training a classifier for a d-dimensional problem, where the number\nof training data is $m$ and $m \\gg d \\gg 1$, the CIAG method seeks to\naccelerate incremental aggregated gradient (IAG) methods using aids from the\ncurvature (or Hessian) information, while avoiding the evaluation of matrix\ninverses required by the incremental Newton (IN) method. Specifically, our idea\nis to exploit the incrementally aggregated Hessian matrix to trace the full\ngradient vector at every incremental step, therefore achieving an improved\nlinear convergence rate over the state-of-the-art IAG methods. For strongly\nconvex problems, the fast linear convergence rate requires the objective\nfunction to be close to quadratic, or the initial point to be close to optimal\nsolution. Importantly, we show that running one iteration of the CIAG method\nyields the same improvement to the optimality gap as running one iteration of\nthe full gradient method, while the complexity is $O(d^2)$ for CIAG and $O(md)$\nfor the full gradient. Overall, the CIAG method strikes a balance between the\nhigh computation complexity incremental Newton-type methods and the slow IAG\nmethod. Our numerical results support the theoretical findings and show that\nthe CIAG method often converges with much fewer iterations than IAG, and\nrequires much shorter running time than IN when the problem dimension is high.\n",
"title": "Curvature-aided Incremental Aggregated Gradient Method"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12559
| null |
Validated
| null | null |
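A minimal sketch of the curvature-aided idea described above: store, for every component $f_i$, the gradient and Hessian at the last visited iterate, and trace the full gradient with a first-order Taylor correction. The cyclic component order, step size, and quadratic test problem are illustrative assumptions.

```python
import numpy as np

def ciag(grads, hessians, theta0, n_iters=300, step=0.05):
    """grads[i](x): gradient of f_i; hessians[i](x): Hessian of f_i."""
    m = len(grads)
    theta = theta0.copy()
    g_mem = np.array([g(theta) for g in grads])      # stored gradients
    H_mem = np.array([H(theta) for H in hessians])   # stored Hessians
    x_mem = np.tile(theta, (m, 1))                   # iterates at storage time
    for t in range(n_iters):
        i = t % m                                    # cyclic component pick
        g_mem[i], H_mem[i], x_mem[i] = grads[i](theta), hessians[i](theta), theta
        # Curvature-aided trace: sum_i [g_i(x_i) + H_i(x_i) (theta - x_i)].
        traced = g_mem.sum(0) + np.einsum('ijk,ik->j', H_mem, theta - x_mem)
        theta = theta - step * traced / m
    return theta

# Example: m quadratic components f_i(x) = 0.5 ||A_i x - b_i||^2.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 3, 3)); b = rng.normal(size=(5, 3))
grads = [lambda x, Ai=A[i], bi=b[i]: Ai.T @ (Ai @ x - bi) for i in range(5)]
hessians = [lambda x, Ai=A[i]: Ai.T @ Ai for i in range(5)]
print(ciag(grads, hessians, np.zeros(3)))
```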
null |
{
"abstract": " [abridged] In the typical giant-impact scenario for the Moon formation most\nof the Moon's material originates from the impactor. Any Earth-impactor\ncomposition difference should, therefore, correspond to a comparable Earth-Moon\ncomposition difference. Analysis of Moon rocks shows a close Earth-Moon\ncomposition similarity, posing a challenge for the giant-impact scenario, given\nthat impactors were thought to significantly differ in composition from the\nplanets they impact. Here we use a large set of 140 simulations to show that\nthe composition of impactors could be very similar to that of the planets they\nimpact; in $4.9\\%$-$18.2\\%$ ($1.9\\%$-$6.7\\%$) of the cases the resulting\ncomposition of the Moon is consistent with the observations of\n$\\Delta^{17}O<15$ ($\\Delta^{17}O<6$ ppm). These findings suggest that the\nEarth-Moon composition similarity could be resolved as to arise from the\nprimordial Earth-impactor composition similarity. Note that although we find\nthe likelihood for the suggested competing model of very high mass-ratio\nimpacts (producing significant Earth-impactor composition mixing) is comparable\n($<6.7\\%$), this scenario also requires additional fine-tuned requirements of a\nvery fast spinning Earth. Using the same simulations we also explore the\ncomposition of giant-impact formed Mars-moons as well as Vesta-like asteroids.\nWe find that the Mars-moon composition difference should be large, but smaller\nthan expected if the moons are captured asteroids. Finally, we find that the\nleft-over planetesimals ('asteroids') in our simulations are frequently\nscattered far away from their initial positions, thus potentially explaining\nthe mismatch between the current position and composition of the Vesta\nasteroid.\n",
"title": "The composition of Solar system asteroids and Earth/Mars moons, and the Earth-Moon composition similarity"
}
| null | null |
[
"Physics"
] | null | true | null |
12560
| null |
Validated
| null | null |
null |
{
"abstract": " Learning a regression function using censored or interval-valued output data\nis an important problem in fields such as genomics and medicine. The goal is to\nlearn a real-valued prediction function, and the training output labels\nindicate an interval of possible values. Whereas most existing algorithms for\nthis task are linear models, in this paper we investigate learning nonlinear\ntree models. We propose to learn a tree by minimizing a margin-based\ndiscriminative objective function, and we provide a dynamic programming\nalgorithm for computing the optimal solution in log-linear time. We show\nempirically that this algorithm achieves state-of-the-art speed and prediction\naccuracy in a benchmark of several data sets.\n",
"title": "Maximum Margin Interval Trees"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12561
| null |
Validated
| null | null |
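A minimal sketch of the margin-based objective described above: a hinge loss that vanishes once the prediction sits at least a margin inside the target interval, with infinite endpoints encoding one-sided censoring. The margin value is an assumption; the paper's contribution is a dynamic program that optimizes leaf values exactly in log-linear time.

```python
import numpy as np

def interval_hinge(pred, lower, upper, margin=1.0):
    """Zero iff lower + margin <= pred <= upper - margin."""
    left = np.maximum(0.0, (lower + margin) - pred)   # lower end violated
    right = np.maximum(0.0, pred - (upper - margin))  # upper end violated
    return left + right

# Two-sided, right-censored, and left-censored target intervals.
lo = np.array([1.0, -np.inf, 3.0])
hi = np.array([4.0, 2.0, np.inf])
print(interval_hinge(2.5, lo, hi))   # -> [0.  1.5  1.5]
```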
null |
{
"abstract": " The degree distribution is one of the most fundamental properties used in the\nanalysis of massive graphs. There is a large literature on graph sampling,\nwhere the goal is to estimate properties (especially the degree distribution)\nof a large graph through a small, random sample. The degree distribution\nestimation poses a significant challenge, due to its heavy-tailed nature and\nthe large variance in degrees.\nWe design a new algorithm, SADDLES, for this problem, using recent\nmathematical techniques from the field of sublinear algorithms. The SADDLES\nalgorithm gives provably accurate outputs for all values of the degree\ndistribution. For the analysis, we define two fatness measures of the degree\ndistribution, called the $h$-index and the $z$-index. We prove that SADDLES is\nsublinear in the graph size when these indices are large. A corollary of this\nresult is a provably sublinear algorithm for any degree distribution bounded\nbelow by a power law.\nWe deploy our new algorithm on a variety of real datasets and demonstrate its\nexcellent empirical behavior. In all instances, we get extremely accurate\napproximations for all values in the degree distribution by observing at most\n$1\\%$ of the vertices. This is a major improvement over the state-of-the-art\nsampling algorithms, which typically sample more than $10\\%$ of the vertices to\ngive comparable results. We also observe that the $h$ and $z$-indices of real\ngraphs are large, validating our theoretical analysis.\n",
"title": "Provable and practical approximations for the degree distribution using sublinear graph samples"
}
| null | null | null | null | true | null |
12562
| null |
Default
| null | null |
null |
{
"abstract": " A method is proposed to generate an optimal fit of a number of connected\nlinear trend segments onto time-series data. To be able to efficiently handle\nmany lines, the method employs a stochastic search procedure to determine\noptimal transition point locations. Traditional methods use exhaustive grid\nsearches, which severely limit the scale of the problems for which they can be\nutilized. The proposed approach is tried against time series with severe noise\nto demonstrate its robustness, and then it is applied to real medical data as\nan illustrative example.\n",
"title": "Efficient and Robust Polylinear Analysis of Noisy Time Series"
}
| null | null | null | null | true | null |
12563
| null |
Default
| null | null |
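A minimal sketch of the approach described above: represent connected segments with a linear-spline basis, and search over transition-point locations stochastically instead of on an exhaustive grid. The greedy hill climb below is a simplification of the paper's stochastic search procedure.

```python
import numpy as np

def piecewise_sse(x, y, knots):
    """SSE of a continuous piecewise-linear least-squares fit with the
    given interior transition points (linear-spline basis)."""
    cols = [np.ones_like(x), x] + [np.maximum(0.0, x - k) for k in np.sort(knots)]
    B = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return float(((B @ coef - y) ** 2).sum())

def fit_knots(x, y, n_knots=2, n_iter=2000, scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = x.min(), x.max()
    knots = rng.uniform(lo, hi, n_knots)
    best = piecewise_sse(x, y, knots)
    for _ in range(n_iter):
        cand = np.clip(knots + rng.normal(0.0, scale * (hi - lo), n_knots), lo, hi)
        sse = piecewise_sse(x, y, cand)
        if sse < best:                   # greedy accept; no annealing here
            knots, best = cand, sse
    return np.sort(knots), best

x = np.linspace(0.0, 10.0, 300)
y = np.interp(x, [0, 3, 6, 10], [0, 3, 1, 5])               # true polyline
y = y + np.random.default_rng(1).normal(0.0, 0.3, x.size)   # added noise
print(fit_knots(x, y))   # transition points recovered near 3 and 6
```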
null |
{
"abstract": " We consider the problem of optimizing heat transport through an\nincompressible fluid layer. Modeling passive scalar transport by\nadvection-diffusion, we maximize the mean rate of total transport by a\ndivergence-free velocity field. Subject to various boundary conditions and\nintensity constraints, we prove that the maximal rate of transport scales\nlinearly in the r.m.s. kinetic energy and, up to possible logarithmic\ncorrections, as the $1/3$rd power of the mean enstrophy in the advective\nregime. This makes rigorous a previous prediction on the near optimality of\nconvection rolls for energy-constrained transport. Optimal designs for\nenstrophy-constrained transport are significantly more difficult to describe:\nwe introduce a \"branching\" flow design with an unbounded number of degrees of\nfreedom and prove it achieves nearly optimal transport. The main technical tool\nbehind these results is a variational principle for evaluating the transport of\ncandidate designs. The principle admits dual formulations for bounding\ntransport from above and below. While the upper bound is closely related to the\n\"background method\", the lower bound reveals a connection between the optimal\ndesign problems considered herein and other apparently related model problems\nfrom mathematical materials science. These connections serve to motivate\ndesigns.\n",
"title": "On the optimal design of wall-to-wall heat transport"
}
| null | null | null | null | true | null |
12564
| null |
Default
| null | null |
null |
{
"abstract": " We consider the wave equation with a boundary condition of memory type. Under\nnatural conditions on the acoustic impedance $\\hat{k}$ of the boundary one can\ndefine a corresponding semigroup of contractions (Desch, Fasangova, Milota,\nProbst 2010). With the help of Tauberian theorems we establish energy decay\nrates via resolvent estimates on the generator $-\\mathcal{A}$ of the semigroup.\nWe reduce the problem of estimating the resolvent of $-\\mathcal{A}$ to the\nproblem of estimating the resolvent of the corresponding stationary problem.\nUnder not too strict additional assumptions on $\\hat{k}$ we establish an upper\nbound on the resolvent. For the wave equation on the interval or the disk we\nprove our estimates to be sharp.\n",
"title": "On the decay rate for the wave equation with viscoelastic boundary damping"
}
| null | null | null | null | true | null |
12565
| null |
Default
| null | null |
null |
{
"abstract": " We present an approach for a lightweight datatype-generic programming in\nObjective Caml programming language aimed at better code reuse. We show, that a\nlarge class of transformations usually expressed via recursive functions with\npattern matching can be implemented using the single per-type traversal\nfunction and the set of object-encoded transformations, which we call\ntransformation objects. Object encoding allows transformations to be modified,\ninherited and extended in a conventional object-oriented manner. However, the\ndata representation is kept untouched which preserves the ability to construct\nand pattern-match it in the usual way. Our approach equally works for regular\nand polymorphic variant types which makes it possible to combine data types and\ntheir transformations from statically typed and separately compiled components.\nWe also present an implementation which allows us to automatically derive most\nfunctionality from a slightly augmented type descriptions.\n",
"title": "Code Reuse With Transformation Objects"
}
| null | null | null | null | true | null |
12566
| null |
Default
| null | null |
null |
{
"abstract": " Many scientific and engineering challenges -- ranging from pharmacokinetic\ndrug dosage allocation and personalized medicine to marketing mix (4Ps)\nrecommendations -- require an understanding of the unobserved heterogeneity in\norder to develop the best decision making-processes. In this paper, we develop\na hypothesis test and the corresponding p-value for testing for the\nsignificance of the homogeneous structure in linear mixed models. A robust\nmatching moment construction is used for creating a test that adapts to the\nsize of the model sparsity. When unobserved heterogeneity at a cluster level is\nconstant, we show that our test is both consistent and unbiased even when the\ndimension of the model is extremely high. Our theoretical results rely on a new\nfamily of adaptive sparse estimators of the fixed effects that do not require\nconsistent estimation of the random effects. Moreover, our inference results do\nnot require consistent model selection. We showcase that moment matching can be\nextended to nonlinear mixed effects models and to generalized linear mixed\neffects models. In numerical and real data experiments, we find that the\ndeveloped method is extremely accurate, that it adapts to the size of the\nunderlying model and is decidedly powerful in the presence of irrelevant\ncovariates.\n",
"title": "Fixed effects testing in high-dimensional linear mixed models"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
12567
| null |
Validated
| null | null |
null |
{
"abstract": " This paper proves that every finite volume hyperbolic 3-manifold M contains a\nubiquitous collection of closed, immersed, quasi-Fuchsian surfaces. These\nsurfaces are ubiquitous in the sense that their preimages in the universal\ncover separate any pair of disjoint, non-asymptotic geodesic planes. The proof\nrelies in a crucial way on the corresponding theorem of Kahn and Markovic for\nclosed 3-manifolds. As a corollary of this result and a companion statement\nabout surfaces with cusps, we recover Wise's theorem that the fundamental group\nof M acts freely and cocompactly on a CAT(0) cube complex.\n",
"title": "Ubiquitous quasi-Fuchsian surfaces in cusped hyperbolic 3-manifolds"
}
| null | null | null | null | true | null |
12568
| null |
Default
| null | null |
null |
{
"abstract": " The cospark of a matrix is the cardinality of the sparsest vector in the\ncolumn space of the matrix. Computing the cospark of a matrix is well known to\nbe an NP hard problem. Given the sparsity pattern (i.e., the locations of the\nnon-zero entries) of a matrix, if the non-zero entries are drawn from\nindependently distributed continuous probability distributions, we prove that\nthe cospark of the matrix equals, with probability one, to a particular number\ntermed the generic cospark of the matrix. The generic cospark also equals to\nthe maximum cospark of matrices consistent with the given sparsity pattern. We\nprove that the generic cospark of a matrix can be computed in polynomial time,\nand offer an algorithm that achieves this.\n",
"title": "Generic Cospark of a Matrix Can Be Computed in Polynomial Time"
}
| null | null | null | null | true | null |
12569
| null |
Default
| null | null |
null |
{
"abstract": " Neural networks based vocoders, typically the WaveNet, have achieved\nspectacular performance for text-to-speech (TTS) in recent years. Although\nstate-of-the-art parallel WaveNet has addressed the issue of real-time waveform\ngeneration, there remains problems. Firstly, due to the noisy input signal of\nthe model, there is still a gap between the quality of generated and natural\nwaveforms. Secondly, a parallel WaveNet is trained under a distilled training\nframework, which makes it tedious to adapt a well trained model to a new\nspeaker. To address these two problems, this paper proposes an end-to-end\nadaptation method based on the generative adversarial network (GAN), which can\nreduce the computational cost for the training of new speaker adaptation. Our\nsubjective experiments shows that the proposed training method can further\nreduce the quality gap between generated and natural waveforms.\n",
"title": "Generative Adversarial Network based Speaker Adaptation for High Fidelity WaveNet Vocoder"
}
| null | null | null | null | true | null |
12570
| null |
Default
| null | null |
null |
{
"abstract": " In the last few years, contributions of the general public in scientific\nprojects has increased due to the advancement of communication and computing\ntechnologies. Internet played an important role in connecting scientists and\nvolunteers who are interested in participating in their scientific projects.\nHowever, despite potential benefits, only a limited number of crowdsourcing\nbased large-scale science (citizen science) projects have been deployed due to\nthe complexity involved in setting them up and running them. In this paper, we\npresent CitizenGrid - an online middleware platform which addresses security\nand deployment complexity issues by making use of cloud computing and\nvirtualisation technologies. CitizenGrid incentivises scientists to make their\nsmall-to-medium scale applications available as citizen science projects by: 1)\nproviding a directory of projects through a web-based portal that makes\napplications easy to discover; 2) providing flexibility to participate in,\nmonitor, and control multiple citizen science projects from a common interface;\n3) supporting diverse categories of citizen science projects. The paper\ndescribes the design, development and evaluation of CitizenGrid and its use\ncases.\n",
"title": "CitizenGrid: An Online Middleware for Crowdsourcing Scientific Research"
}
| null | null |
[
"Computer Science"
] | null | true | null |
12571
| null |
Validated
| null | null |
null |
{
"abstract": " In bounded smooth domains $\\Omega\\subset\\mathbb{R}^N$, $N\\in\\{2,3\\}$,\nconsidering the chemotaxis--fluid system\n\\[ \\begin{cases} \\begin{split} & n_t + u\\cdot \\nabla n &= \\Delta n - \\chi\n\\nabla \\cdot(\\frac{n}{c}\\nabla c) &\\\\ & c_t + u\\cdot \\nabla c &= \\Delta c - c +\nn &\\\\ & u_t + \\kappa (u\\cdot \\nabla) u &= \\Delta u + \\nabla P + n\\nabla \\Phi &\n\\end{split}\\end{cases} \\] with singular sensitivity, we prove global existence\nof classical solutions for given $\\Phi\\in C^2(\\bar{\\Omega})$, for $\\kappa=0$\n(Stokes-fluid) if $N=3$ and $\\kappa\\in\\{0,1\\}$ (Stokes- or Navier--Stokes\nfluid) if $N=2$ and under the condition that \\[\n0<\\chi<\\sqrt{\\frac{2}{N}}. \\]\n",
"title": "Singular sensitivity in a Keller-Segel-fluid system"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12572
| null |
Validated
| null | null |
null |
{
"abstract": " This paper deals with asymptotics for multiple-set linear canonical analysis\n(MSLCA). A definition of this analysis, that adapts the classical one to the\ncontext of Euclidean random variables, is given and properties of the related\ncanonical coefficients are derived. Then, estimators of the MSLCA's elements,\nbased on empirical covariance operators, are proposed and asymptotics for these\nestimators are obtained. More precisely, we prove their consistency and we\nobtain asymptotic normality for the estimator of the operator that gives MSLCA,\nand also for the estimator of the vector of canonical coefficients. These\nresults are then used to obtain a test for mutual non-correlation between the\ninvolved Euclidean random variables.\n",
"title": "Asymptotic theory of multiple-set linear canonical analysis"
}
| null | null | null | null | true | null |
12573
| null |
Default
| null | null |
null |
{
"abstract": " The rapid development of deep learning, a family of machine learning\ntechniques, has spurred much interest in its application to medical imaging\nproblems. Here, we develop a deep learning algorithm that can accurately detect\nbreast cancer on screening mammograms using an \"end-to-end\" training approach\nthat efficiently leverages training datasets with either complete clinical\nannotation or only the cancer status (label) of the whole image. In this\napproach, lesion annotations are required only in the initial training stage,\nand subsequent stages require only image-level labels, eliminating the reliance\non rarely available lesion annotations. Our all convolutional network method\nfor classifying screening mammograms attained excellent performance in\ncomparison with previous methods. On an independent test set of digitized film\nmammograms from Digital Database for Screening Mammography (DDSM), the best\nsingle model achieved a per-image AUC of 0.88, and four-model averaging\nimproved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On a\nvalidation set of full-field digital mammography (FFDM) images from the\nINbreast database, the best single model achieved a per-image AUC of 0.95, and\nfour-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity:\n96.1%). We also demonstrate that a whole image classifier trained using our\nend-to-end approach on the DDSM digitized film mammograms can be transferred to\nINbreast FFDM images using only a subset of the INbreast data for fine-tuning\nand without further reliance on the availability of lesion annotations. These\nfindings show that automatic deep learning methods can be readily trained to\nattain high accuracy on heterogeneous mammography platforms, and hold\ntremendous promise for improving clinical tools to reduce false positive and\nfalse negative screening mammography results.\n",
"title": "Deep Learning to Improve Breast Cancer Early Detection on Screening Mammography"
}
| null | null | null | null | true | null |
12574
| null |
Default
| null | null |
null |
{
"abstract": " In a Wireless Sensor Network (WSN), data manipulation and representation is a\ncrucial part and can take a lot of time to be developed from scratch. Although\nvarious visualization tools have been created for certain projects so far,\nthese tools can only be used in certain scenarios, due to their hard-coded\npacket formats and network's properties. To speed up the development process, a\nvisualization tool which can adapt to any kind of WSN is essentially necessary.\nFor this purpose, a general-purpose visualization tool - NViz, which can\nrepresent and visualize data for any kind of WSN, is proposed. NViz allows\nusers to set their network's properties and packet formats through XML files.\nBased on properties defined, users can choose the meaning of them and let NViz\nrepresents the data respectively. Furthermore, a better Replay mechanism, which\nlets researchers and developers debug their WSN easily, is also integrated in\nthis tool. NViz is designed based on a layered architecture which allows for\nclear and well-defined interrelationships and interfaces between each\ncomponent.\n",
"title": "Nviz - A General Purpse Visualization tool for Wireless Sensor Networks"
}
| null | null | null | null | true | null |
12575
| null |
Default
| null | null |
null |
{
"abstract": " The coherent optical response from 140~nm and 65~nm thick ZnO epitaxial\nlayers is studied using transient four-wave-mixing spectroscopy with picosecond\ntemporal resolution. Resonant excitation of neutral donor-bound excitons\nresults in two-pulse and three-pulse photon echoes. For the donor-bound A\nexciton (D$^0$X$_\\text{A}$) at temperature of 1.8~K we evaluate optical\ncoherence times $T_2=33-50$~ps corresponding to homogeneous linewidths of\n$13-19~\\mu$eV, about two orders of magnitude smaller as compared with the\ninhomogeneous broadening of the optical transitions. The coherent dynamics is\ndetermined mainly by the population decay with time $T_1=30-40$~ps, while pure\ndephasing is negligible in the studied high quality samples even for strong\noptical excitation. Temperature increase leads to a significant shortening of\n$T_2$ due to interaction with acoustic phonons. In contrast, the loss of\ncoherence of the donor-bound B exciton (D$^0$X$_\\text{B}$) is significantly\nfaster ($T_2=3.6$~ps) and governed by pure dephasing processes.\n",
"title": "Transient photon echoes from donor-bound excitons in ZnO epitaxial layers"
}
| null | null | null | null | true | null |
12576
| null |
Default
| null | null |
null |
{
"abstract": " We present the luminosity function of z=4 quasars based on the Hyper\nSuprime-Cam Subaru Strategic Program Wide layer imaging data in the g, r, i, z,\nand y bands covering 339.8 deg^2. From stellar objects, 1666 z~4 quasar\ncandidates are selected by the g-dropout selection down to i=24.0 mag. Their\nphotometric redshifts cover the redshift range between 3.6 and 4.3 with an\naverage of 3.9. In combination with the quasar sample from the Sloan Digital\nSky Survey in the same redshift range, the quasar luminosity function covering\nthe wide luminosity range of M1450=-22 to -29 mag is constructed. It is well\ndescribed by a double power-law model with a knee at M1450=-25.36+-0.13 mag and\na flat faint-end slope with a power-law index of -1.30+-0.05. The knee and\nfaint-end slope show no clear evidence of redshift evolution from those at z~2.\nThe flat slope implies that the UV luminosity density of the quasar population\nis dominated by the quasars around the knee, and does not support the steeper\nfaint-end slope at higher redshifts reported at z>5. If we convert the M1450\nluminosity function to the hard X-ray 2-10keV luminosity function using the\nrelation between UV and X-ray luminosity of quasars and its scatter, the number\ndensity of UV-selected quasars matches well with that of the X-ray-selected\nAGNs above the knee of the luminosity function. Below the knee, the UV-selected\nquasars show a deficiency compared to the hard X-ray luminosity function. The\ndeficiency can be explained by the lack of obscured AGNs among the UV-selected\nquasars.\n",
"title": "The Quasar Luminosity Function at Redshift 4 with Hyper Suprime-Cam Wide Survey"
}
| null | null | null | null | true | null |
12577
| null |
Default
| null | null |
null |
{
"abstract": " The dawn of the fourth industrial revolution, Industry 4.0 has created great\nenthusiasm among companies and researchers by giving them an opportunity to\npave the path towards the vision of a connected smart factory ecosystem.\nHowever, in context of automotive industry there is an evident gap between the\nrequirements supported by the current automotive manufacturing execution\nsystems (MES) and the requirements proposed by industrial standards from the\nInternational Society of Automation (ISA) such as, ISA-95, ISA-88 over which\nthe Industry 4.0 is being built on. In this paper, we bridge this gap by\nfollowing a model-based requirements engineering approach along with a gap\nanalysis process. Our work is mainly divided into three phases, (i) automotive\nMES tool selection phase, (ii) requirements modeling phase, (iii) and gap\nanalysis phase based on the modeled requirements. During the MES tool selection\nphase, we used known reliable sources such as, MES product survey reports,\nwhite papers that provide in-depth and comprehensive information about various\ncomparison criteria and tool vendors list for the current MES landscape. During\nthe requirement modeling phase, we specified requirements derived from the\nneeds of ISA-95 and ISA-88 industrial standards using the general purpose\nSystems Modeling Language (SysML). During the gap analysis phase, we find the\nmisalignment between standard requirements and the compliance of the existing\nsoftware tools to those standards.\n",
"title": "Towards Industry 4.0: Gap Analysis between Current Automotive MES and Industry Standards using Model-Based Requirement Engineering"
}
| null | null | null | null | true | null |
12578
| null |
Default
| null | null |
null |
{
"abstract": " Given p independent normal populations, we consider the problem of estimating\nthe mean of those populations, that based on the observed data, give the\nstrongest signals. We explicitly condition on the ranking of the sample means,\nand consider a constrained conditional maximum likelihood (CCMLE) approach,\navoiding the use of any priors and of any sparsity requirement between the\npopulation means. Our results show that if the observed means are too close\ntogether, we should in fact use the grand mean to estimate the mean of the\npopulation with the larger sample mean. If they are separated by more than a\ncertain threshold, we should shrink the observed means towards each other. As\nintuition suggests, it is only if the observed means are far apart that we\nshould conclude that the magnitude of separation and consequent ranking are not\ndue to chance. Unlike other methods, our approach does not need to pre-specify\nthe number of selected populations and the proposed CCMLE is able to perform\nsimultaneous inference. Our method, which is conceptually straightforward, can\nbe easily adapted to incorporate other selection criteria.\nSelected populations, Maximum likelihood, Constrained MLE, Post-selection\ninference\n",
"title": "A Constrained Conditional Likelihood Approach for Estimating the Means of Selected Populations"
}
| null | null |
[
"Statistics"
] | null | true | null |
12579
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of detecting a deformation from a symmetric Gaussian\nrandom $p$-tensor $(p\\geq 3)$ with a rank-one spike sampled from the Rademacher\nprior. Recently in Lesieur et al. (2017), it was proved that there exists a\ncritical threshold $\\beta_p$ so that when the signal-to-noise ratio exceeds\n$\\beta_p$, one can distinguish the spiked and unspiked tensors and weakly\nrecover the prior via the minimal mean-square-error method. On the other side,\nPerry, Wein, and Bandeira (2017) proved that there exists a $\\beta_p'<\\beta_p$\nsuch that any statistical hypothesis test can not distinguish these two\ntensors, in the sense that their total variation distance asymptotically\nvanishes, when the signa-to-noise ratio is less than $\\beta_p'$. In this work,\nwe show that $\\beta_p$ is indeed the critical threshold that strictly separates\nthe distinguishability and indistinguishability between the two tensors under\nthe total variation distance. Our approach is based on a subtle analysis of the\nhigh temperature behavior of the pure $p$-spin model with Ising spin, arising\ninitially from the field of spin glasses. In particular, we identify the\nsignal-to-noise criticality $\\beta_p$ as the critical temperature,\ndistinguishing the high and low temperature behavior, of the Ising pure\n$p$-spin mean-field spin glass model.\n",
"title": "Phase transition in the spiked random tensor with Rademacher prior"
}
| null | null | null | null | true | null |
12580
| null |
Default
| null | null |
null |
{
"abstract": " We propose a high signal-to-noise extended depth-range three-dimensional (3D)\nprofilometer projecting two linear-fringes with close phase-sensitivity. We use\ntemporal phase-shifting algorithms (PSAs) to phase-demodulate the two close\nsensitivity phases. Then we calculate their phase-difference and their\nphase-sum. If the sensitivity between the two phases is close enough, their\nphase-difference is not-wrapped. The non-wrapped phase-difference as\nextended-range profilometry is well known and has been widely used. However as\nthis paper shows, the closeness between the two demodulated phases makes their\ndifference quite noisy. On the other hand, as we show, their phase-sum has a\nmuch higher phase-sensitivity and signal-to-noise ratio but it is highly\nwrapped. Spatial unwrapping of the phase-sum is precluded for separate or\nhighly discontinuous objects. However it is possible to unwrap the phase-sum by\nusing the phase-difference as first approximation and our previously published\n2-step temporal phase-unwrapping. Therefore the proposed profilometry technique\nallows unwrapping the higher sensitivity phase-sum using the noisier\nphase-difference as stepping stone. Due to the non-linear nature of the\nextended 2-steps temporal-unwrapper, the harmonics and noise errors in the\nphase-difference do not propagate towards the unwrapping phase-sum. To the best\nof our knowledge this is the highest signal-to-noise ratio, extended\ndepth-range, 3D digital profilometry technique reported to this date.\n",
"title": "Extended depth-range profilometry using the phase-difference and phase-sum of two close-sensitivity projected fringes"
}
| null | null | null | null | true | null |
12581
| null |
Default
| null | null |
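A minimal sketch of the 2-step temporal unwrapping used above: the coarse, non-wrapped phase-difference pins down the fringe order of the wrapped, high-sensitivity phase-sum. The sensitivity ratio `G` and the toy signal are assumptions of the sketch.

```python
import numpy as np

def unwrap_with_reference(phi_hi_wrapped, phi_lo, G):
    """Unwrap a high-sensitivity phase using a non-wrapped low-sensitivity
    reference phase; G = (high sensitivity) / (low sensitivity)."""
    k = np.round((G * phi_lo - phi_hi_wrapped) / (2 * np.pi))  # fringe order
    return phi_hi_wrapped + 2 * np.pi * k

# Toy check: a smooth high-sensitivity phase, its noisy coarse reference,
# and the wrapped measurement.
true = np.linspace(0.0, 40 * np.pi, 1000)
phi_lo = true / 8 + np.random.default_rng(0).normal(0.0, 0.05, true.size)
wrapped = np.angle(np.exp(1j * true))            # wrap to (-pi, pi]
recovered = unwrap_with_reference(wrapped, phi_lo, G=8)
print(np.allclose(recovered, true))  # exact while G*noise stays below pi
```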
null |
{
"abstract": " An oblivious computation is one that is free of direct and indirect\ninformation leaks, e.g., due to observable differences in timing and memory\naccess patterns. This paper presents Lobliv, a core language whose type system\nenforces obliviousness. Prior work on type-enforced oblivious computation has\nfocused on deterministic programs. Lobliv is new in its consideration of\nprograms that implement probabilistic algorithms, such as those involved in\ncryptography. Lobliv employs a substructural type system and a novel notion of\nprobability region to ensure that information is not leaked via the\ndistribution of visible events. The use of regions was motivated by a source of\nunsoundness that we discovered in the type system of ObliVM, a language for\nimplementing state of the art oblivious algorithms and data structures. We\nprove that Lobliv's type system enforces obliviousness and show that it is\nnevertheless powerful enough to check state-of-the-art, efficient oblivious\ndata structures, such as stacks and queues, and even tree-based oblivious RAMs.\n",
"title": "A Language for Probabilistically Oblivious Computation"
}
| null | null | null | null | true | null |
12582
| null |
Default
| null | null |
null |
{
"abstract": " Inverse Compton scattering (ICS) is a unique mechanism for producing fast\npulses - picosecond and below - of bright X- to gamma-rays. These nominally\nnarrow spectral bandwidth electromagnetic radiation pulses are efficiently\nproduced in the interaction between intense, well-focused electron and laser\nbeams. The spectral characteristics of such sources are affected by many\nexperimental parameters, such as the bandwidth of the laser, and the angles of\nboth the electrons and laser photons at collision. The laser field amplitude\ninduces harmonic generation and importantly, for the present work, nonlinear\nred shifting, both of which dilute the spectral brightness of the radiation. As\nthe applications enabled by this source often depend sensitively on its\nspectra, it is critical to resolve the details of the wavelength and angular\ndistribution obtained from ICS collisions. With this motivation, we present\nhere an experimental study that greatly improves on previous spectral\nmeasurement methods based on X-ray K-edge filters, by implementing a\nmulti-layer bent-crystal X-ray spectrometer. In tandem with a collimating slit,\nthis method reveals a projection of the double-differential angular-wavelength\nspectrum of the ICS radiation in a single shot. The measurements enabled by\nthis diagnostic illustrate the combined off-axis and nonlinear-field-induced\nred shifting in the ICS emission process. They reveal in detail the strength of\nthe normalized laser vector potential, and provide a non-destructive measure of\nthe temporal and spatial electron-laser beam overlap.\n",
"title": "Single shot, double differential spectral measurements of inverse Compton scattering in linear and nonlinear regimes"
}
| null | null | null | null | true | null |
12583
| null |
Default
| null | null |
null |
{
"abstract": " The Yarkovsky effect is a thermal process acting upon the orbits of small\ncelestial bodies, which can cause these orbits to slowly expand or contract\nwith time. The effect is subtle -- typical drift rates lie near $10^{-4}$ au/My\nfor a $\\sim$1 km diameter object -- and is thus generally difficult to measure.\nHowever, objects with long observation intervals, as well as objects with radar\ndetections, serve as excellent candidates for the observation of this effect.\nWe analyzed both optical and radar astrometry for all numbered Near-Earth\nAsteroids (NEAs), as well as several un-numbered NEAs, for the purpose of\ndetecting and quantifying the Yarkovsky effect. We present 159 objects with\nmeasured drift rates. Our Yarkovsky sample is the largest published set of such\ndetections, and presents an opportunity to examine the physical properties of\nthese NEAs and the Yarkovsky effect in a statistical manner. In particular, we\nconfirm the Yarkovsky effect's theoretical size dependence of 1/$D$, where $D$\nis diameter. We also examine the efficiency with which this effect acts on our\nsample objects and find typical efficiencies of around 12%. We interpret this\nefficiency with respect to the typical spin and thermal properties of objects\nin our sample. We report the ratio of negative to positive drift rates in our\nsample as $N_R/N_P = 2.9 \\pm 0.7$ and interpret this ratio in terms of\nretrograde/prograde rotators and main belt escape routes. The observed ratio\nhas a probability of 1 in 46 million of occurring by chance, which confirms the\npresence of a non-gravitational influence. We examine how the presence of radar\ndata affects the strength and precision of our detections. We find that, on\naverage, the precision of radar+optical detections improves by a factor of\napproximately 1.6 for each additional apparition with ranging data compared to\nthat of optical-only solutions.\n",
"title": "Yarkovsky Drift Detections for 159 Near-Earth Asteroids"
}
| null | null | null | null | true | null |
12584
| null |
Default
| null | null |
null |
{
"abstract": " In this short note we improve the best to date bound in Godbersen's\nconjecture, and show some implications for unbalanced difference bodies.\n",
"title": "A short note on Godbersen's Conjecture"
}
| null | null | null | null | true | null |
12585
| null |
Default
| null | null |
null |
{
"abstract": " We extend the global existence result for the derivative NLS equation to the\ncase when the initial datum includes a finite number of solitons. This is\nachieved by an application of the Bäcklund transformation that removes a\nfinite number of zeros of the scattering coefficient. By means of this\ntransformation, the Riemann--Hilbert problem for meromorphic functions can be\nformulated as the one for analytic functions, the solvability of which was\nobtained recently.\n",
"title": "The derivative NLS equation: global existence with solitons"
}
| null | null | null | null | true | null |
12586
| null |
Default
| null | null |
null |
{
"abstract": " I examine a possible spectral distortion of the Cosmic Microwave Background\n(CMB) due to its absorption by galactic and intergalactic dust. I show that\neven subtle intergalactic opacity of $1 \\times 10^{-7}\\, \\mathrm{mag}\\, h\\,\n\\mathrm{Gpc}^{-1}$ at the CMB wavelengths in the local Universe causes\nnon-negligible CMB absorption and decline of the CMB intensity because the\nopacity steeply increases with redshift. The CMB should be distorted even\nduring the epoch of the Universe defined by redshifts $z < 10$. For this epoch,\nthe maximum spectral distortion of the CMB is at least $20 \\times 10^{-22}\n\\,\\mathrm{Wm}^{-2}\\, \\mathrm{Hz}^{-1}\\, \\mathrm{sr}^{-1}$ at 300 GHz being well\nabove the sensitivity of the COBE/FIRAS, WMAP or Planck flux measurements. If\ndust mass is considered to be redshift dependent with noticeable dust abundance\nat redshifts 2-4, the predicted CMB distortion is even higher. The CMB would be\ndistorted also in a perfectly transparent universe due to dust in galaxies but\nthis effect is lower by one order than that due to intergalactic opacity. The\nfact that the distortion of the CMB by dust is not observed is intriguing and\nquestions either opacity and extinction law measurements or validity of the\ncurrent model of the Universe.\n",
"title": "Missing dust signature in the cosmic microwave background"
}
| null | null | null | null | true | null |
12587
| null |
Default
| null | null |
null |
{
"abstract": " In informationally efficient financial markets, option prices and this\nimplied volatility should immediately be adjusted to new information that\narrives along with a jump in underlying's return, whereas gradual changes in\nimplied volatility would indicate market inefficiency. Using minute-by-minute\ndata on S&P 500 index options, we provide evidence regarding delayed and\ngradual movements in implied volatility after the arrival of return jumps.\nThese movements are directed and persistent, especially in the case of negative\nreturn jumps. Our results are significant when the implied volatilities are\nextracted from at-the-money options and out-of-the-money puts, while the\nimplied volatility obtained from out-of-the-money calls converges to its new\nlevel immediately rather than gradually. Thus, our analysis reveals that the\nimplied volatility smile is adjusted to jumps in underlying's return\nasymmetrically. Finally, it would be possible to have statistical arbitrage in\nzero-transaction-cost option markets, but under actual option price spreads,\nour results do not imply abnormal option returns.\n",
"title": "Option market (in)efficiency and implied volatility dynamics after return jumps"
}
| null | null | null | null | true | null |
12588
| null |
Default
| null | null |
null |
{
"abstract": " We present a new method that combines alchemical transformation with physical\npathway to accurately and efficiently compute the absolute binding free energy\nof receptor-ligand complex. Currently, the double decoupling method (DDM) and\nthe potential of mean force approach (PMF) methods are widely used to compute\nthe absolute binding free energy of biomolecules. The DDM relies on\nalchemically decoupling the ligand from its environments, which can be\ncomputationally challenging for large ligands and charged ligands because of\nthe large magnitude of the decoupling free energies involved. On the other\nhand, the PMF approach uses physical pathway to extract the ligand out of the\nbinding site, thus avoids the alchemical decoupling of the ligand. However, the\nPMF method has its own drawback because of the reliance on a ligand\nbinding/unbinding pathway free of steric obstruction from the receptor atoms.\nTherefore, in the presence of deeply buried ligand functional groups the\nconvergence of the PMF calculation can be very slow leading to large errors in\nthe computed binding free energy. Here we develop a new method called AlchemPMF\nby combining alchemical transformation with physical pathway to overcome the\nmajor drawback in the PMF method. We have tested the new approach on the\nbinding of a charged ligand to an allosteric site on HIV-1 Integrase. After 20\nns of simulation per umbrella sampling window, the new method yields absolute\nbinding free energies within ~1 kcal/mol from the experimental result, whereas\nthe standard PMF approach and the DDM calculations result in errors of ~5\nkcal/mol and > 2 kcal/mol, respectively. Furthermore, the binding free energy\ncomputed using the new method is associated with smaller statistical error\ncompared with those obtained from the existing methods.\n",
"title": "Combining Alchemical Transformation with Physical Pathway to Accurately Compute Absolute Binding Free Energy"
}
| null | null | null | null | true | null |
12589
| null |
Default
| null | null |
null |
{
"abstract": " Large-scale Hierarchical Classification (HC) involves datasets consisting of\nthousands of classes and millions of training instances with high-dimensional\nfeatures posing several big data challenges. Feature selection that aims to\nselect the subset of discriminant features is an effective strategy to deal\nwith large-scale HC problem. It speeds up the training process, reduces the\nprediction time and minimizes the memory requirements by compressing the total\nsize of learned model weight vectors. Majority of the studies have also shown\nfeature selection to be competent and successful in improving the\nclassification accuracy by removing irrelevant features. In this work, we\ninvestigate various filter-based feature selection methods for dimensionality\nreduction to solve the large-scale HC problem. Our experimental evaluation on\ntext and image datasets with varying distribution of features, classes and\ninstances shows upto 3x order of speed-up on massive datasets and upto 45% less\nmemory requirements for storing the weight vectors of learned model without any\nsignificant loss (improvement for some datasets) in the classification\naccuracy. Source Code: this https URL.\n",
"title": "Embedding Feature Selection for Large-scale Hierarchical Classification"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
12590
| null |
Validated
| null | null |
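A minimal sketch of filter-based feature selection in the spirit of the study above: score sparse features with a chi-squared filter and keep only the top-k before fitting the classifier, shrinking the stored weight vectors. The synthetic corpus, the value of k, and the logistic-regression stand-in are illustrative choices.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic sparse "text" data: 1000 documents, 50000 features.
X = sparse_random(1000, 50000, density=0.001, random_state=0, format="csr")
y = rng.integers(0, 20, size=1000)    # 20 classes standing in for one level

model = make_pipeline(
    SelectKBest(chi2, k=5000),         # filter: keep discriminant features
    LogisticRegression(max_iter=200),  # 10x smaller weight vectors to store
)
model.fit(X, y)
print("features kept:", model.named_steps["selectkbest"].k)
```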
null |
{
"abstract": " We present an enumeration of orientably-regular maps with automorphism group\nisomorphic to the twisted linear fractional group $M(q^2)$ for any odd prime\npower $q$.\n",
"title": "Orientably-regular maps on twisted linear fractional groups"
}
| null | null | null | null | true | null |
12591
| null |
Default
| null | null |
null |
{
"abstract": " Fitting machine learning models in the low-data limit is challenging. The\nmain challenge is to obtain suitable prior knowledge and encode it into the\nmodel, for instance in the form of a Gaussian process prior. Recent advances in\nmeta-learning offer powerful methods for extracting such prior knowledge from\ndata acquired in related tasks. When it comes to meta-learning in Gaussian\nprocess models, approaches in this setting have mostly focused on learning the\nkernel function of the prior, but not on learning its mean function. In this\nwork, we propose to parameterize the mean function of a Gaussian process with a\ndeep neural network and train it with a meta-learning procedure. We present\nanalytical and empirical evidence that mean function learning can be superior\nto kernel learning alone, particularly if data is scarce.\n",
"title": "Deep Mean Functions for Meta-Learning in Gaussian Processes"
}
| null | null | null | null | true | null |
12592
| null |
Default
| null | null |
null |
{
"abstract": " The Plancherel decomposition of $L^2$ on a pseudo-Riemannian symmetric space\n$GL(n,C)/GL(n,R)$ has spectrum of $[n/2]$ types. We write explicitly orthogonal\nprojectors separating spectrum into uniform pieces\n",
"title": "Projectors separating spectra for $L^2$ on symmetric spaces $GL(n,\\C)/GL(n,\\R)$"
}
| null | null | null | null | true | null |
12593
| null |
Default
| null | null |
null |
{
"abstract": " We construct an absolutely normal number whose continued fraction expansion\nis normal in the sense that it contains all finite patterns of partial\nquotients with the expected asymptotic frequency as given by the Gauss-Kuzmin\nmeasure. The construction is based on ideas of Sierpinski and uses a large\ndeviations theorem for sums of mixing random variables.\n",
"title": "On the continued fraction expansion of absolutely normal numbers"
}
| null | null | null | null | true | null |
12594
| null |
Default
| null | null |
null |
{
"abstract": " Enticing users into exploring Open Data remains an important challenge for\nthe whole Open Data paradigm. Standard stock interfaces often used by Open Data\nportals are anything but inspiring even for tech-savvy users, let alone those\nwithout an articulated interest in data science. To address a broader range of\ncitizens, we designed an open data search interface supporting natural language\ninteractions via popular platforms like Facebook and Skype. Our data-aware\nchatbot answers search requests and suggests relevant open datasets, bringing\nfun factor and a potential of viral dissemination into Open Data exploration.\nThe current system prototype is available for Facebook\n(this https URL) and Skype\n(this https URL) users.\n",
"title": "Talking Open Data"
}
| null | null | null | null | true | null |
12595
| null |
Default
| null | null |
null |
{
"abstract": " Reciprocity is a fundamental principle governing various physical systems,\nwhich ensures that the transfer function between any two points in space is\nidentical, regardless of geometrical or material asymmetries. Breaking this\ntransmission symmetry offers enhanced control over signal transport, isolation\nand source protection. So far, devices that break reciprocity have been mostly\nconsidered in dynamic systems, for electromagnetic, acoustic and mechanical\nwave propagation associated with spatio-temporal variations. Here we show that\nit is possible to strongly break reciprocity in static systems, realizing\nmechanical metamaterials that, by combining large nonlinearities with suitable\ngeometrical asymmetries, and possibly topological features, exhibit vastly\ndifferent output displacements under excitation from different sides, as well\nas one-way displacement amplification. In addition to extending non-reciprocity\nand isolation to statics, our work sheds new light on the understanding of\nenergy propagation in non-linear materials with asymmetric crystalline\nstructures and topological properties, opening avenues for energy absorption,\nconversion and harvesting, soft robotics, prosthetics and optomechanics.\n",
"title": "Static non-reciprocity in mechanical metamaterials"
}
| null | null | null | null | true | null |
12596
| null |
Default
| null | null |
null |
{
"abstract": " Photoacoustic computed tomography (PACT) is an emerging imaging modality that\nexploits optical contrast and ultrasonic detection principles to form images of\nthe photoacoustically induced initial pressure distribution within tissue. The\nPACT reconstruction problem corresponds to an inverse source problem in which\nthe initial pressure distribution is recovered from measurements of the\nradiated wavefield.\nA major challenge in transcranial PACT brain imaging is compensation for\naberrations in the measured data due to the presence of the skull. Ultrasonic\nwaves undergo absorption, scattering and longitudinal-to-shear wave mode\nconversion as they propagate through the skull. To properly account for these\neffects, a wave-equation-based inversion method should be employed that can\nmodel the heterogeneous elastic properties of the skull. In this work, a\nforward model based on a finite-difference time-domain discretization of the\nthree-dimensional elastic wave equation is established and a procedure for\ncomputing the corresponding adjoint of the forward operator is presented.\nMassively parallel implementations of these operators employing multiple\ngraphics processing units (GPUs) are also developed. The developed numerical\nframework is validated and investigated in computer-simulation and experimental\nphantom studies whose designs are motivated by transcranial PACT applications.\n",
"title": "A forward-adjoint operator pair based on the elastic wave equation for use in transcranial photoacoustic tomography"
}
| null | null | null | null | true | null |
12597
| null |
Default
| null | null |
null |
{
"abstract": " We explore to what extent one may hope to preserve geometric properties of\nthree dimensional manifolds with lower scalar curvature bounds under\nGromov-Hausdorff and Intrinsic Flat limits. We introduce a new construction,\ncalled sewing, of three dimensional manifolds that preserves positive scalar\ncurvature. We then use sewing to produce sequences of such manifolds which\nconverge to spaces that fail to have nonnegative scalar curvature in a standard\ngeneralized sense. Since the notion of nonnegative scalar curvature is not\nstrong enough to persist alone, we propose that one pair a lower scalar\ncurvature bound with a lower bound on the area of a closed minimal surface when\ntaking sequences as this will exclude the possibility of sewing of manifolds.\n",
"title": "Sewing Riemannian Manifolds with Positive Scalar Curvature"
}
| null | null |
[
"Mathematics"
] | null | true | null |
12598
| null |
Validated
| null | null |
null |
{
"abstract": " A concentration result for quadratic form of independent subgaussian random\nvariables is derived. If the moments of the random variables satisfy a\n\"Bernstein condition\", then the variance term of the Hanson-Wright inequality\ncan be improved. The Bernstein condition is satisfied, for instance, by all\nlog-concave subgaussian distributions.\n",
"title": "Concentration of quadratic forms under a Bernstein moment assumption"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
12599
| null |
Validated
| null | null |
null |
{
"abstract": " The flexibility of short DNA chains is investigated via computation of the\naverage correlation function between dimers which defines the persistence\nlength. Path integration techniques have been applied to confine the phase\nspace available to base pair fluctuations and derive the partition function.\nThe apparent persistence lengths of a set of short chains have been computed as\na function of the twist conformation both in the over-twisted and the untwisted\nregimes, whereby the equilibrium twist is selected by free energy minimization.\nThe obtained values are significantly lower than those generally attributed to\nkilo-base long DNA. This points to an intrinsic helix flexibility at short\nlength scales, arising from large fluctuational effects and local bending, in\nline with recent experimental indications. The interplay between helical\nuntwisting and persistence length has been discussed for a heterogeneous\nfragment by weighing the effects of the sequence specificities through the\nnon-linear stacking potential.\n",
"title": "Short DNA persistence length in a mesoscopic helical model"
}
| null | null | null | null | true | null |
12600
| null |
Default
| null | null |