text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics
---|---|---|---|---|---|---|---|---|---|---|---|---
null | dict | null | null | list | null | bool (1 value) | null | string (lengths 1-5) | null | string (2 classes) | null | null
null |
{
"abstract": " Nowadays, quantum program is widely used and quickly developed. However, the\nabsence of testing methodology restricts their quality. Different input format\nand operator from traditional program make this issue hard to resolve.\nIn this paper, we present QuanFuzz, a search-based test input generator for\nquantum program. We define the quantum sensitive information to evaluate test\ninput for quantum program and use matrix generator to generate test cases with\nhigher coverage. First, we extract quantum sensitive information -- measurement\noperations on those quantum registers and the sensitive branches associated\nwith those measurement results, from the quantum source code. Then, we use the\nsensitive information guided algorithm to mutate the initial input matrix and\nselect those matrices which improve the probability weight for a value of the\nquantum register to trigger the sensitive branch. The process keeps iterating\nuntil the sensitive branch triggered. We tested QuanFuzz on benchmarks and\nacquired 20% - 60% more coverage compared to traditional testing input\ngeneration.\n",
"title": "QuanFuzz: Fuzz Testing of Quantum Program"
}
| null | null | null | null | true | null | 5401 | null | Default | null | null |
null |
{
"abstract": " In this paper, we focus on option pricing models based on space-time\nfractional diffusion. We briefly revise recent results which show that the\noption price can be represented in the terms of rapidly converging\ndouble-series and apply these results to the data from real markets. We focus\non estimation of model parameters from the market data and estimation of\nimplied volatility within the space-time fractional option pricing models.\n",
"title": "Option Pricing Models Driven by the Space-Time Fractional Diffusion: Series Representation and Applications"
}
| null | null | null | null | true | null | 5402 | null | Default | null | null |
null |
{
"abstract": " We built a two-state model of an asexually reproducing organism in a periodic\nenvironment endowed with the capability to anticipate an upcoming environmental\nchange and undergo pre-emptive switching. By virtue of these anticipatory\ntransitions, the organism oscillates between its two states that is a time\n$\\theta$ out of sync with the environmental oscillation. We show that an\nanticipation-capable organism increases its long-term fitness over an organism\nthat oscillates in-sync with the environment, provided $\\theta$ does not exceed\na threshold. We also show that the long-term fitness is maximized for an\noptimal anticipation time that decreases approximately as $1/n$, $n$ being the\nnumber of cell divisions in time $T$. Furthermore, we demonstrate that optimal\n\"anticipators\" outperforms \"bet-hedgers\" in the range of parameters considered.\nFor a sub-optimal ensemble of anticipators, anticipation performs better to\nbet-hedging only when the variance in anticipation is small compared to the\nmean and the rate of pre-emptive transition is high. Taken together, our work\nsuggests that anticipation increases overall fitness of an organism in a\nperiodic environment and it is a viable alternative to bet-hedging provided the\nerror in anticipation is small.\n",
"title": "Anticipation: an effective evolutionary strategy for a sub-optimal population in a cyclic environment"
}
| null | null | null | null | true | null | 5403 | null | Default | null | null |
null |
{
"abstract": " This paper analyzes Airbnb listings in the city of San Francisco to better\nunderstand how different attributes such as bedrooms, location, house type\namongst others can be used to accurately predict the price of a new listing\nthat optimal in terms of the host's profitability yet affordable to their\nguests. This model is intended to be helpful to the internal pricing tools that\nAirbnb provides to its hosts. Furthermore, additional analysis is performed to\nascertain the likelihood of a listings availability for potential guests to\nconsider while making a booking. The analysis begins with exploring and\nexamining the data to make necessary transformations that can be conducive for\na better understanding of the problem at large while helping us make\nhypothesis. Moving further, machine learning models are built that are\nintuitive to use to validate the hypothesis on pricing and availability and run\nexperiments in that context to arrive at a viable solution. The paper then\nconcludes with a discussion on the business implications, associated risks and\nfuture scope.\n",
"title": "Unravelling Airbnb Predicting Price for New Listing"
}
| null | null | null | null | true | null | 5404 | null | Default | null | null |
null |
{
"abstract": " We give algorithms with running time $2^{O({\\sqrt{k}\\log{k}})} \\cdot\nn^{O(1)}$ for the following problems. Given an $n$-vertex unit disk graph $G$\nand an integer $k$, decide whether $G$ contains (1) a path on exactly/at least\n$k$ vertices, (2) a cycle on exactly $k$ vertices, (3) a cycle on at least $k$\nvertices, (4) a feedback vertex set of size at most $k$, and (5) a set of $k$\npairwise vertex-disjoint cycles. For the first three problems, no\nsubexponential time parameterized algorithms were previously known. For the\nremaining two problems, our algorithms significantly outperform the previously\nbest known parameterized algorithms that run in time $2^{O(k^{0.75}\\log{k})}\n\\cdot n^{O(1)}$. Our algorithms are based on a new kind of tree decompositions\nof unit disk graphs where the separators can have size up to $k^{O(1)}$ and\nthere exists a solution that crosses every separator at most $O(\\sqrt{k})$\ntimes. The running times of our algorithms are optimal up to the $\\log{k}$\nfactor in the exponent, assuming the Exponential Time Hypothesis.\n",
"title": "Finding, Hitting and Packing Cycles in Subexponential Time on Unit Disk Graphs"
}
| null | null | ["Computer Science"] | null | true | null | 5405 | null | Validated | null | null |
null |
{
"abstract": " All known life forms are based upon a hierarchy of interwoven feedback loops,\noperating over a cascade of space, time and energy scales. Among the most basic\nloops are those connecting DNA and proteins. For example, in genetic networks,\nDNA genes are expressed as proteins, which may bind near the same genes and\nthereby control their own expression. In this molecular type of self-reference,\ninformation is mapped from the DNA sequence to the protein and back to DNA.\nThere is a variety of dynamic DNA-protein self-reference loops, and the purpose\nof this remark is to discuss certain geometrical and physical aspects related\nto the back and forth mapping between DNA and proteins. The discussion raises\nbasic questions regarding the nature of DNA and proteins as self-referring\nmatter, which are examined in a simple toy model.\n",
"title": "The self-referring DNA and protein: a remark on physical and geometrical aspects"
}
| null | null | null | null | true | null | 5406 | null | Default | null | null |
null |
{
"abstract": " The aim of this paper is to design a band-limited optimal input with power\nconstraints for identifying a linear multi-input multi-output system. It is\nassumed that the nominal system parameters are specified. The key idea is to\nuse the spectral decomposition theorem and write the power spectrum as\n$\\phi_{u}(j\\omega)=\\frac{1}{2}H(j\\omega)H^*(j\\omega)$. The matrix $H(j\\omega)$\nis expressed in terms of a truncated basis for\n$\\mathcal{L}^2\\left(\\left[-\\omega_{\\mbox{cut-off}},\\omega_{\\mbox{cut-off}}\\right]\\right)$.\nWith this parameterization, the elements of the Fisher Information Matrix and\nthe power constraints turn out to be homogeneous quadratics in the basis\ncoefficients. The optimality criterion used are the well-known\n$\\mathcal{D}-$optimality, $\\mathcal{A}-$optimality, $\\mathcal{T}-$optimality\nand $\\mathcal{E}-$optimality. The resulting optimization problem is non-convex\nin general. A lower bound on the optimum is obtained through a bi-linear\nformulation of the problem, while an upper bound is obtained through a convex\nrelaxation. These bounds can be computed efficiently as the associated problems\nare convex. The lower bound is used as a sub-optimal solution, the\nsub-optimality of which is determined by the difference in the bounds.\nInterestingly, the bounds match in many instances and thus, the global optimum\nis achieved. A discussion on the non-convexity of the optimization problem is\nalso presented. Simulations are provided for corroboration.\n",
"title": "Optimal input design for system identification using spectral decomposition"
}
| null | null | null | null | true | null | 5407 | null | Default | null | null |
null |
{
"abstract": " Due to the rapid growth of the World Wide Web, resource discovery becomes an\nincreasing problem. As an answer to the demand for information management, a\nthird generation of World-Wide Web tools will evolve: information gathering and\nprocessing agents. This paper describes WAVE (Web Analysis and Visualization\nEnvironment), a 3D interface for World-Wide Web information visualization and\nbrowsing. It uses the mathematical theory of concept analysis to conceptually\ncluster objects, and to create a three-dimensional layout of information nodes.\nSo-called \"conceptual scales\" for attributes, such as location, title,\nkeywords, topic, size, or modification time, provide a formal mechanism that\nautomatically classifies and categorizes documents, creating a conceptual\ninformation space. A visualization shell serves as an ergonomically sound user\ninterface for exploring this information space.\n",
"title": "Creating a Web Analysis and Visualization Environment"
}
| null | null | null | null | true | null | 5408 | null | Default | null | null |
null |
{
"abstract": " Supervised learning has been very successful for automatic segmentation of\nimages from a single scanner. However, several papers report deteriorated\nperformances when using classifiers trained on images from one scanner to\nsegment images from other scanners. We propose a transfer learning classifier\nthat adapts to differences between training and test images. This method uses a\nweighted ensemble of classifiers trained on individual images. The weight of\neach classifier is determined by the similarity between its training image and\nthe test image.\nWe examine three unsupervised similarity measures, which can be used in\nscenarios where no labeled data from a newly introduced scanner or scanning\nprotocol is available. The measures are based on a divergence, a bag distance,\nand on estimating the labels with a clustering procedure. These measures are\nasymmetric. We study whether the asymmetry can improve classification. Out of\nthe three similarity measures, the bag similarity measure is the most robust\nacross different studies and achieves excellent results on four brain tissue\nsegmentation datasets and three white matter lesion segmentation datasets,\nacquired at different centers and with different scanners and scanning\nprotocols. We show that the asymmetry can indeed be informative, and that\ncomputing the similarity from the test image to the training images is more\nappropriate than the opposite direction.\n",
"title": "Transfer Learning by Asymmetric Image Weighting for Segmentation across Scanners"
}
| null | null | null | null | true | null | 5409 | null | Default | null | null |
null |
{
"abstract": " Many \"sharing economy\" platforms, such as Uber and Airbnb, have become\nincreasingly popular, providing consumers with more choices and suppliers a\nchance to make profit. They, however, have also brought about emerging issues\nregarding regulation, tax obligation, and impact on urban environment, and have\ngenerated heated debates from various interest groups. Empirical studies\nregarding these issues are limited, partly due to the unavailability of\nrelevant data. Here we aim to understand service providers of the sharing\neconomy, investigating who joins and who benefits, using the Airbnb market in\nthe United States as a case study. We link more than 211 thousand Airbnb\nlistings owned by 188 thousand hosts with demographic, socio-economic status\n(SES), housing, and tourism characteristics. We show that income and education\nare consistently the two most influential factors that are linked to the\njoining of Airbnb, regardless of the form of participation or year. Areas with\nlower median household income, or higher fraction of residents who have\nBachelor's and higher degrees, tend to have more hosts. However, when\nconsidering the performance of listings, as measured by number of newly\nreceived reviews, we find that income has a positive effect for entire-home\nlistings; listings located in areas with higher median household income tend to\nhave more new reviews. Our findings demonstrate empirically that the\ndisadvantage of SES-disadvantaged areas and the advantage of SES-advantaged\nareas may be present in the sharing economy.\n",
"title": "Service Providers of the Sharing Economy: Who Joins and Who Benefits?"
}
| null | null | null | null | true | null | 5410 | null | Default | null | null |
null |
{
"abstract": " We study the Generalized Fermat Equation $x^2 + y^3 = z^p$, to be solved in\ncoprime integers, where $p \\ge 7$ is prime. Using modularity and level lowering\ntechniques, the problem can be reduced to the determination of the sets of\nrational points satisfying certain 2-adic and 3-adic conditions on a finite set\nof twists of the modular curve $X(p)$.\nWe first develop new local criteria to decide if two elliptic curves with\ncertain types of potentially good reduction at 2 and 3 can have symplectically\nor anti-symplectically isomorphic $p$-torsion modules. Using these criteria we\nproduce the minimal list of twists of $X(p)$ that have to be considered, based\non local information at 2 and 3; this list depends on $p \\bmod 24$. Using\nrecent results on mod $p$ representations with image in the normalizer of a\nsplit Cartan subgroup, the list can be further reduced in some cases.\nOur second main result is the complete solution of the equation when $p =\n11$, which previously was the smallest unresolved $p$. One relevant new\ningredient is the use of the `Selmer group Chabauty' method introduced by the\nthird author in a recent preprint, applied in an Elliptic Curve Chabauty\ncontext, to determine relevant points on $X_0(11)$ defined over certain number\nfields of degree 12. This result is conditional on GRH, which is needed to show\ncorrectness of the computation of the class groups of five specific number\nfields of degree 36.\nWe also give some partial results for the case $p = 13$.\n",
"title": "The generalized Fermat equation with exponents 2, 3, n"
}
| null | null | null | null | true | null | 5411 | null | Default | null | null |
null |
{
"abstract": " In an earlier work, we constructed the almost strict Morse $n$-category\n$\\mathcal X$ which extends Cohen $\\&$ Jones $\\&$ Segal's flow category. In this\narticle, we define two other almost strict $n$-categories $\\mathcal V$ and\n$\\mathcal W$ where $\\mathcal V$ is based on homomorphisms between real vector\nspaces and $\\mathcal W$ consists of tuples of positive integers. The Morse\nindex and the dimension of the Morse moduli spaces give rise to almost strict\n$n$-category functors $\\mathcal F : \\mathcal X \\to \\mathcal V$ and $\\mathcal G\n: \\mathcal X \\to \\mathcal W$.\n",
"title": "On the image of the almost strict Morse n-category under almost strict n-functors"
}
| null | null | ["Mathematics"] | null | true | null | 5412 | null | Validated | null | null |
null |
{
"abstract": " The extension of deep learning towards temporal data processing is gaining an\nincreasing research interest. In this paper we investigate the properties of\nstate dynamics developed in successive levels of deep recurrent neural networks\n(RNNs) in terms of short-term memory abilities. Our results reveal interesting\ninsights that shed light on the nature of layering as a factor of RNN design.\nNoticeably, higher layers in a hierarchically organized RNN architecture\nresults to be inherently biased towards longer memory spans even prior to\ntraining of the recurrent connections. Moreover, in the context of Reservoir\nComputing framework, our analysis also points out the benefit of a layered\nrecurrent organization as an efficient approach to improve the memory skills of\nreservoir models.\n",
"title": "Short-term Memory of Deep RNN"
}
| null | null | null | null | true | null | 5413 | null | Default | null | null |
null |
{
"abstract": " We consider the use of Deep Learning methods for modeling complex phenomena\nlike those occurring in natural physical processes. With the large amount of\ndata gathered on these phenomena the data intensive paradigm could begin to\nchallenge more traditional approaches elaborated over the years in fields like\nmaths or physics. However, despite considerable successes in a variety of\napplication domains, the machine learning field is not yet ready to handle the\nlevel of complexity required by such problems. Using an example application,\nnamely Sea Surface Temperature Prediction, we show how general background\nknowledge gained from physics could be used as a guideline for designing\nefficient Deep Learning models. In order to motivate the approach and to assess\nits generality we demonstrate a formal link between the solution of a class of\ndifferential equations underlying a large family of physical phenomena and the\nproposed model. Experiments and comparison with series of baselines including a\nstate of the art numerical approach is then provided.\n",
"title": "Deep Learning for Physical Processes: Incorporating Prior Scientific Knowledge"
}
| null | null | null | null | true | null | 5414 | null | Default | null | null |
null |
{
"abstract": " We consider the scalar field profile around relativistic compact objects such\nas neutron stars for a range of modified gravity models with screening\nmechanisms of the chameleon and Damour-Polyakov types. We focus primarily on\ninverse power law chameleons and the environmentally dependent dilaton as\nexamples of both mechanisms. We discuss the modified Tolman-Oppenheimer-Volkoff\nequation and then implement a relaxation algorithm to solve for the scalar\nprofiles numerically. We find that chameleons and dilatons behave in a similar\nmanner and that there is a large degeneracy between the modified gravity\nparameters and the neutron star equation of state. This is exemplified by the\nmodifications to the mass-radius relationship for a variety of model\nparameters.\n",
"title": "Neutron Stars in Screened Modified Gravity: Chameleon vs Dilaton"
}
| null | null | null | null | true | null | 5415 | null | Default | null | null |
null |
{
"abstract": " Spin pumping refers to the microwave-driven spin current injection from a\nferromagnet into the adjacent target material. We theoretically investigate the\nspin pumping into superconductors by fully taking account of impurity\nspin-orbit scattering that is indispensable to describe diffusive spin\ntransport with finite spin diffusion length. We calculate temperature\ndependence of the spin pumping signal and show that a pronounced coherence peak\nappears immediately below the superconducting transition temperature Tc, which\nsurvives even in the presence of the spin-orbit scattering. The phenomenon\nprovides us with a new way of studying the dynamic spin susceptibility in a\nsuperconducting thin film. This is contrasted with the nuclear magnetic\nresonance technique used to study a bulk superconductor.\n",
"title": "Spin pumping into superconductors: A new probe of spin dynamics in a superconducting thin film"
}
| null | null | null | null | true | null | 5416 | null | Default | null | null |
null |
{
"abstract": " No firm evidence has existed that the ancient Maya civilization recorded\nspecific occurrences of meteor showers or outbursts in the corpus of Maya\nhieroglyphic inscriptions. In fact, there has been no evidence of any\npre-Hispanic civilization in the Western Hemisphere recording any observations\nof any meteor showers on any specific dates.\nThe authors numerically integrated meteoroid-sized particles released by\nComet Halley as early as 1404 BC to identify years within the Maya Classic\nPeriod, AD 250-909, when Eta Aquariid outbursts might have occurred. Outbursts\ndetermined by computer model were then compared to specific events in the Maya\nrecord to see if any correlation existed between the date of the event and the\ndate of the outburst. The model was validated by successfully explaining\nseveral outbursts around the same epoch in the Chinese record. Some outbursts\nobserved by the Maya were due to recent revolutions of Comet Halley, within a\nfew centuries, and some to resonant behavior in older Halley trails, of the\norder of a thousand years. Examples were found of several different Jovian mean\nmotion resonances as well as the 1:3 Saturnian resonance that have controlled\nthe dynamical evolution of meteoroids in apparently observed outbursts.\n",
"title": "Evidence of Eta Aquariid Outbursts Recorded in the Classic Maya Hieroglyphic Script Using Orbital Integrations"
}
| null | null | null | null | true | null | 5417 | null | Default | null | null |
null |
{
"abstract": " NURBS curve is widely used in Computer Aided Design and Computer Aided\nGeometric Design. When a single weight approaches infinity, the limit of a\nNURBS curve tends to the corresponding control point. In this paper, a kind of\ncontrol structure of a NURBS curve, called regular control curve, is defined.\nWe prove that the limit of the NURBS curve is exactly its regular control curve\nwhen all of weights approach infinity, where each weight is multiplied by a\ncertain one-parameter function tending to infinity, different for each control\npoint. Moreover, some representative examples are presented to show this\nproperty and indicate its application for shape deformation.\n",
"title": "Degenerations of NURBS curves while all of weights approaching infinity"
}
| null | null | ["Computer Science"] | null | true | null | 5418 | null | Validated | null | null |
null |
{
"abstract": " We propose a new class of universal kernel functions which admit a linear\nparametrization using positive semidefinite matrices. These kernels are\ngeneralizations of the Sobolev kernel and are defined by piecewise-polynomial\nfunctions. The class of kernels is termed \"tessellated\" as the resulting\ndiscriminant is defined piecewise with hyper-rectangular domains whose corners\nare determined by the training data. The kernels have scalable complexity, but\neach instance is universal in the sense that its hypothesis space is dense in\n$L_2$. Using numerical testing, we show that for the soft margin SVM, this\nclass can eliminate the need for Gaussian kernels. Furthermore, we demonstrate\nthat when the ratio of the number of training data to features is high, this\nmethod will significantly outperform other kernel learning algorithms. Finally,\nto reduce the complexity associated with SDP-based kernel learning methods, we\nuse a randomized basis for the positive matrices to integrate with existing\nmultiple kernel learning algorithms such as SimpleMKL.\n",
"title": "A Convex Parametrization of a New Class of Universal Kernel Functions for use in Kernel Learning"
}
| null | null | null | null | true | null | 5419 | null | Default | null | null |
null |
{
"abstract": " Large batch size training of Neural Networks has been shown to incur accuracy\nloss when trained with the current methods. The exact underlying reasons for\nthis are still not completely understood. Here, we study large batch size\ntraining through the lens of the Hessian operator and robust optimization. In\nparticular, we perform a Hessian based study to analyze exactly how the\nlandscape of the loss function changes when training with large batch size. We\ncompute the true Hessian spectrum, without approximation, by back-propagating\nthe second derivative. Extensive experiments on multiple networks show that\nsaddle-points are not the cause for generalization gap of large batch size\ntraining, and the results consistently show that large batch converges to\npoints with noticeably higher Hessian spectrum. Furthermore, we show that\nrobust training allows one to favor flat areas, as points with large Hessian\nspectrum show poor robustness to adversarial perturbation. We further study\nthis relationship, and provide empirical and theoretical proof that the inner\nloop for robust training is a saddle-free optimization problem \\textit{almost\neverywhere}. We present detailed experiments with five different network\narchitectures, including a residual network, tested on MNIST, CIFAR-10, and\nCIFAR-100 datasets. We have open sourced our method which can be accessed at\n[1].\n",
"title": "Hessian-based Analysis of Large Batch Training and Robustness to Adversaries"
}
| null | null | null | null | true | null | 5420 | null | Default | null | null |
null |
{
"abstract": " We present a Bayesian method for feature selection in the presence of\ngrouping information with sparsity on the between- and within group level.\nInstead of using a stochastic algorithm for parameter inference, we employ\nexpectation propagation, which is a deterministic and fast algorithm. Available\nmethods for feature selection in the presence of grouping information have a\nnumber of short-comings: on one hand, lasso methods, while being fast,\nunderestimate the regression coefficients and do not make good use of the\ngrouping information, and on the other hand, Bayesian approaches, while\naccurate in parameter estimation, often rely on the stochastic and slow Gibbs\nsampling procedure to recover the parameters, rendering them infeasible e.g.\nfor gene network reconstruction. Our approach of a Bayesian sparse-group\nframework with expectation propagation enables us to not only recover accurate\nparameter estimates in signal recovery problems, but also makes it possible to\napply this Bayesian framework to large-scale network reconstruction problems.\nThe presented method is generic but in terms of application we focus on gene\nregulatory networks. We show on simulated and experimental data that the method\nconstitutes a good choice for network reconstruction regarding the number of\ncorrectly selected features, prediction on new data and reasonable computing\ntime.\n",
"title": "Sparse-Group Bayesian Feature Selection Using Expectation Propagation for Signal Recovery and Network Reconstruction"
}
| null | null | ["Statistics"] | null | true | null | 5421 | null | Validated | null | null |
null |
{
"abstract": " Brain CT has become a standard imaging tool for emergent evaluation of brain\ncondition, and measurement of midline shift (MLS) is one of the most important\nfeatures to address for brain CT assessment. We present a simple method to\nestimate MLS and propose a new alternative parameter to MLS: the ratio of MLS\nover the maximal width of intracranial region (MLS/ICWMAX). Three neurosurgeons\nand our automated system were asked to measure MLS and MLS/ICWMAX in the same\nsets of axial CT images obtained from 41 patients admitted to ICU under\nneurosurgical service. A weighted midline (WML) was plotted based on individual\npixel intensities, with higher weighted given to the darker portions. The MLS\ncould then be measured as the distance between the WML and ideal midline (IML)\nnear the foramen of Monro. The average processing time to output an automatic\nMLS measurement was around 10 seconds. Our automated system achieved an overall\naccuracy of 90.24% when the CT images were calibrated automatically, and\nperformed better when the calibrations of head rotation were done manually\n(accuracy: 92.68%). MLS/ICWMAX and MLS both gave results in same confusion\nmatrices and produced similar ROC curve results. We demonstrated a simple, fast\nand accurate automated system of MLS measurement and introduced a new parameter\n(MLS/ICWMAX) as a good alternative to MLS in terms of estimating the degree of\nbrain deformation, especially when non-DICOM images (e.g. JPEG) are more easily\naccessed.\n",
"title": "A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on Brain Computed Tomography"
}
| null | null | null | null | true | null | 5422 | null | Default | null | null |
null |
{
"abstract": " Three properties of the dielectric relaxation in ultra-pure single\ncrystalline H$_{2}$O ice Ih were probed at temperatures between 80-250 K; the\nthermally stimulated depolarization current, static electrical conductivity,\nand dielectric relaxation time. The measurements were made with a guarded\nparallel-plate capacitor constructed of fused quartz with Au electrodes. The\ndata agree with relaxation-based models and provide for the determination of\nactivation energies, which suggest that relaxation in ice is dominated by\nBjerrum defects below 140 K. Furthermore, anisotropy in the dielectric\nrelaxation data reveals that molecular reorientations along the\ncrystallographic $c$-axis are energetically favored over those along the\n$a$-axis between 80-140 K. These results lend support for the postulate of a\nshared origin between the dielectric relaxation dynamics and the thermodynamic\npartial proton-ordering in ice near 100 K, and suggest a preference for\nordering along the $c$-axis.\n",
"title": "Anisotropic Dielectric Relaxation in Single Crystal H$_{2}$O Ice Ih from 80-250 K"
}
| null | null | null | null | true | null | 5423 | null | Default | null | null |
null |
{
"abstract": " Protograph-based Raptor-like low-density parity-check codes (PBRL codes) are\na recently proposed family of easily encodable and decodable rate-compatible\nLDPC (RC-LDPC) codes. These codes have an excellent iterative decoding\nthreshold and performance across all design rates. PBRL codes designed thus\nfar, for both long and short block-lengths, have been based on optimizing the\niterative decoding threshold of the protograph of the RC code family at various\ndesign rates.\nIn this work, we propose a design method to obtain better quasi-cyclic (QC)\nRC-LDPC codes with PBRL structure for short block-lengths (of a few hundred\nbits). We achieve this by maximizing an upper bound on the minimum distance of\nany QC-LDPC code that can be obtained from the protograph of a PBRL ensemble.\nThe obtained codes outperform the original PBRL codes at short block-lengths by\nsignificantly improving the error floor behavior at all design rates.\nFurthermore, we identify a reduction in complexity of the design procedure,\nfacilitated by the general structure of a PBRL ensemble.\n",
"title": "Design of Improved Quasi-Cyclic Protograph-Based Raptor-Like LDPC Codes for Short Block-Lengths"
}
| null | null | null | null | true | null | 5424 | null | Default | null | null |
null |
{
"abstract": " We empirically evaluate the finite-time performance of several\nsimulation-optimization algorithms on a testbed of problems with the goal of\nmotivating further development of algorithms with strong finite-time\nperformance. We investigate if the observed performance of the algorithms can\nbe explained by properties of the problems, e.g., the number of decision\nvariables, the topology of the objective function, or the magnitude of the\nsimulation error.\n",
"title": "Comparing the Finite-Time Performance of Simulation-Optimization Algorithms"
}
| null | null | null | null | true | null |
5425
| null |
Default
| null | null |
null |
{
"abstract": " Our societies are increasingly dependent on services supplied by computers &\ntheir software. New technology only exacerbates this dependence by increasing\nthe number, performance, and degree of autonomy and inter-connectivity of\nsoftware-empowered computers and cyber-physical \"things\", which translates into\nunprecedented scenarios of interdependence. As a consequence, guaranteeing the\npersistence-of-identity of individual & collective software systems and\nsoftware-backed organisations becomes an important prerequisite toward\nsustaining the safety, security, & quality of the computer services supporting\nhuman societies. Resilience is the term used to refer to the ability of a\nsystem to retain its functional and non-functional identity. In this article we\nconjecture that a better understanding of resilience may be reached by\ndecomposing it into ancillary constituent properties, the same way as a better\ninsight in system dependability was obtained by breaking it down into\nsub-properties. 3 of the main sub-properties of resilience proposed here refer\nrespectively to the ability to perceive environmental changes; understand the\nimplications introduced by those changes; and plan & enact adjustments intended\nto improve the system-environment fit. A fourth property characterises the way\nthe above abilities manifest themselves in computer systems. The 4 properties\nare then analyzed in 3 families of case studies, each consisting of 3 software\nsystems that embed different resilience methods. Our major conclusion is that\nreasoning in terms of resilience sub-properties may help revealing the\ncharacteristics and limitations of classic methods and tools meant to achieve\nsystem and organisational resilience. We conclude by suggesting that our method\nmay prelude to meta-resilient systems -- systems, that is, able to adjust\noptimally their own resilience with respect to changing environmental\nconditions.\n",
"title": "On the Constituent Attributes of Software and Organisational Resilience"
}
| null | null | null | null | true | null |
5426
| null |
Default
| null | null |
null |
{
"abstract": " Using the twisted denominator identity, we derive a closed form root\nmultiplicity formula for all symmetrizable Borcherds-Bozec algebras and discuss\nits applications including the case of Monster Borcherds-Bozec algebra. In the\nsecond half of the paper, we provide the Schofield constuction of symmetric\nBorcherds-Bozec algebras.\n",
"title": "Borcherds-Bozec algebras, root multiplicities and the Schofield construction"
}
| null | null | null | null | true | null |
5427
| null |
Default
| null | null |
null |
{
"abstract": " High pressure can provoke spin transitions in transition metal-bearing\ncompounds. These transitions are of high interest not only for fundamental\nphysics and chemistry, but also may have important implications for\ngeochemistry and geophysics of the Earth and planetary interiors. Here we have\ncarried out a comparative study of the pressure-induced spin transition in\ncompounds with trivalent iron, octahedrally coordinated by oxygen.\nHigh-pressure single-crystal Mössbauer spectroscopy data for FeBO$_3$,\nFe$_2$O$_3$ and Fe$_3$(Fe$_{1.766(2)}$Si$_{0.234(2)}$)(SiO$_4$)$_3$ are\npresented together with detailed analysis of hyperfine parameter behavior. We\nargue that $\\zeta$-Fe$_2$O$_3$ is an intermediate phase in the reconstructive\nphase transition between $\\iota$-Fe$_2$O$_3$ and $\\theta$-Fe$_2$O$_3$ and\nquestion the proposed perovskite-type structure for $\\zeta$-Fe$_2$O$_3$.The\nstructural data show that the spin transition is closely related to the volume\nof the iron octahedron. The transition starts when volumes reach 8.9-9.3\n\\AA$^3$, which corresponds to pressures of 45-60 GPa, depending on the\ncompound. Based on phenomenological arguments we conclude that the spin\ntransition can proceed only as a first-order phase transition in\nmagnetically-ordered compounds. An empirical rule for prediction of cooperative\nbehavior at the spin transition is proposed. The instability of iron octahedra,\ntogether with strong interactions between them in the vicinity of the critical\nvolume, may trigger a phase transition in the metastable phase. We find that\nthe isomer shift of high spin iron ions depends linearly on the octahedron\nvolume with approximately the same coefficient, independent of the particular\ncompounds and/or oxidation state. For eight-fold coordinated Fe$^{2+}$ we\nobserve a significantly weaker nonlinear volume dependence.\n",
"title": "Pressure-induced spin pairing transition of Fe$^{3+}$ in oxygen octahedra"
}
| null | null | null | null | true | null |
5428
| null |
Default
| null | null |
null |
{
"abstract": " LSH (locality sensitive hashing) had emerged as a powerful technique in\nnearest-neighbor search in high dimensions [IM98, HIM12]. Given a point set $P$\nin a metric space, and given parameters $r$ and $\\varepsilon > 0$, the task is\nto preprocess the point set, such that given a query point $q$, one can quickly\ndecide if $q$ is in distance at most $\\leq r$ or $\\geq (1+\\varepsilon)r$ from\nthe point set $P$. Once such a near-neighbor data-structure is available, one\ncan reduce the general nearest-neighbor search to logarithmic number of queries\nin such structures [IM98, Har01, HIM12].\nIn this note, we revisit the most basic settings, where $P$ is a set of\npoints in the binary hypercube $\\{0,1\\}^d$, under the $L_1$/Hamming metric, and\npresent a short description of the LSH scheme in this case. We emphasize that\nthere is no new contribution in this note, except (maybe) the presentation\nitself, which is inspired by the authors recent work [HM17].\n",
"title": "LSH on the Hypercube Revisited"
}
| null | null | null | null | true | null |
5429
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we propose, design and test a new dual-nuclei RF-coil inspired\nby wire metamaterial structures. The coil operates due to resonant excitation\nof hybridized eigenmodes in multimode flat periodic structures comprising\nseveral coupled thin metal strips. It was shown that the field distribution of\nthe coil (i.e. penetration depth) can be controlled independently at two\ndifferent Larmor frequencies by selecting a proper eigenmode in each of two\nmutually orthogonal periodic structures. The proposed coil requires no lumped\ncapacitors for tuning and matching. In order to demonstrate the performance of\nthe new design, an experimental preclinical coil for $^{19}$F/$^{1}$H imaging\nof small animals at 7.05T was engineered and tested on a homogeneous liquid\nphantom and in-vivo. The presented results demonstrate that the coil was well\ntuned and matched simultaneously at two Larmor frequencies and capable of image\nacquisition with both the nuclei reaching large homogeneity area along with a\nsufficient signal-to-noise ratio. In an in-vivo experiment it has been shown\nthat without retuning the setup it was possible to obtain anatomical $^{1}$H\nimages of a mouse under anesthesia consecutively with $^{19}$F images of a tiny\ntube filled with a fluorine-containing liquid and attached to the body of the\nmouse.\n",
"title": "A Novel Metamaterial-Inspired RF-coil for Preclinical Dual-Nuclei MRI"
}
| null | null | null | null | true | null |
5430
| null |
Default
| null | null |
null |
{
"abstract": " Ooids are typically spherical sediment grains characterised by concentric\nlayers encapsulating a core. There is no universally accepted explanation for\nooid genesis, though factors such as agitation, abiotic and/or microbial\nmineralisation and size limitation have been variously invoked. We develop a\nmathematical model for ooid growth, inspired by work on avascular brain\ntumours, that assumes mineralisation in a biofilm to form a central core and\nconcentric growth of laminations. The model predicts a limiting size with the\nsequential width variation of growth rings comparing favourably with those\nobserved in experimentally grown ooids generated from biomicrospheres. In\nreality, this model pattern may be complicated during growth by syngenetic\naggrading neomorphism of the unstable mineral phase, followed by diagenetic\nrecrystallisation that further complicates the structure. Our model provides a\npotential key to understanding the genetic archive preserved in the internal\nstructures of naturally occurring ooids.\n",
"title": "A biofilm and organomineralisation model for the growth and limiting size of ooids"
}
| null | null | null | null | true | null |
5431
| null |
Default
| null | null |
null |
{
"abstract": " We give a polynomial-time algorithm for learning latent-state linear\ndynamical systems without system identification, and without assumptions on the\nspectral radius of the system's transition matrix. The algorithm extends the\nrecently introduced technique of spectral filtering, previously applied only to\nsystems with a symmetric transition matrix, using a novel convex relaxation to\nallow for the efficient identification of phases.\n",
"title": "Spectral Filtering for General Linear Dynamical Systems"
}
| null | null | null | null | true | null |
5432
| null |
Default
| null | null |
null |
{
"abstract": " Let $w_\\alpha(t) := t^{\\alpha}\\,e^{-t}$, where $\\alpha > -1$, be the Laguerre\nweight function, and let $\\|\\cdot\\|_{w_\\alpha}$ be the associated $L_2$-norm,\n$$ \\|f\\|_{w_\\alpha} = \\left\\{\\int_{0}^{\\infty} |f(x)|^2\nw_\\alpha(x)\\,dx\\right\\}^{1/2}\\,. $$ By $\\mathcal{P}_n$ we denote the set of\nalgebraic polynomials of degree $\\le n$.\nWe study the best constant $c_n(\\alpha)$ in the Markov inequality in this\nnorm $$ \\|p_n'\\|_{w_\\alpha} \\le c_n(\\alpha) \\|p_n\\|_{w_\\alpha}\\,,\\qquad p_n \\in\n\\mathcal{P}_n\\,, $$ namely the constant $$ c_n(\\alpha) := \\sup_{p_n \\in\n\\mathcal{P}_n} \\frac{\\|p_n'\\|_{w_\\alpha}}{\\|p_n\\|_{w_\\alpha}}\\,. $$ We derive\nexplicit lower and upper bounds for the Markov constant $c_n(\\alpha)$, as well\nas for the asymptotic Markov constant $$\nc(\\alpha)=\\lim_{n\\rightarrow\\infty}\\frac{c_n(\\alpha)}{n}\\,. $$\n",
"title": "Markov $L_2$-inequality with the Laguerre weight"
}
| null | null | null | null | true | null |
5433
| null |
Default
| null | null |
null |
{
"abstract": " Grids allow users flexible on-demand usage of computing resources through\nremote communication networks. A remarkable example of a Grid in High Energy\nPhysics (HEP) research is used in the ALICE experiment at European Organization\nfor Nuclear Research CERN. Physicists can submit jobs used to process the huge\namount of particle collision data produced by the Large Hadron Collider (LHC).\nGrids face complex security challenges. They are interesting targets for\nattackers seeking for huge computational resources. Since users can execute\narbitrary code in the worker nodes on the Grid sites, special care should be\nput in this environment. Automatic tools to harden and monitor this scenario\nare required. Currently, there is no integrated solution for such requirement.\nThis paper describes a new security framework to allow execution of job\npayloads in a sandboxed context. It also allows process behavior monitoring to\ndetect intrusions, even when new attack methods or zero day vulnerabilities are\nexploited, by a Machine Learning approach. We plan to implement the proposed\nframework as a software prototype that will be tested as a component of the\nALICE Grid middleware.\n",
"title": "Intrusion Prevention and Detection in Grid Computing - The ALICE Case"
}
| null | null | null | null | true | null |
5434
| null |
Default
| null | null |
null |
{
"abstract": " Recently Trajectory-pooled Deep-learning Descriptors were shown to achieve\nstate-of-the-art human action recognition results on a number of datasets. This\npaper improves their performance by applying rank pooling to each trajectory,\nencoding the temporal evolution of deep learning features computed along the\ntrajectory. This leads to Evolution-Preserving Trajectory (EPT) descriptors, a\nnovel type of video descriptor that significantly outperforms Trajectory-pooled\nDeep-learning Descriptors. EPT descriptors are defined based on dense\ntrajectories, and they provide complimentary benefits to video descriptors that\nare not based on trajectories. In particular, we show that the combination of\nEPT descriptors and VideoDarwin leads to state-of-the-art performance on\nHollywood2 and UCF101 datasets.\n",
"title": "Evolution-Preserving Dense Trajectory Descriptors"
}
| null | null | null | null | true | null |
5435
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we present an approach to extract ordered timelines of events,\ntheir participants, locations and times from a set of multilingual and\ncross-lingual data sources. Based on the assumption that event-related\ninformation can be recovered from different documents written in different\nlanguages, we extend the Cross-document Event Ordering task presented at\nSemEval 2015 by specifying two new tasks for, respectively, Multilingual and\nCross-lingual Timeline Extraction. We then develop three deterministic\nalgorithms for timeline extraction based on two main ideas. First, we address\nimplicit temporal relations at document level since explicit time-anchors are\ntoo scarce to build a wide coverage timeline extraction system. Second, we\nleverage several multilingual resources to obtain a single, inter-operable,\nsemantic representation of events across documents and across languages. The\nresult is a highly competitive system that strongly outperforms the current\nstate-of-the-art. Nonetheless, further analysis of the results reveals that\nlinking the event mentions with their target entities and time-anchors remains\na difficult challenge. The systems, resources and scorers are freely available\nto facilitate its use and guarantee the reproducibility of results.\n",
"title": "Multilingual and Cross-lingual Timeline Extraction"
}
| null | null | null | null | true | null |
5436
| null |
Default
| null | null |
null |
{
"abstract": " There has been great interest recently in applying nonparametric kernel\nmixtures in a hierarchical manner to model multiple related data samples\njointly. In such settings several data features are commonly present: (i) the\nrelated samples often share some, if not all, of the mixture components but\nwith differing weights, (ii) only some, not all, of the mixture components vary\nacross the samples, and (iii) often the shared mixture components across\nsamples are not aligned perfectly in terms of their location and spread, but\nrather display small misalignments either due to systematic cross-sample\ndifference or more often due to uncontrolled, extraneous causes. Properly\nincorporating these features in mixture modeling will enhance the efficiency of\ninference, whereas ignoring them not only reduces efficiency but can jeopardize\nthe validity of the inference due to issues such as confounding. We introduce\ntwo techniques for incorporating these features in modeling related data\nsamples using kernel mixtures. The first technique, called $\\psi$-stick\nbreaking, is a joint generative process for the mixing weights through the\nbreaking of both a stick shared by all the samples for the components that do\nnot vary in size across samples and an idiosyncratic stick for each sample for\nthose components that do vary in size. The second technique is to imbue random\nperturbation into the kernels, thereby accounting for cross-sample\nmisalignment. These techniques can be used either separately or together in\nboth parametric and nonparametric kernel mixtures. We derive efficient Bayesian\ninference recipes based on MCMC sampling for models featuring these techniques,\nand illustrate their work through both simulated data and a real flow cytometry\ndata set in prediction/estimation, cross-sample calibration, and testing\nmulti-sample differences.\n",
"title": "Mixture modeling on related samples by $ψ$-stick breaking and kernel perturbation"
}
| null | null |
[
"Statistics"
] | null | true | null |
5437
| null |
Validated
| null | null |
null |
{
"abstract": " The minimum feedback arc set problem asks to delete a minimum number of arcs\n(directed edges) from a digraph (directed graph) to make it free of any\ndirected cycles. In this work we approach this fundamental cycle-constrained\noptimization problem by considering a generalized task of dividing the digraph\ninto D layers of equal size. We solve the D-segmentation problem by the\nreplica-symmetric mean field theory and belief-propagation heuristic\nalgorithms. The minimum feedback arc density of a given random digraph ensemble\nis then obtained by extrapolating the theoretical results to the limit of large\nD. A divide-and-conquer algorithm (nested-BPR) is devised to solve the minimum\nfeedback arc set problem with very good performance and high efficiency.\n",
"title": "Optimal segmentation of directed graph and the minimum number of feedback arcs"
}
| null | null | null | null | true | null |
5438
| null |
Default
| null | null |
null |
{
"abstract": " This paper describes our approach for the triple scoring task at the WSDM Cup\n2017. The task required participants to assign a relevance score for each pair\nof entities and their types in a knowledge base in order to enhance the ranking\nresults in entity retrieval tasks. We propose an approach wherein the outputs\nof multiple neural network classifiers are combined using a supervised machine\nlearning model. The experimental results showed that our proposed method\nachieved the best performance in one out of three measures (i.e., Kendall's\ntau), and performed competitively in the other two measures (i.e., accuracy and\naverage score difference).\n",
"title": "Ensemble of Neural Classifiers for Scoring Knowledge Base Triples"
}
| null | null | null | null | true | null |
5439
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we present an efficient computational framework with the\npurpose of generating weighted pseudo-measurements to improve the quality of\nDistribution System State Estimation (DSSE) and provide observability with\nAdvanced Metering Infrastructure (AMI) against unobservable customers and\nmissing data. The proposed technique is based on a game-theoretic expansion of\nRelevance Vector Machines (RVM). This platform is able to estimate the customer\npower consumption data and quantify its uncertainty while reducing the\nprohibitive computational burden of model training for large AMI datasets. To\nachieve this objective, the large training set is decomposed and distributed\namong multiple parallel learning entities. The resulting estimations from the\nparallel RVMs are then combined using a game-theoretic model based on the idea\nof repeated games with vector payoff. It is observed that through this approach\nand by exploiting the seasonal changes in customers' behavior the accuracy of\npseudo-measurements can be considerably improved, while introducing robustness\nagainst bad training data samples. The proposed pseudo-measurement generation\nmodel is integrated into a DSSE using a closed-loop information system, which\ntakes advantage of a Branch Current State Estimator (BCSE) data to further\nimprove the performance of the designed machine learning framework. This method\nhas been tested on a practical distribution feeder model with smart meter data\nfor verification.\n",
"title": "A Game-Theoretic Data-Driven Approach for Pseudo-Measurement Generation in Distribution System State Estimation"
}
| null | null | null | null | true | null |
5440
| null |
Default
| null | null |
null |
{
"abstract": " As a popular tool for producing meaningful and interpretable models,\nlarge-scale sparse learning works efficiently when the underlying structures\nare indeed or close to sparse. However, naively applying the existing\nregularization methods can result in misleading outcomes due to model\nmisspecification. In particular, the direct sparsity assumption on coefficient\nvectors has been questioned in real applications. Therefore, we consider\nnonsparse learning with the conditional sparsity structure that the coefficient\nvector becomes sparse after taking out the impacts of certain unobservable\nlatent variables. A new methodology of nonsparse learning with latent variables\n(NSL) is proposed to simultaneously recover the significant observable\npredictors and latent factors as well as their effects. We explore a common\nlatent family incorporating population principal components and derive the\nconvergence rates of both sample principal components and their score vectors\nthat hold for a wide class of distributions. With the properly estimated latent\nvariables, properties including model selection consistency and oracle\ninequalities under various prediction and estimation losses are established for\nthe proposed methodology. Our new methodology and results are evidenced by\nsimulation and real data examples.\n",
"title": "Nonsparse learning with latent variables"
}
| null | null | null | null | true | null |
5441
| null |
Default
| null | null |
null |
{
"abstract": " Many problems in industry --- and in the social, natural, information, and\nmedical sciences --- involve discrete data and benefit from approaches from\nsubjects such as network science, information theory, optimization,\nprobability, and statistics. The study of networks is concerned explicitly with\nconnectivity between different entities, and it has become very prominent in\nindustrial settings, an importance that has intensified amidst the modern data\ndeluge. In this commentary, we discuss the role of network analysis in\nindustrial and applied mathematics, and we give several examples of network\nscience in industry. We focus, in particular, on discussing a\nphysical-applied-mathematics approach to the study of networks. We also discuss\nseveral of our own collaborations with industry on projects in network\nanalysis.\n",
"title": "The Role of Network Analysis in Industrial and Applied Mathematics"
}
| null | null | null | null | true | null |
5442
| null |
Default
| null | null |
null |
{
"abstract": " Double-fetch bugs are a special type of race condition, where an unprivileged\nexecution thread is able to change a memory location between the time-of-check\nand time-of-use of a privileged execution thread. If an unprivileged attacker\nchanges the value at the right time, the privileged operation becomes\ninconsistent, leading to a change in control flow, and thus an escalation of\nprivileges for the attacker. More severely, such double-fetch bugs can be\nintroduced by the compiler, entirely invisible on the source-code level.\nWe propose novel techniques to efficiently detect, exploit, and eliminate\ndouble-fetch bugs. We demonstrate the first combination of state-of-the-art\ncache attacks with kernel-fuzzing techniques to allow fully automated\nidentification of double fetches. We demonstrate the first fully automated\nreliable detection and exploitation of double-fetch bugs, making manual\nanalysis as in previous work superfluous. We show that cache-based triggers\noutperform state-of-the-art exploitation techniques significantly, leading to\nan exploitation success rate of up to 97%. Our modified fuzzer automatically\ndetects double fetches and automatically narrows down this candidate set for\ndouble-fetch bugs to the exploitable ones. We present the first generic\ntechnique based on hardware transactional memory, to eliminate double-fetch\nbugs in a fully automated and transparent manner. We extend defensive\nprogramming techniques by retrofitting arbitrary code with automated\ndouble-fetch prevention, both in trusted execution environments as well as in\nsyscalls, with a performance overhead below 1%.\n",
"title": "Automated Detection, Exploitation, and Elimination of Double-Fetch Bugs using Modern CPU Features"
}
| null | null | null | null | true | null |
5443
| null |
Default
| null | null |
null |
{
"abstract": " We analytically study the spontaneous emission of a single optical dipole\nemitter in the vicinity of a plasmonic nanoshell, based on the Lorenz-Mie\ntheory. We show that the fluorescence enhancement due to the coupling between\noptical emitter and sphere can be tuned by the aspect ratio of the core-shell\nnanosphere and by the distance between the quantum emitter and its surface. In\nparticular, we demonstrate that both the enhancement and quenching of the\nfluorescence intensity are associated with plasmonic Fano resonances induced by\nnear- and far-field interactions. These Fano resonances have asymmetry\nparameters whose signs depend on the orientation of the dipole with respect to\nthe spherical nanoshell. We also show that if the atomic dipole is oriented\ntangentially to the nanoshell, the interaction exhibits saddle points in the\nnear-field energy flow. This results in a Lorentzian fluorescence enhancement\nresponse in the near field and a Fano line-shape in the far field. The\nsignatures of this interaction may have interesting applications for sensing\nthe presence and the orientation of optical emitters in close proximity to\nplasmonic nanoshells.\n",
"title": "Fano resonances and fluorescence enhancement of a dipole emitter near a plasmonic nanoshell"
}
| null | null | null | null | true | null |
5444
| null |
Default
| null | null |
null |
{
"abstract": " Any oriented Riemannian manifold with a Spin-structure defines a spectral\ntriple, so the spectral triple can be regarded as a noncommutative\nSpin-manifold. Otherwise for any unoriented Riemannian manifold there is the\ntwo-fold covering by oriented Riemannian manifold. Moreover there are\nnoncommutative generalizations of finite-fold coverings. This circumstances\nyield a notion of unoriented spectral triple which is covered by oriented one.\n",
"title": "Unoriented Spectral Triples"
}
| null | null | null | null | true | null |
5445
| null |
Default
| null | null |
null |
{
"abstract": " Surrogate models provide a low computational cost alternative to evaluating\nexpensive functions. The construction of accurate surrogate models with large\nnumbers of independent variables is currently prohibitive because it requires a\nlarge number of function evaluations. Gradient-enhanced kriging has the\npotential to reduce the number of function evaluations for the desired accuracy\nwhen efficient gradient computation, such as an adjoint method, is available.\nHowever, current gradient-enhanced kriging methods do not scale well with the\nnumber of sampling points due to the rapid growth in the size of the\ncorrelation matrix where new information is added for each sampling point in\neach direction of the design space. They do not scale well with the number of\nindependent variables either due to the increase in the number of\nhyperparameters that needs to be estimated. To address this issue, we develop a\nnew gradient-enhanced surrogate model approach that drastically reduced the\nnumber of hyperparameters through the use of the partial-least squares method\nthat maintains accuracy. In addition, this method is able to control the size\nof the correlation matrix by adding only relevant points defined through the\ninformation provided by the partial-least squares method. To validate our\nmethod, we compare the global accuracy of the proposed method with conventional\nkriging surrogate models on two analytic functions with up to 100 dimensions,\nas well as engineering problems of varied complexity with up to 15 dimensions.\nWe show that the proposed method requires fewer sampling points than\nconventional methods to obtain the desired accuracy, or provides more accuracy\nfor a fixed budget of sampling points. In some cases, we get over 3 times more\naccurate models than a bench of surrogate models from the literature, and also\nover 3200 times faster than standard gradient-enhanced kriging models.\n",
"title": "Gradient-enhanced kriging for high-dimensional problems"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5446
| null |
Validated
| null | null |
null |
{
"abstract": " Recent terrorist attacks carried out on behalf of ISIS on American and\nEuropean soil by lone wolf attackers or sleeper cells remind us of the\nimportance of understanding the dynamics of radicalization mediated by social\nmedia communication channels. In this paper, we shed light on the social media\nactivity of a group of twenty-five thousand users whose association with ISIS\nonline radical propaganda has been manually verified. By using a computational\ntool known as dynamical activity-connectivity maps, based on network and\ntemporal activity patterns, we investigate the dynamics of social influence\nwithin ISIS supporters. We finally quantify the effectiveness of ISIS\npropaganda by determining the adoption of extremist content in the general\npopulation and draw a parallel between radical propaganda and epidemics\nspreading, highlighting that information broadcasters and influential ISIS\nsupporters generate highly-infectious cascades of information contagion. Our\nfindings will help generate effective countermeasures to combat the group and\nother forms of online extremism.\n",
"title": "Contagion dynamics of extremist propaganda in social networks"
}
| null | null | null | null | true | null |
5447
| null |
Default
| null | null |
null |
{
"abstract": " Inference in hidden Markov model has been challenging in terms of scalability\ndue to dependencies in the observation data. In this paper, we utilize the\ninherent memory decay in hidden Markov models, such that the forward and\nbackward probabilities can be carried out with subsequences, enabling efficient\ninference over long sequences of observations. We formulate this forward\nfiltering process in the setting of the random dynamical system and there exist\nLyapunov exponents in the i.i.d random matrices production. And the rate of the\nmemory decay is known as $\\lambda_2-\\lambda_1$, the gap of the top two Lyapunov\nexponents almost surely. An efficient and accurate algorithm is proposed to\nnumerically estimate the gap after the soft-max parametrization. The length of\nsubsequences $B$ given the controlled error $\\epsilon$ is\n$B=\\log(\\epsilon)/(\\lambda_2-\\lambda_1)$. We theoretically prove the validity\nof the algorithm and demonstrate the effectiveness with numerical examples. The\nmethod developed here can be applied to widely used algorithms, such as\nmini-batch stochastic gradient method. Moreover, the continuity of Lyapunov\nspectrum ensures the estimated $B$ could be reused for the nearby parameter\nduring the inference.\n",
"title": "Estimate exponential memory decay in Hidden Markov Model and its applications"
}
| null | null | null | null | true | null |
5448
| null |
Default
| null | null |
null |
{
"abstract": " Kinetic Inductance Detectors (KIDs) have become an attractive alternative to\ntraditional bolometers in the sub-mm and mm observing community due to their\ninnate frequency multiplexing capabilities and simple lithographic processes.\nThese advantages make KIDs a viable option for the $O(500,000)$ detectors\nneeded for the upcoming Cosmic Microwave Background - Stage 4 (CMB-S4)\nexperiment. We have fabricated antenna-coupled MKID array in the 150GHz band\noptimized for CMB detection. Our design uses a twin slot antenna coupled to\ninverted microstrip made from a superconducting Nb/Al bilayer and SiN$_x$,\nwhich is then coupled to an Al KID grown on high resistivity Si. We present the\nfabrication process and measurements of SiN$_x$ microstrip resonators.\n",
"title": "Fabrication of antenna-coupled KID array for Cosmic Microwave Background detection"
}
| null | null |
[
"Physics"
] | null | true | null |
5449
| null |
Validated
| null | null |
null |
{
"abstract": " Penalized regression models such as the lasso have been extensively applied\nto analyzing high-dimensional data sets. However, due to memory limitations,\nexisting R packages like glmnet and ncvreg are not capable of fitting\nlasso-type models for ultrahigh-dimensional, multi-gigabyte data sets that are\nincreasingly seen in many areas such as genetics, genomics, biomedical imaging,\nand high-frequency finance. In this research, we implement an R package called\nbiglasso that tackles this challenge. biglasso utilizes memory-mapped files to\nstore the massive data on the disk, only reading data into memory when\nnecessary during model fitting, and is thus able to handle out-of-core\ncomputation seamlessly. Moreover, it's equipped with newly proposed, more\nefficient feature screening rules, which substantially accelerate the\ncomputation. Benchmarking experiments show that our biglasso package, as\ncompared to existing popular ones like glmnet, is much more memory- and\ncomputation-efficient. We further analyze a 31 GB real data set on a laptop\nwith only 16 GB RAM to demonstrate the out-of-core computation capability of\nbiglasso in analyzing massive data sets that cannot be accommodated by existing\nR packages.\n",
"title": "The biglasso Package: A Memory- and Computation-Efficient Solver for Lasso Model Fitting with Big Data in R"
}
| null | null | null | null | true | null |
5450
| null |
Default
| null | null |
null |
{
"abstract": " Many optimization algorithms converge to stationary points. When the\nunderlying problem is nonconvex, they may get trapped at local minimizers and\noccasionally stagnate near saddle points. We propose the Run-and-Inspect\nMethod, which adds an \"inspect\" phase to existing algorithms that helps escape\nfrom non-global stationary points. The inspection samples a set of points in a\nradius $R$ around the current point. When a sample point yields a sufficient\ndecrease in the objective, we move there and resume an existing algorithm. If\nno sufficient decrease is found, the current point is called an approximate\n$R$-local minimizer. We show that an $R$-local minimizer is globally optimal,\nup to a specific error depending on $R$, if the objective function can be\nimplicitly decomposed into a smooth convex function plus a restricted function\nthat is possibly nonconvex, nonsmooth. For high-dimensional problems, we\nintroduce blockwise inspections to overcome the curse of dimensionality while\nstill maintaining optimality bounds up to a factor equal to the number of\nblocks. Our method performs well on a set of artificial and realistic nonconvex\nproblems by coupling with gradient descent, coordinate descent, EM, and\nprox-linear algorithms.\n",
"title": "Run-and-Inspect Method for Nonconvex Optimization and Global Optimality Bounds for R-Local Minimizers"
}
| null | null | null | null | true | null |
5451
| null |
Default
| null | null |
null |
{
"abstract": " We propose a novel hierarchical generative model with a simple Markovian\nstructure and a corresponding inference model. Both the generative and\ninference model are trained using the adversarial learning paradigm. We\ndemonstrate that the hierarchical structure supports the learning of\nprogressively more abstract representations as well as providing semantically\nmeaningful reconstructions with different levels of fidelity. Furthermore, we\nshow that minimizing the Jensen-Shanon divergence between the generative and\ninference network is enough to minimize the reconstruction error. The resulting\nsemantically meaningful hierarchical latent structure discovery is exemplified\non the CelebA dataset. There, we show that the features learned by our model in\nan unsupervised way outperform the best handcrafted features. Furthermore, the\nextracted features remain competitive when compared to several recent deep\nsupervised approaches on an attribute prediction task on CelebA. Finally, we\nleverage the model's inference network to achieve state-of-the-art performance\non a semi-supervised variant of the MNIST digit classification task.\n",
"title": "Hierarchical Adversarially Learned Inference"
}
| null | null | null | null | true | null |
5452
| null |
Default
| null | null |
null |
{
"abstract": " We report the results of the implementation of a quantum key distribution\n(QKD) network using standard fibre communication lines in Moscow. The developed\nQKD network is based on the paradigm of trusted repeaters and allows a common\nsecret key to be generated between users via an intermediate trusted node. The\nmain feature of the network is the integration of the setups using two types of\nencoding, i.e. polarisation encoding and phase encoding. One of the possible\napplications of the developed QKD network is the continuous key renewal in\nexisting symmetric encryption devices with a key refresh time of up to 14 s.\n",
"title": "Demonstration of a quantum key distribution network in urban fibre-optic communication lines"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5453
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we introduce ZhuSuan, a python probabilistic programming\nlibrary for Bayesian deep learning, which conjoins the complimentary advantages\nof Bayesian methods and deep learning. ZhuSuan is built upon Tensorflow. Unlike\nexisting deep learning libraries, which are mainly designed for deterministic\nneural networks and supervised tasks, ZhuSuan is featured for its deep root\ninto Bayesian inference, thus supporting various kinds of probabilistic models,\nincluding both the traditional hierarchical Bayesian models and recent deep\ngenerative models. We use running examples to illustrate the probabilistic\nprogramming on ZhuSuan, including Bayesian logistic regression, variational\nauto-encoders, deep sigmoid belief networks and Bayesian recurrent neural\nnetworks.\n",
"title": "ZhuSuan: A Library for Bayesian Deep Learning"
}
| null | null | null | null | true | null |
5454
| null |
Default
| null | null |
null |
{
"abstract": " Current action recognition methods heavily rely on trimmed videos for model\ntraining. However, it is expensive and time-consuming to acquire a large-scale\ntrimmed video dataset. This paper presents a new weakly supervised\narchitecture, called UntrimmedNet, which is able to directly learn action\nrecognition models from untrimmed videos without the requirement of temporal\nannotations of action instances. Our UntrimmedNet couples two important\ncomponents, the classification module and the selection module, to learn the\naction models and reason about the temporal duration of action instances,\nrespectively. These two components are implemented with feed-forward networks,\nand UntrimmedNet is therefore an end-to-end trainable architecture. We exploit\nthe learned models for action recognition (WSR) and detection (WSD) on the\nuntrimmed video datasets of THUMOS14 and ActivityNet. Although our UntrimmedNet\nonly employs weak supervision, our method achieves performance superior or\ncomparable to that of those strongly supervised approaches on these two\ndatasets.\n",
"title": "UntrimmedNets for Weakly Supervised Action Recognition and Detection"
}
| null | null | null | null | true | null |
5455
| null |
Default
| null | null |
null |
{
"abstract": " We study the likelihood which relative minima of random polynomial potentials\nsupport the slow-roll conditions for inflation. Consistent with\nrenormalizability and boundedness, the coefficients that appear in the\npotential are chosen to be order one with respect to the energy scale at which\ninflation transpires. Investigation of the single field case illustrates a\nwindow in which the potentials satisfy the slow-roll conditions. When there are\ntwo scalar fields, we find that the probability depends on the choice of\ndistribution for the coefficients. A uniform distribution yields a $0.05\\%$\nprobability of finding a suitable minimum in the random potential whereas a\nmaximum entropy distribution yields a $0.1\\%$ probability.\n",
"title": "Flatness of Minima in Random Inflationary Landscapes"
}
| null | null |
[
"Physics"
] | null | true | null |
5456
| null |
Validated
| null | null |
null |
{
"abstract": " A latent-variable model is introduced for text matching, inferring sentence\nrepresentations by jointly optimizing generative and discriminative objectives.\nTo alleviate typical optimization challenges in latent-variable models for\ntext, we employ deconvolutional networks as the sequence decoder (generator),\nproviding learned latent codes with more semantic information and better\ngeneralization. Our model, trained in an unsupervised manner, yields stronger\nempirical predictive performance than a decoder based on Long Short-Term Memory\n(LSTM), with less parameters and considerably faster training. Further, we\napply it to text sequence-matching problems. The proposed model significantly\noutperforms several strong sentence-encoding baselines, especially in the\nsemi-supervised setting.\n",
"title": "Deconvolutional Latent-Variable Model for Text Sequence Matching"
}
| null | null | null | null | true | null |
5457
| null |
Default
| null | null |
null |
{
"abstract": " We develop a magneto-elastic (ME) coupling model for the interaction between\nthe vortex lattice and crystal elasticity. The theory extends the Kogan-Clem's\nanisotropic Ginzburg-Landau (GL) model to include the elasticity effect. The\nanisotropies in superconductivity and elasticity are simultaneously considered\nin the GL theory frame. We compare the field and angular dependences of the\nmagnetization to the relevant experiments. The contribution of the ME\ninteraction to the magnetization is comparable to the vortex-lattice energy, in\nmaterials with relatively strong pressure dependence of the critical\ntemperature. The theory can give the appropriate slope of the field dependence\nof magnetization near the upper critical field. The magnetization ratio along\ndifferent vortex frame axes is independent with the ME interaction. The\ntheoretical description of the magnetization ratio is applicable only if the\napplied field moderately close to the upper critical field.\n",
"title": "Magneto-elastic coupling model of deformable anisotropic superconductors"
}
| null | null |
[
"Physics"
] | null | true | null |
5458
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of high-dimensional misspecified phase retrieval.\nThis is where we have an $s$-sparse signal vector $\\mathbf{x}_*$ in\n$\\mathbb{R}^n$, which we wish to recover using sampling vectors\n$\\textbf{a}_1,\\ldots,\\textbf{a}_m$, and measurements $y_1,\\ldots,y_m$, which\nare related by the equation $f(\\left<\\textbf{a}_i,\\textbf{x}_*\\right>) = y_i$.\nHere, $f$ is an unknown link function satisfying a positive correlation with\nthe quadratic function. This problem was analyzed in a recent paper by Neykov,\nWang and Liu, who provided recovery guarantees for a two-stage algorithm with\nsample complexity $m = O(s^2\\log n)$. In this paper, we show that the first\nstage of their algorithm suffices for signal recovery with the same sample\ncomplexity, and extend the analysis to non-Gaussian measurements. Furthermore,\nwe show how the algorithm can be generalized to recover a signal vector\n$\\textbf{x}_*$ efficiently given geometric prior information other than\nsparsity.\n",
"title": "Sparse Phase Retrieval via Sparse PCA Despite Model Misspecification: A Simplified and Extended Analysis"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
5459
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of minimizing a convex objective function $F$ when\none can only evaluate its noisy approximation $\\hat{F}$. Unless one assumes\nsome structure on the noise, $\\hat{F}$ may be an arbitrary nonconvex function,\nmaking the task of minimizing $F$ intractable. To overcome this, prior work has\noften focused on the case when $F(x)-\\hat{F}(x)$ is uniformly-bounded. In this\npaper we study the more general case when the noise has magnitude $\\alpha F(x)\n+ \\beta$ for some $\\alpha, \\beta > 0$, and present a polynomial time algorithm\nthat finds an approximate minimizer of $F$ for this noise model. Previously,\nMarkov chains, such as the stochastic gradient Langevin dynamics, have been\nused to arrive at approximate solutions to these optimization problems.\nHowever, for the noise model considered in this paper, no single temperature\nallows such a Markov chain to both mix quickly and concentrate near the global\nminimizer. We bypass this by combining \"simulated annealing\" with the\nstochastic gradient Langevin dynamics, and gradually decreasing the temperature\nof the chain in order to approach the global minimizer. As a corollary one can\napproximately minimize a nonconvex function that is close to a convex function;\nhowever, the closeness can deteriorate as one moves away from the optimum.\n",
"title": "Convex Optimization with Unbounded Nonconvex Oracles using Simulated Annealing"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5460
| null |
Validated
| null | null |
null |
{
"abstract": " We describe the purification of xenon from traces of the radioactive noble\ngas radon using a cryogenic distillation column. The distillation column is\nintegrated into the gas purification loop of the XENON100 detector for online\nradon removal. This enabled us to significantly reduce the constant $^{222}$Rn\nbackground originating from radon emanation. After inserting an auxiliary\n$^{222}$Rn emanation source in the gas loop, we determined a radon reduction\nfactor of R > 27 (95% C.L.) for the distillation column by monitoring the\n$^{222}$Rn activity concentration inside the XENON100 detector.\n",
"title": "Online $^{222}$Rn removal by cryogenic distillation in the XENON100 experiment"
}
| null | null | null | null | true | null |
5461
| null |
Default
| null | null |
null |
{
"abstract": " The Kite graph $Kite_{p}^{q}$ is obtained by appending the complete graph\n$K_{p}$ to a pendant vertex of the path $P_{q}$. In this paper, the kite graph\nis proved to be determined by the spectrum of its adjacency matrix.\n",
"title": "The Kite Graph is Determined by Its Adjacency Spectrum"
}
| null | null | null | null | true | null |
5462
| null |
Default
| null | null |
null |
{
"abstract": " We consider the problem of graph matchability in non-identically distributed\nnetworks. In a general class of edge-independent networks, we demonstrate that\ngraph matchability is almost surely lost when matching the networks directly,\nand is almost perfectly recovered when first centering the networks using\nUniversal Singular Value Thresholding before matching. These theoretical\nresults are then demonstrated in both real and synthetic simulation settings.\nWe also recover analogous core-matchability results in a very general core-junk\nnetwork model, wherein some vertices do not correspond between the graph pair.\n",
"title": "Matchability of heterogeneous networks pairs"
}
| null | null | null | null | true | null |
5463
| null |
Default
| null | null |
null |
{
"abstract": " University curriculum, both on a campus level and on a per-major level, are\naffected in a complex way by many decisions of many administrators and faculty\nover time. As universities across the United States share an urgency to\nsignificantly improve student success and success retention, there is a\npressing need to better understand how the student population is progressing\nthrough the curriculum, and how to provide better supporting infrastructure and\nrefine the curriculum for the purpose of improving student outcomes. This work\nhas developed a visual knowledge discovery system called eCamp that pulls\ntogether a variety of populationscale data products, including student grades,\nmajor descriptions, and graduation records. These datasets were previously\ndisconnected and only available to and maintained by independent campus\noffices. The framework models and analyzes the multi-level relationships hidden\nwithin these data products, and visualizes the student flow patterns through\nindividual majors as well as through a hierarchy of majors. These results\nsupport analytical tasks involving student outcomes, student retention, and\ncurriculum design. It is shown how eCamp has revealed student progression\ninformation that was previously unavailable.\n",
"title": "Visual Progression Analysis of Student Records Data"
}
| null | null | null | null | true | null |
5464
| null |
Default
| null | null |
null |
{
"abstract": " While linear mixed model (LMM) has shown a competitive performance in\ncorrecting spurious associations raised by population stratification, family\nstructures, and cryptic relatedness, more challenges are still to be addressed\nregarding the complex structure of genotypic and phenotypic data. For example,\ngeneticists have discovered that some clusters of phenotypes are more\nco-expressed than others. Hence, a joint analysis that can utilize such\nrelatedness information in a heterogeneous data set is crucial for genetic\nmodeling.\nWe proposed the sparse graph-structured linear mixed model (sGLMM) that can\nincorporate the relatedness information from traits in a dataset with\nconfounding correction. Our method is capable of uncovering the genetic\nassociations of a large number of phenotypes together while considering the\nrelatedness of these phenotypes. Through extensive simulation experiments, we\nshow that the proposed model outperforms other existing approaches and can\nmodel correlation from both population structure and shared signals. Further,\nwe validate the effectiveness of sGLMM in the real-world genomic dataset on two\ndifferent species from plants and humans. In Arabidopsis thaliana data, sGLMM\nbehaves better than all other baseline models for 63.4% traits. We also discuss\nthe potential causal genetic variation of Human Alzheimer's disease discovered\nby our model and justify some of the most important genetic loci.\n",
"title": "A Sparse Graph-Structured Lasso Mixed Model for Genetic Association with Confounding Correction"
}
| null | null | null | null | true | null |
5465
| null |
Default
| null | null |
null |
{
"abstract": " Diffusions and related random walk procedures are of central importance in\nmany areas of machine learning, data analysis, and applied mathematics. Because\nthey spread mass agnostically at each step in an iterative manner, they can\nsometimes spread mass \"too aggressively,\" thereby failing to find the \"right\"\nclusters. We introduce a novel Capacity Releasing Diffusion (CRD) Process,\nwhich is both faster and stays more local than the classical spectral diffusion\nprocess. As an application, we use our CRD Process to develop an improved local\nalgorithm for graph clustering. Our local graph clustering method can find\nlocal clusters in a model of clustering where one begins the CRD Process in a\ncluster whose vertices are connected better internally than externally by an\n$O(\\log^2 n)$ factor, where $n$ is the number of nodes in the cluster. Thus,\nour CRD Process is the first local graph clustering algorithm that is not\nsubject to the well-known quadratic Cheeger barrier. Our result requires a\ncertain smoothness condition, which we expect to be an artifact of our\nanalysis. Our empirical evaluation demonstrates improved results, in particular\nfor realistic social graphs where there are moderately good---but not very\ngood---clusters.\n",
"title": "Capacity Releasing Diffusion for Speed and Locality"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5466
| null |
Validated
| null | null |
null |
{
"abstract": " Continuing the series of works following Weyl's one-term asymptotic formula\nfor the counting function $N(\\lambda)=\\sum_{n=1}^\\infty(\\lambda_n{-}\\lambda)_-$\nof the eigenvalues of the Dirichlet Laplacian and the much later found two-term\nexpansion on domains with highly regular boundary by Ivrii and Melrose, we\nprove a two-term asymptotic expansion of the $N$-th Cesàro mean of the\neigenvalues of $\\sqrt{-\\Delta + m^2} - m$ for $m>0$ with Dirichlet boundary\ncondition on a bounded domain $\\Omega\\subset\\mathbb R^d$ for $d\\geq 2$,\nextending a result by Frank and Geisinger for the fractional Laplacian ($m=0$)\nand improving upon the small-time asymptotics of the heat trace $Z(t) =\n\\sum_{n=1}^\\infty e^{-t \\lambda_n}$ by Bañuelos et al. and Park and Song.\n",
"title": "Two-term spectral asymptotics for the Dirichlet pseudo-relativistic kinetic energy operator on a bounded domain"
}
| null | null | null | null | true | null |
5467
| null |
Default
| null | null |
null |
{
"abstract": " Large sample size equivalence between the celebrated {\\it approximated}\nGood-Turing estimator of the probability to discover a species already observed\na certain number of times (Good, 1953) and the modern Bayesian nonparametric\ncounterpart has been recently established by virtue of a particular smoothing\nrule based on the two-parameter Poisson-Dirichlet model. Here we improve on\nthis result showing that, for any finite sample size, when the population\nfrequencies are assumed to be selected from a superpopulation with\ntwo-parameter Poisson-Dirichlet distribution, then Bayesian nonparametric\nestimation of the discovery probabilities corresponds to Good-Turing {\\it\nexact} estimation. Moreover under general superpopulation hypothesis the\nGood-Turing solution admits an interpretation as a modern Bayesian\nnonparametric estimator under partial information.\n",
"title": "Exact Good-Turing characterization of the two-parameter Poisson-Dirichlet superpopulation model"
}
| null | null | null | null | true | null |
5468
| null |
Default
| null | null |
null |
{
"abstract": " We prove an exponential deviation inequality for the convex hull of a finite\nsample of i.i.d. random points with a density supported on an arbitrary convex\nbody in $\\R^d$, $d\\geq 2$. When the density is uniform, our result yields rate\noptimal upper bounds for all the moments of the missing volume of the convex\nhull, uniformly over all convex bodies of $\\R^d$: We make no restrictions on\ntheir volume, location in the space or smoothness of their boundary. After\nextending an identity due to Efron, we also prove upper bounds for the moments\nof the number of vertices of the random polytope. Surprisingly, these bounds do\nnot depend on the underlying density and we prove that the growth rates that we\nobtain are tight in a certain sense.\n",
"title": "Uniform deviation and moment inequalities for random polytopes with general densities in arbitrary convex bodies"
}
| null | null | null | null | true | null |
5469
| null |
Default
| null | null |
null |
{
"abstract": " This two-part paper addresses the design of retail electricity tariffs for\ndistribution systems with distributed energy resources (DERs). Part I presents\na framework to optimize an ex-ante two-part tariff for a regulated monopolistic\nretailer who faces stochastic wholesale prices on the one hand and stochastic\ndemand on the other. In Part II, the integration of DERs is addressed by\nanalyzing their endogenous effect on the optimal two-part tariff and the\ninduced welfare gains. Two DER integration models are considered: (i) a\ndecentralized model involving behind-the-meter DERs in a net metering setting,\nand (ii) a centralized model involving DERs integrated by the retailer. It is\nshown that DERs integrated under either model can achieve the same social\nwelfare and the net-metering tariff structure is optimal. The retail prices\nunder both integration models are equal and reflect the expected wholesale\nprices. The connection charges differ and are affected by the retailer's fixed\ncosts as well as the statistical dependencies between wholesale prices and\nbehind-the-meter DERs. In particular, the connection charge of the\ndecentralized model is generally higher than that of the centralized model. An\nempirical analysis is presented to estimate the impact of DER on welfare\ndistribution and inter-class cross-subsidies using real price and demand data\nand simulations. The analysis shows that, with the prevailing retail pricing\nand net-metering, consumer welfare decreases with the level of DER integration.\nIssues of cross-subsidy and practical drawbacks of decentralized integration\nare also discussed.\n",
"title": "On the Efficiency of Connection Charges---Part II: Integration of Distributed Energy Resources"
}
| null | null | null | null | true | null |
5470
| null |
Default
| null | null |
null |
{
"abstract": " The standard LSTM recurrent neural networks while very powerful in long-range\ndependency sequence applications have highly complex structure and relatively\nlarge (adaptive) parameters. In this work, we present empirical comparison\nbetween the standard LSTM recurrent neural network architecture and three new\nparameter-reduced variants obtained by eliminating combinations of the input\nsignal, bias, and hidden unit signals from individual gating signals. The\nexperiments on two sequence datasets show that the three new variants, called\nsimply as LSTM1, LSTM2, and LSTM3, can achieve comparable performance to the\nstandard LSTM model with less (adaptive) parameters.\n",
"title": "Simplified Gating in Long Short-term Memory (LSTM) Recurrent Neural Networks"
}
| null | null | null | null | true | null |
5471
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we apply an extended Landau-Lifschitz equation, as introduced\nby Baňas et al. for the simulation of heat-assisted magnetic recording.\nThis equation has similarities with the Landau-Lifshitz-Bloch equation. The\nBaňas equation is supposed to be used in a continuum setting with sub-grain\ndiscretization by the finite-element method. Thus, local geometric features and\nnonuniform magnetic states during switching are taken into account. We\nimplement the Baňas model and test its capability for predicting the\nrecording performance in a realistic recording scenario. By performing\nrecording simulations on 100 media slabs with randomized granular structure and\nconsecutive read back calculation, the write position shift and transition\njitter for bit lengths of 10nm, 12nm, and 20nm are calculated.\n",
"title": "Transition Jitter in Heat Assisted Magnetic Recording by Micromagnetic Simulation"
}
| null | null |
[
"Physics"
] | null | true | null |
5472
| null |
Validated
| null | null |
null |
{
"abstract": " Response delay is an inherent and essential part of human actions. In the\ncontext of human balance control, the response delay is traditionally modeled\nusing the formalism of delay-differential equations, which adopts the\napproximation of fixed delay. However, experimental studies revealing\nsubstantial variability, adaptive anticipation, and non-stationary dynamics of\nresponse delay provide evidence against this approximation. In this paper, we\ncall for development of principally new mathematical formalism describing human\nresponse delay. To support this, we present the experimental data from a simple\nvirtual stick balancing task. Our results demonstrate that human response delay\nis a widely distributed random variable with complex properties, which can\nexhibit oscillatory and adaptive dynamics characterized by long-range\ncorrelations. Given this, we argue that the fixed-delay approximation ignores\nessential properties of human response, and conclude with possible directions\nfor future developments of new mathematical notions describing human control.\n",
"title": "Complexity of human response delay in intermittent control: The case of virtual stick balancing"
}
| null | null | null | null | true | null |
5473
| null |
Default
| null | null |
null |
{
"abstract": " This note contains some examples of hyperkähler varieties $X$ having a\ngroup $G$ of non-symplectic automorphisms, and such that the action of $G$ on\ncertain Chow groups of $X$ is as predicted by Bloch's conjecture. The examples\nrange in dimension from $6$ to $132$. For each example, the quotient $Y=X/G$ is\na Calabi-Yau variety which has interesting Chow-theoretic properties; in\nparticular, the variety $Y$ satisfies (part of) a strong version of the\nBeauville-Voisin conjecture.\n",
"title": "Algebraic cycles on some special hyperkähler varieties"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5474
| null |
Validated
| null | null |
null |
{
"abstract": " We introduce and analyze the following general concept of recurrence. Let $G$\nbe a group and let $X$ be a G-space with the action $G\\times X\\longrightarrow\nX$, $(g,x)\\longmapsto gx$. For a family $\\mathfrak{F}$ of subset of $X$ and\n$A\\in \\mathfrak{F}$, we denote $\\Delta_{\\mathfrak{F}}(A)=\\{g\\in G: gB\\subseteq\nA$ for some $B\\in \\mathfrak{F}, \\ B\\subseteq A\\}$, and say that a subset $R$ of\n$G$ is $\\mathfrak{F}$-recurrent if $R\\bigcap \\Delta_{\\mathfrak{F}}\n(A)\\neq\\emptyset$ for each $A\\in \\mathfrak{F}$.\n",
"title": "On recurrence in G-spaces"
}
| null | null | null | null | true | null |
5475
| null |
Default
| null | null |
null |
{
"abstract": " Here, we present a novel approach to solve the problem of reconstructing\nperceived stimuli from brain responses by combining probabilistic inference\nwith deep learning. Our approach first inverts the linear transformation from\nlatent features to brain responses with maximum a posteriori estimation and\nthen inverts the nonlinear transformation from perceived stimuli to latent\nfeatures with adversarial training of convolutional neural networks. We test\nour approach with a functional magnetic resonance imaging experiment and show\nthat it can generate state-of-the-art reconstructions of perceived faces from\nbrain activations.\n",
"title": "Deep adversarial neural decoding"
}
| null | null | null | null | true | null |
5476
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of guarding an orthogonal polyhedron having reflex edges\nin just two directions (as opposed to three) by placing guards on reflex edges\nonly.\nWe show that (r - g)/2 + 1 reflex edge guards are sufficient, where r is the\nnumber of reflex edges in a given polyhedron and g is its genus. This bound is\ntight for g=0. We thereby generalize a classic planar Art Gallery theorem of\nO'Rourke, which states that the same upper bound holds for vertex guards in an\northogonal polygon with r reflex vertices and g holes.\nThen we give a similar upper bound in terms of m, the total number of edges\nin the polyhedron. We prove that (m - 4)/8 + g reflex edge guards are\nsufficient, whereas the previous best known bound was 11m/72 + g/6 - 1 edge\nguards (not necessarily reflex).\nWe also discuss the setting in which guards are open (i.e., they are segments\nwithout the endpoints), proving that the same results hold even in this more\nchallenging case.\nFinally, we show how to compute guard locations in O(n log n) time.\n",
"title": "Optimally Guarding 2-Reflex Orthogonal Polyhedra by Reflex Edge Guards"
}
| null | null |
[
"Computer Science"
] | null | true | null |
5477
| null |
Validated
| null | null |
null |
{
"abstract": " For any positive integer $m$, the complete graph on $2^{2m}(2^m+2)$ vertices\nis decomposed into $2^m+1$ commuting strongly regular graphs, which give rise\nto a symmetric association scheme of class $2^{m+2}-2$. Furthermore, the\neigenmatrices of the symmetric association schemes are determined explicitly.\nAs an application, the eigenmatrix of the commutative strongly regular\ndecomposition obtained from the strongly regular graphs is derived.\n",
"title": "Strongly regular decompositions and symmetric association schemes of a power of two"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5478
| null |
Validated
| null | null |
null |
{
"abstract": " The control and sensing of large-scale systems results in combinatorial\nproblems not only for sensor and actuator placement but also for scheduling or\nobservability/controllability. Such combinatorial constraints in system design\nand implementation can be captured using a structure known as matroids. In\nparticular, the algebraic structure of matroids can be exploited to develop\nscalable algorithms for sensor and actuator selection, along with quantifiable\napproximation bounds. However, in large-scale systems, sensors and actuators\nmay fail or may be (cyber-)attacked. The objective of this paper is to focus on\nresilient matroid-constrained problems arising in control and sensing but in\nthe presence of sensor and actuator failures. In general, resilient\nmatroid-constrained problems are computationally hard. Contrary to the\nnon-resilient case (with no failures), even though they often involve objective\nfunctions that are monotone or submodular, no scalable approximation algorithms\nare known for their solution. In this paper, we provide the first algorithm,\nthat also has the following properties: First, it achieves system-wide\nresiliency, i.e., the algorithm is valid for any number of denial-of-service\nattacks or failures. Second, it is scalable, as our algorithm terminates with\nthe same running time as state-of-the-art algorithms for (non-resilient)\nmatroid-constrained optimization. Third, it provides provable approximation\nbounds on the system performance, since for monotone objective functions our\nalgorithm guarantees a solution close to the optimal. We quantify our\nalgorithm's approximation performance using a notion of curvature for monotone\n(not necessarily submodular) set functions. Finally, we support our theoretical\nanalyses with numerical experiments, by considering a control-aware sensor\nselection scenario, namely, sensing-constrained robot navigation.\n",
"title": "Resilient Non-Submodular Maximization over Matroid Constraints"
}
| null | null | null | null | true | null |
5479
| null |
Default
| null | null |
null |
{
"abstract": " Based on periodogram-ratios of two univariate time series at different\nfrequency points, two tests are proposed for comparing their spectra. One is an\nAnderson-Darling-like statistic for testing the equality of two time-invariant\nspectra. The other is the maximum of Anderson-Darling-like statistics for\ntesting the equality of two spectra no matter that they are time-invariant and\ntime-varying. Both of two tests are applicable for independent or dependent\ntime series. Several simulation examples show that the proposed statistics\noutperform those that are also based on periodogram-ratios but constructed by\nthe Pearson-like statistics.\n",
"title": "Tests for comparing time-invariant and time-varying spectra based on the Anderson-Darling statistic"
}
| null | null | null | null | true | null |
5480
| null |
Default
| null | null |
null |
{
"abstract": " In this article we consider conditions under which projection operators in\nmultiplicity free semi-simple tensor categories satisfy Temperley-Lieb like\nrelations. This is then used as a stepping stone to prove sufficient conditions\nfor obtaining a representation of the Birman-Murakami-Wenzl algebra from a\nbraided multiplicity free semi-simple tensor category. The results are found by\nutalising the data of the categories. There is considerable overlap with the\nresults found in arXiv:1607.08908, where proofs are shown by manipulating\ndiagrams.\n",
"title": "Temperley-Lieb and Birman-Murakami-Wenzl like relations from multiplicity free semi-simple tensor system"
}
| null | null | null | null | true | null |
5481
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we consider the divergence parabolic equation with bounded and\nmeasurable coefficients related to Hormander's vector fields and establish a\nNash type result, i.e., the local Holder regularity for weak solutions. After\nderiving the parabolic Sobolev inequality, (1,1) type Poincaré inequality of\nHormander's vector fields and a De Giorgi type Lemma, the Holder regularity\nof weak solutions to the equation is proved based on the estimates of\noscillations of solutions and the isomorphism between parabolic Campanato space\nand parabolic Holder space. As a consequence, we give the Harnack inequality\nof weak solutions by showing an extension property of positivity for functions\nin the De Giorgi class.\n",
"title": "A Nash Type result for Divergence Parabolic Equation related to Hormander's vector fields"
}
| null | null | null | null | true | null |
5482
| null |
Default
| null | null |
null |
{
"abstract": " We exhibit a Hamel basis for the concrete $*$-algebra $\\mathfrak{M}_o$\nassociated to monotone commutation relations realised on the monotone Fock\nspace, mainly composed by Wick ordered words of annihilators and creators. We\napply such a result to investigate spreadability and exchangeability of the\nstochastic processes arising from such commutation relations. In particular, we\nshow that spreadability comes from a monoidal action implementing a dissipative\ndynamics on the norm closure $C^*$-algebra $\\mathfrak{M} =\n\\overline{\\mathfrak{M}_o}$. Moreover, we determine the structure of spreadable\nand exchangeable monotone stochastic processes using their correspondence with\nsp\\-reading invariant and symmetric monotone states, respectively.\n",
"title": "Wick order, spreadability and exchangeability for monotone commutation relations"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5483
| null |
Validated
| null | null |
null |
{
"abstract": " The tasks of identifying separation structures and clusters in flow data are\nfundamental to flow visualization. Significant work has been devoted to these\ntasks in flow represented by vector fields, but there are unique challenges in\naddressing these tasks for time-varying particle data. The unstructured nature\nof particle data, nonuniform and sparse sampling, and the inability to access\narbitrary particles in space-time make it difficult to define separation and\nclustering for particle data. We observe that weaker notions of separation and\nclustering through continuous measures of these structures are meaningful when\ncoupled with user exploration. We achieve this goal by defining a measure of\nparticle similarity between pairs of particles. More specifically, separation\noccurs when spatially-localized particles are dissimilar, while clustering is\ncharacterized by sets of particles that are similar to one another. To be\nrobust to imperfections in sampling we use diffusion geometry to compute\nparticle similarity. Diffusion geometry is parameterized by a scale that allows\na user to explore separation and clustering in a continuous manner. We\nillustrate the benefits of our technique on a variety of 2D and 3D flow\ndatasets, from particles integrated in fluid simulations based on time-varying\nvector fields, to particle-based simulations in astrophysics.\n",
"title": "Visualizing Time-Varying Particle Flows with Diffusion Geometry"
}
| null | null | null | null | true | null |
5484
| null |
Default
| null | null |
null |
{
"abstract": " We present two simple ways of reducing the number of parameters and\naccelerating the training of large Long Short-Term Memory (LSTM) networks: the\nfirst one is \"matrix factorization by design\" of LSTM matrix into the product\nof two smaller matrices, and the second one is partitioning of LSTM matrix, its\ninputs and states into the independent groups. Both approaches allow us to\ntrain large LSTM networks significantly faster to the near state-of the art\nperplexity while using significantly less RNN parameters.\n",
"title": "Factorization tricks for LSTM networks"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
5485
| null |
Validated
| null | null |
null |
{
"abstract": " The study of mereology (parts and wholes) in the context of formal approaches\nto vagueness can be approached in a number of ways. In the context of rough\nsets, mereological concepts with a set-theoretic or valuation based ontology\nacquire complex and diverse behavior. In this research a general rough set\nframework called granular operator spaces is extended and the nature of\nparthood in it is explored from a minimally intrusive point of view. This is\nused to develop counting strategies that help in classifying the framework. The\ndeveloped methodologies would be useful for drawing involved conclusions about\nthe nature of data (and validity of assumptions about it) from antichains\nderived from context. The problem addressed is also about whether counting\nprocedures help in confirming that the approximations involved in formation of\ndata are indeed rough approximations?\n",
"title": "Pure Rough Mereology and Counting"
}
| null | null | null | null | true | null |
5486
| null |
Default
| null | null |
null |
{
"abstract": " We start from a variational model for nematic elastomers that involves two\nenergies: mechanical and nematic. The first one consists of a nonlinear elastic\nenergy which is influenced by the orientation of the molecules of the nematic\nelastomer. The nematic energy is an Oseen--Frank energy in the deformed\nconfiguration. The constraint of the positivity of the determinant of the\ndeformation gradient is imposed. The functionals are not assumed to have the\nusual polyconvexity or quasiconvexity assumptions to be lower semicontinuous.\nWe instead compute its relaxation, that is, the lower semicontinuous envelope,\nwhich turns out to be the quasiconvexification of the mechanical term plus the\ntangential quasiconvexification of the nematic term. The main assumptions are\nthat the quasiconvexification of the mechanical term is polyconvex and that the\ndeformation is in the Sobolev space $W^{1,p}$ (with $p>n-1$ and $n$ the\ndimension of the space) and does not present cavitation.\n",
"title": "Relaxation of nonlinear elastic energies involving deformed configuration and applications to nematic elastomers"
}
| null | null | null | null | true | null |
5487
| null |
Default
| null | null |
null |
{
"abstract": " Deep Neural Networks (DNNs) have revolutionized numerous applications, but\nthe demand for ever more performance remains unabated. Scaling DNN computations\nto larger clusters is generally done by distributing tasks in batch mode using\nmethods such as distributed synchronous SGD. Among the issues with this\napproach is that to make the distributed cluster work with high utilization,\nthe workload distributed to each node must be large, which implies nontrivial\ngrowth in the SGD mini-batch size.\nIn this paper, we propose a framework called FPDeep, which uses a hybrid of\nmodel and layer parallelism to configure distributed reconfigurable clusters to\ntrain DNNs. This approach has numerous benefits. First, the design does not\nsuffer from batch size growth. Second, novel workload and weight partitioning\nleads to balanced loads of both among nodes. And third, the entire system is a\nfine-grained pipeline. This leads to high parallelism and utilization and also\nminimizes the time features need to be cached while waiting for\nback-propagation. As a result, storage demand is reduced to the point where\nonly on-chip memory is used for the convolution layers. We evaluate FPDeep with\nthe Alexnet, VGG-16, and VGG-19 benchmarks. Experimental results show that\nFPDeep has good scalability to a large number of FPGAs, with the limiting\nfactor being the FPGA-to-FPGA bandwidth. With 6 transceivers per FPGA, FPDeep\nshows linearity up to 83 FPGAs. Energy efficiency is evaluated with respect to\nGOPs/J. FPDeep provides, on average, 6.36x higher energy efficiency than\ncomparable GPU servers.\n",
"title": "A Scalable Framework for Acceleration of CNN Training on Deeply-Pipelined FPGA Clusters with Weight and Workload Balancing"
}
| null | null | null | null | true | null |
5488
| null |
Default
| null | null |
null |
{
"abstract": " Recent advances in neural word embedding provide significant benefit to\nvarious information retrieval tasks. However as shown by recent studies,\nadapting the embedding models for the needs of IR tasks can bring considerable\nfurther improvements. The embedding models in general define the term\nrelatedness by exploiting the terms' co-occurrences in short-window contexts.\nAn alternative (and well-studied) approach in IR for related terms to a query\nis using local information i.e. a set of top-retrieved documents. In view of\nthese two methods of term relatedness, in this work, we report our study on\nincorporating the local information of the query in the word embeddings. One\nmain challenge in this direction is that the dense vectors of word embeddings\nand their estimation of term-to-term relatedness remain difficult to interpret\nand hard to analyze. As an alternative, explicit word representations propose\nvectors whose dimensions are easily interpretable, and recent methods show\ncompetitive performance to the dense vectors. We introduce a neural-based\nexplicit representation, rooted in the conceptual ideas of the word2vec\nSkip-Gram model. The method provides interpretable explicit vectors while\nkeeping the effectiveness of the Skip-Gram model. The evaluation of various\nexplicit representations on word association collections shows that the newly\nproposed method out- performs the state-of-the-art explicit representations\nwhen tasked with ranking highly similar terms. Based on the introduced ex-\nplicit representation, we discuss our approaches on integrating local documents\nin globally-trained embedding models and discuss the preliminary results.\n",
"title": "Toward Incorporation of Relevant Documents in word2vec"
}
| null | null | null | null | true | null |
5489
| null |
Default
| null | null |
null |
{
"abstract": " Iterative load balancing algorithms for indivisible tokens have been studied\nintensively in the past. Complementing previous worst-case analyses, we study\nan average-case scenario where the load inputs are drawn from a fixed\nprobability distribution. For cycles, tori, hypercubes and expanders, we obtain\nalmost matching upper and lower bounds on the discrepancy, the difference\nbetween the maximum and the minimum load. Our bounds hold for a variety of\nprobability distributions including the uniform and binomial distribution but\nalso distributions with unbounded range such as the Poisson and geometric\ndistribution. For graphs with slow convergence like cycles and tori, our\nresults demonstrate a substantial difference between the convergence in the\nworst- and average-case. An important ingredient in our analysis is new upper\nbound on the t-step transition probability of a general Markov chain, which is\nderived by invoking the evolving set process.\n",
"title": "Randomized Load Balancing on Networks with Stochastic Inputs"
}
| null | null | null | null | true | null |
5490
| null |
Default
| null | null |
null |
{
"abstract": " The Whitney immersion is a Lagrangian sphere inside the four-dimensional\nsymplectic vector space which has a single transverse double point of\nself-intersection index $+1.$ This Lagrangian also arises as the Weinstein\nskeleton of the complement of a binodal cubic curve inside the projective\nplane, and the latter Weinstein manifold is thus the `standard' neighbourhood\nof Lagrangian immersions of this type. We classify the Lagrangians inside such\na neighbourhood which are homologous to the Whitney immersion, and which either\nare embedded or immersed with a single double point; they are shown to be\nHamiltonian isotopic to either product tori, Chekanov tori, or rescalings of\nthe Whitney immersion.\n",
"title": "The classification of Lagrangians nearby the Whitney immersion"
}
| null | null | null | null | true | null |
5491
| null |
Default
| null | null |
null |
{
"abstract": " A simulation study of energy resolution, position resolution, and\n$\\pi^0$-$\\gamma$ separation using multivariate methods of a sampling\ncalorimeter is presented. As a realistic example, the geometry of the\ncalorimeter is taken from the design geometry of the Shashlik calorimeter which\nwas considered as a candidate for CMS endcap for the phase II of LHC running.\nThe methods proposed in this paper can be easily adapted to various geometrical\nlayouts of a sampling calorimeter. Energy resolution is studied for different\nlayouts and different absorber-scintillator combinations of the Shashlik\ndetector. It is shown that a boosted decision tree using fine grained\ninformation of the calorimeter can perform three times better than a cut-based\nmethod for separation of $\\pi^0$ from $\\gamma$ over a large energy range of 20\nGeV-200 GeV.\n",
"title": "Simulation study of energy resolution, position resolution and $π^0$-$γ$ separation of a sampling electromagnetic calorimeter at high energies"
}
| null | null | null | null | true | null |
5492
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we study the Multi-Round Influence Maximization (MRIM)\nproblem, where influence propagates in multiple rounds independently from\npossibly different seed sets, and the goal is to select seeds for each round to\nmaximize the expected number of nodes that are activated in at least one round.\nMRIM problem models the viral marketing scenarios in which advertisers conduct\nmultiple rounds of viral marketing to promote one product. We consider two\ndifferent settings: 1) the non-adaptive MRIM, where the advertiser needs to\ndetermine the seed sets for all rounds at the very beginning, and 2) the\nadaptive MRIM, where the advertiser can select seed sets adaptively based on\nthe propagation results in the previous rounds. For the non-adaptive setting,\nwe design two algorithms that exhibit an interesting tradeoff between\nefficiency and effectiveness: a cross-round greedy algorithm that selects seeds\nat a global level and achieves $1/2 - \\varepsilon$ approximation ratio, and a\nwithin-round greedy algorithm that selects seeds round by round and achieves\n$1-e^{-(1-1/e)}-\\varepsilon \\approx 0.46 - \\varepsilon$ approximation ratio but\nsaves running time by a factor related to the number of rounds. For the\nadaptive setting, we design an adaptive algorithm that guarantees\n$1-e^{-(1-1/e)}-\\varepsilon$ approximation to the adaptive optimal solution. In\nall cases, we further design scalable algorithms based on the reverse influence\nsampling approach and achieve near-linear running time. We conduct experiments\non several real-world networks and demonstrate that our algorithms are\neffective for the MRIM task.\n",
"title": "Multi-Round Influence Maximization (Extended Version)"
}
| null | null | null | null | true | null |
5493
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks achieve stellar generalisation on a variety of problems,\ndespite often being large enough to easily fit all their training data. Here we\nstudy the generalisation dynamics of two-layer neural networks in a\nteacher-student setup, where one network, the student, is trained using\nstochastic gradient descent (SGD) on data generated by another network, called\nthe teacher. We show how for this problem, the dynamics of SGD are captured by\na set of differential equations. In particular, we demonstrate analytically\nthat the generalisation error of the student increases linearly with the\nnetwork size, with other relevant parameters held constant. Our results\nindicate that achieving good generalisation in neural networks depends on the\ninterplay of at least the algorithm, its learning rate, the model architecture,\nand the data set.\n",
"title": "Generalisation dynamics of online learning in over-parameterised neural networks"
}
| null | null | null | null | true | null |
5494
| null |
Default
| null | null |
null |
{
"abstract": " This work is motivated by the problem of testing for differences in the mean\nelectricity prices before and after Germany's abrupt nuclear phaseout after the\nnuclear disaster in Fukushima Daiichi, Japan, in mid-March 2011. Taking into\naccount the nature of the data and the auction design of the electricity\nmarket, we approach this problem using a Local Linear Kernel (LLK) estimator\nfor the nonparametric mean function of sparse covariate-adjusted functional\ndata. We build upon recent theoretical work on the LLK estimator and propose a\ntwo-sample test statistics using a finite sample correction to avoid size\ndistortions. Our nonparametric test results on the price differences point to a\nSimpson's paradox explaining an unexpected result recently reported in the\nliterature.\n",
"title": "Nonparametric Testing for Differences in Electricity Prices: The Case of the Fukushima Nuclear Accident"
}
| null | null | null | null | true | null |
5495
| null |
Default
| null | null |
null |
{
"abstract": " The synchronized magnetization dynamics in ferromagnets on a nonmagnetic\nheavy metal caused by the spin Hall effect is investigated theoretically. The\ndirect and inverse spin Hall effects near the ferromagnetic/nonmagnetic\ninterface generate longitudinal and transverse electric currents. The\nphenomenon is known as the spin Hall magnetoresistance effect, whose magnitude\ndepends on the magnetization direction in the ferromagnet due to the spin\ntransfer effect. When another ferromagnet is placed onto the same nonmagnet,\nthese currents are again converted to the spin current by the spin Hall effect\nand excite the spin torque to this additional ferromagnet, resulting in the\nexcitation of the coupled motions of the magnetizations. The in-phase or\nantiphase synchronization of the magnetization oscillations, depending on the\nvalue of the Gilbert damping constant and the field-like torque strength, is\nfound in the transverse geometry by solving the Landau-Lifshitz-Gilbert\nequation numerically. On the other hand, in addition to these synchronizations,\nthe synchronization having a phase difference of a quarter of a period is also\nfound in the longitudinal geometry. The analytical theory clarifying the\nrelation among the current, frequency, and phase difference is also developed,\nwhere it is shown that the phase differences observed in the numerical\nsimulations correspond to that giving the fixed points of the energy supplied\nby the coupling torque.\n",
"title": "Dynamic coupling of ferromagnets via spin Hall magnetoresistance"
}
| null | null | null | null | true | null |
5496
| null |
Default
| null | null |
null |
{
"abstract": " The permutation test is known as the exact test procedure in statistics.\nHowever, often it is not exact in practice and only an approximate method since\nonly a small fraction of every possible permutation is generated. Even for a\nsmall sample size, it often requires to generate tens of thousands\npermutations, which can be a serious computational bottleneck. In this paper,\nwe propose a novel combinatorial inference procedure that enumerates all\npossible permutations combinatorially without any resampling. The proposed\nmethod is validated against the standard permutation test in simulation studies\nwith the ground truth. The method is further applied in twin DTI study in\ndetermining the genetic contribution of the minimum spanning tree of the\nstructural brain connectivity.\n",
"title": "Exact Combinatorial Inference for Brain Images"
}
| null | null | null | null | true | null |
5497
| null |
Default
| null | null |
null |
{
"abstract": " Avalanche photodiodes (APDs) are a practical option for space-based quantum\ncommunications requiring single-photon detection. However, radiation damage to\nAPDs significantly increases their dark count rates and reduces their useful\nlifetimes in orbit. We show that high-power laser annealing of irradiated APDs\nof three different models (Excelitas C30902SH, Excelitas SLiK, and Laser\nComponents SAP500S2) heals the radiation damage and substantially restores low\ndark count rates. Of nine samples, the maximum dark count rate reduction factor\nvaries between 5.3 and 758 when operating at minus 80 degrees Celsius. The\nillumination power to reach these reduction factors ranges from 0.8 to 1.6 W.\nOther photon detection characteristics, such as photon detection efficiency,\ntiming jitter, and afterpulsing probability, remain mostly unaffected. These\nresults herald a promising method to extend the lifetime of a quantum satellite\nequipped with APDs.\n",
"title": "Laser annealing heals radiation damage in avalanche photodiodes"
}
| null | null | null | null | true | null |
5498
| null |
Default
| null | null |
null |
{
"abstract": " We are interested in the development of surrogate models for uncertainty\nquantification and propagation in problems governed by stochastic PDEs using a\ndeep convolutional encoder-decoder network in a similar fashion to approaches\nconsidered in deep learning for image-to-image regression tasks. Since normal\nneural networks are data intensive and cannot provide predictive uncertainty,\nwe propose a Bayesian approach to convolutional neural nets. A recently\nintroduced variational gradient descent algorithm based on Stein's method is\nscaled to deep convolutional networks to perform approximate Bayesian inference\non millions of uncertain network parameters. This approach achieves state of\nthe art performance in terms of predictive accuracy and uncertainty\nquantification in comparison to other approaches in Bayesian neural networks as\nwell as techniques that include Gaussian processes and ensemble methods even\nwhen the training data size is relatively small. To evaluate the performance of\nthis approach, we consider standard uncertainty quantification benchmark\nproblems including flow in heterogeneous media defined in terms of limited\ndata-driven permeability realizations. The performance of the surrogate model\ndeveloped is very good even though there is no underlying structure shared\nbetween the input (permeability) and output (flow/pressure) fields as is often\nthe case in the image-to-image regression models used in computer vision\nproblems. Studies are performed with an underlying stochastic input\ndimensionality up to $4,225$ where most other uncertainty quantification\nmethods fail. Uncertainty propagation tasks are considered and the predictive\noutput Bayesian statistics are compared to those obtained with Monte Carlo\nestimates.\n",
"title": "Bayesian Deep Convolutional Encoder-Decoder Networks for Surrogate Modeling and Uncertainty Quantification"
}
| null | null | null | null | true | null |
5499
| null |
Default
| null | null |
null |
{
"abstract": " In [MMO] (arXiv:1704.03413), we reworked and generalized equivariant infinite\nloop space theory, which shows how to construct $G$-spectra from $G$-spaces\nwith suitable structure. In this paper, we construct a new variant of the\nequivariant Segal machine that starts from the category $\\scr{F}$ of finite\nsets rather than from the category ${\\scr{F}}_G$ of finite $G$-sets and which\nis equivalent to the machine studied by Shimakawa and in [MMO]. In contrast to\nthe machine in [MMO], the new machine gives a lax symmetric monoidal functor\nfrom the symmetric monoidal category of $\\scr{F}$-$G$-spaces to the symmetric\nmonoidal category of orthogonal $G$-spectra. We relate it multiplicatively to\nsuspension $G$-spectra and to Eilenberg-MacLane $G$-spectra via lax symmetric\nmonoidal functors from based $G$-spaces and from abelian groups to\n$\\scr{F}$-$G$-spaces. Even non-equivariantly, this gives an appealing new\nvariant of the Segal machine. This new variant makes the equivariant\ngeneralization of the theory essentially formal, hence is likely to be\napplicable in other contexts.\n",
"title": "A symmetric monoidal and equivariant Segal infinite loop space machine"
}
| null | null |
[
"Mathematics"
] | null | true | null |
5500
| null |
Validated
| null | null |