text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " Machine learning approaches hold great potential for the automated detection\nof lung nodules in chest radiographs, but training the algorithms requires vary\nlarge amounts of manually annotated images, which are difficult to obtain. Weak\nlabels indicating whether a radiograph is likely to contain pulmonary nodules\nare typically easier to obtain at scale by parsing historical free-text\nradiological reports associated to the radiographs. Using a repositotory of\nover 700,000 chest radiographs, in this study we demonstrate that promising\nnodule detection performance can be achieved using weak labels through\nconvolutional neural networks for radiograph classification. We propose two\nnetwork architectures for the classification of images likely to contain\npulmonary nodules using both weak labels and manually-delineated bounding\nboxes, when these are available. Annotated nodules are used at training time to\ndeliver a visual attention mechanism informing the model about its localisation\nperformance. The first architecture extracts saliency maps from high-level\nconvolutional layers and compares the estimated position of a nodule against\nthe ground truth, when this is available. A corresponding localisation error is\nthen back-propagated along with the softmax classification error. The second\napproach consists of a recurrent attention model that learns to observe a short\nsequence of smaller image portions through reinforcement learning. When a\nnodule annotation is available at training time, the reward function is\nmodified accordingly so that exploring portions of the radiographs away from a\nnodule incurs a larger penalty. Our empirical results demonstrate the potential\nadvantages of these architectures in comparison to competing methodologies.\n",
"title": "Learning to detect chest radiographs containing lung nodules using visual attention networks"
}
| null | null | null | null | true | null |
6901
| null |
Default
| null | null |
null |
{
"abstract": " The Sinc approximation has shown high efficiency for numerical methods in\nmany fields. Conformal maps play an important role in the success, i.e.,\nappropriate conformal map must be employed to elicit high performance of the\nSinc approximation. Appropriate conformal maps have been proposed for typical\ncases; however, such maps may not be optimal. Thus, the performance of the Sinc\napproximation may be improved by using another conformal map rather than an\nexisting map. In this paper, we propose a new conformal map for the case where\nfunctions are defined over the semi-infinite interval and decay exponentially.\nThen, we demonstrate in both theoretical and numerical ways that the\nconvergence rate is improved by replacing the existing conformal map with the\nproposed map.\n",
"title": "New conformal map for the Sinc approximation for exponentially decaying functions over the semi-infinite interval"
}
| null | null | null | null | true | null |
6902
| null |
Default
| null | null |
null |
{
"abstract": " In recent years the role of epidemic models in informing public health\npolicies has progressively grown. Models have become increasingly realistic and\nmore complex, requiring the use of multiple data sources to estimate all\nquantities of interest. This review summarises the different types of\nstochastic epidemic models that use evidence synthesis and highlights current\nchallenges.\n",
"title": "Evidence synthesis for stochastic epidemic models"
}
| null | null |
[
"Statistics"
] | null | true | null |
6903
| null |
Validated
| null | null |
null |
{
"abstract": " We introduce a new model describing multiple resonances in Kerr optical\ncavities. It perfectly agrees quantitatively with the Ikeda map and predicts\ncomplex phenomena such as super cavity solitons and coexistence of multiple\nnonlinear states.\n",
"title": "The multi-resonant Lugiato-Lefever model"
}
| null | null |
[
"Physics"
] | null | true | null |
6904
| null |
Validated
| null | null |
null |
{
"abstract": " Bayesian shrinkage methods have generated a lot of recent interest as tools\nfor high-dimensional regression and model selection. These methods naturally\nfacilitate tractable uncertainty quantification and incorporation of prior\ninformation. A common feature of these models, including the Bayesian lasso,\nglobal-local shrinkage priors, and spike-and-slab priors is that the\ncorresponding priors on the regression coefficients can be expressed as scale\nmixture of normals. While the three-step Gibbs sampler used to sample from the\noften intractable associated posterior density has been shown to be\ngeometrically ergodic for several of these models (Khare and Hobert, 2013; Pal\nand Khare, 2014), it has been demonstrated recently that convergence of this\nsampler can still be quite slow in modern high-dimensional settings despite\nthis apparent theoretical safeguard. We propose a new method to draw from the\nsame posterior via a tractable two-step blocked Gibbs sampler. We demonstrate\nthat our proposed two-step blocked sampler exhibits vastly superior convergence\nbehavior compared to the original three- step sampler in high-dimensional\nregimes on both real and simulated data. We also provide a detailed theoretical\nunderpinning to the new method in the context of the Bayesian lasso. First, we\nderive explicit upper bounds for the (geometric) rate of convergence.\nFurthermore, we demonstrate theoretically that while the original Bayesian\nlasso chain is not Hilbert-Schmidt, the proposed chain is trace class (and\nhence Hilbert-Schmidt). The trace class property has useful theoretical and\npractical implications. It implies that the corresponding Markov operator is\ncompact, and its eigenvalues are summable. It also facilitates a rigorous\ncomparison of the two-step blocked chain with \"sandwich\" algorithms which aim\nto improve performance of the two-step chain by inserting an inexpensive extra\nstep.\n",
"title": "Scalable Bayesian shrinkage and uncertainty quantification in high-dimensional regression"
}
| null | null | null | null | true | null |
6905
| null |
Default
| null | null |
null |
{
"abstract": " The object of the present paper is to study certain properties and\ncharacteristics of the operator $Q_{p,\\beta}^{\\alpha}$defined on p-valent\nanalytic function by using technique of differential subordination.We also\nobtained result involving majorization problems by applying the operator to\np-valent analytic function.Relevant connection of the the result are presented\nhere with those obtained by earlier worker are pointed out.\n",
"title": "Inclusion and Majorization Properties of Certain Subclasses of Multivalent Analytic Functions Involving a Linear Operator"
}
| null | null | null | null | true | null |
6906
| null |
Default
| null | null |
null |
{
"abstract": " We investigate fundamental model-theoretic dividing lines (the order\nproperty, the independence property, the strict order property, and the tree\nproperty 2) in the context of least fixed-point (LFP) logic over families of\nfinite structures. We show that, unlike the first-order (FO) case, the order\nproperty and the independence property are equivalent, but all of the other\nnatural implications are strict. We identify the LFP strict order property with\nproficiency, a well-studied notion in finite model theory.\nGregory McColm conjectured that FO and LFP definability coincide over a\nfamily C of finite structures exactly when C is non-proficient. McColm's\nconjecture is false in general, but as an application of our results, we show\nthat it holds under standard FO tameness assumptions adapted to families of\nfinite structures.\n",
"title": "Tameness in least fixed-point logic and McColm's conjecture"
}
| null | null | null | null | true | null |
6907
| null |
Default
| null | null |
null |
{
"abstract": " The paper presents an analysis of Polish Fireball Network (PFN) observations\nof enhanced activity of the Southern Taurid meteor shower in 2005 and 2015. In\n2005, between October 20 and November 10, seven stations of PFN determined 107\naccurate orbits with 37 of them belonging to the Southern Taurid shower. In the\nsame period of 2015, 25 stations of PFN recorded 719 accurate orbits with 215\norbits of the Southern Taurids. Both maxima were rich in fireballs which\naccounted to 17% of all observed Taurids. The whole sample of Taurid fireballs\nis quite uniform in the sense of starting and terminal heights of the\ntrajectory. On the other hand a clear decreasing trend in geocentric velocity\nwith increasing solar longitude was observed.\nOrbital parameters of observed Southern Taurids were compared to orbital\nelements of Near Earth Objects (NEO) from the NEODYS-2 database. Using the\nDrummond criterion $D'$ with threshold as low as 0.06, we found over 100\nfireballs strikingly similar to the orbit of asteroid 2015 TX24. Several dozens\nof Southern Taurids have orbits similar to three other asteroids, namely: 2005\nTF50, 2005 UR and 2010 TU149. All mentioned NEOs have orbital periods very\nclose to the 7:2 resonance with Jupiter's orbit. It confirms a theory of a\n\"resonant meteoroid swarm\" within the Taurid complex that predicts that in\nspecific years, the Earth is hit by a greater number of meteoroids capable of\nproducing fireballs.\n",
"title": "Enhanced activity of the Southern Taurids in 2005 and 2015"
}
| null | null | null | null | true | null |
6908
| null |
Default
| null | null |
null |
{
"abstract": " We study the variation of Iwasawa invariants of the anticyclotomic Selmer\ngroups of congruent modular forms under the Heegner hypothesis. In particular,\nwe show that even if the Selmer groups we study may have positive coranks, the\nmu-invariant vanishes for one modular form if and only if it vanishes for the\nother, and that their lambda-invariants are related by an explicit formula.\nThis generalizes results of Greenberg-Vatsal for the cyclotomic extension, as\nwell as results of Pollack-Weston and Castella-Kim-Longo for the anticyclotomic\nextension when the Selmer groups in question are cotorsion.\n",
"title": "Comparing anticyclotomic Selmer groups of positive coranks for congruent modular forms"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6909
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we study how to predict the results of LTL model checking\nusing some machine learning algorithms. Some Kripke structures and LTL formulas\nand their model checking results are made up data set. The approaches based on\nthe Random Forest (RF), K-Nearest Neighbors (KNN), Decision tree (DT), and\nLogistic Regression (LR) are used to training and prediction. The experiment\nresults show that the average computation efficiencies of the RF, LR, DT, and\nKNN-based approaches are 2066181, 2525333, 1894000 and 294 times than that of\nthe existing approach, respectively.\n",
"title": "Predicting the Results of LTL Model Checking using Multiple Machine Learning Algorithms"
}
| null | null | null | null | true | null |
6910
| null |
Default
| null | null |
null |
{
"abstract": " Properties of two ThCr2Si2-type materials are discussed within the context of\ntheir established structural and magnetic symmetries. Both materials develop\ncollinear, G-type antiferromagnetic order above room temperature, and magnetic\nions occupy acentric sites in centrosymmetric structures. We refute a previous\nconjecture that BaMn2As2 is an example of a magnetoelectric material with\nhexadecapole order by exposing flaws in supporting arguments, principally, an\nomission of discrete symmetries enforced by the symmetry of sites used by Mn\nions and, also, improper classifications of the primary and secondary\norder-parameters. Implications for future experiments designed to improve our\nunderstanding of BaMn2P2 and BaMn2As2 magnetoelectric properties, using neutron\nand x-ray diffraction, are examined. Patterns of Bragg spots caused by\nconventional magnetic dipoles and magnetoelectric (Dirac) multipoles are\npredicted to be distinct, which raises the intriguing possibility of a unique\nand comprehensive examination of the magnetoelectric state by diffraction. A\nroto-inversion operation in Mn site symmetry is ultimately responsible for the\ndistinguishing features.\n",
"title": "Magnetoelectric properties of the layered room-temperature antiferromagnets BaMn2P2 and BaMn2As2"
}
| null | null | null | null | true | null |
6911
| null |
Default
| null | null |
null |
{
"abstract": " An inverse problem in spectroscopy is considered. The objective is to restore\nthe discrete spectrum from observed spectrum data, taking into account the\nspectrometer's line spread function. The problem is reduced to solution of a\nsystem of linear-nonlinear equations (SLNE) with respect to intensities and\nfrequencies of the discrete spectral lines. The SLNE is linear with respect to\nlines' intensities and nonlinear with respect to the lines' frequencies. The\nintegral approximation algorithm is proposed for the solution of this SLNE. The\nalgorithm combines solution of linear integral equations with solution of a\nsystem of linear algebraic equations and avoids nonlinear equations. Numerical\nexamples of the application of the technique, both to synthetic and\nexperimental spectra, demonstrate the efficacy of the proposed approach in\nenabling an effective enhancement of the spectrometer's resolution.\n",
"title": "Discrete Spectrum Reconstruction using Integral Approximation Algorithm"
}
| null | null | null | null | true | null |
6912
| null |
Default
| null | null |
null |
{
"abstract": " Variational autoencoders (VAE) are directed generative models that learn\nfactorial latent variables. As noted by Burda et al. (2015), these models\nexhibit the problem of factor over-pruning where a significant number of\nstochastic factors fail to learn anything and become inactive. This can limit\ntheir modeling power and their ability to learn diverse and meaningful latent\nrepresentations. In this paper, we evaluate several methods to address this\nproblem and propose a more effective model-based approach called the epitomic\nvariational autoencoder (eVAE). The so-called epitomes of this model are groups\nof mutually exclusive latent factors that compete to explain the data. This\napproach helps prevent inactive units since each group is pressured to explain\nthe data. We compare the approaches with qualitative and quantitative results\non MNIST and TFD datasets. Our results show that eVAE makes efficient use of\nmodel capacity and generalizes better than VAE.\n",
"title": "Tackling Over-pruning in Variational Autoencoders"
}
| null | null | null | null | true | null |
6913
| null |
Default
| null | null |
null |
{
"abstract": " We consider the statics and dynamics of a stable, mobile three-dimensional\n(3D) spatiotemporal light bullet in a cubic-quintic nonlinear medium with a\nfocusing cubic nonlinearity above a critical value and any defocusing quintic\nnonlinearity. The 3D light bullet can propagate with a constant velocity in any\ndirection. Stability of the light bullet under a small perturbation is\nestablished numerically.We consider frontal collision between two light bullets\nwith different relative velocities. At large velocities the collision is\nelastic with the bullets emerge after collision with practically no distortion.\nAt small velocities two bullets coalesce to form a bullet molecule. At a small\nrange of intermediate velocities the localized bullets could form a single\nentity which expands indefinitely leading to a destruction of the bullets after\ncollision. The present study is based on an analytic Lagrange variational\napproximation and a full numerical solution of the 3D nonlinear Schrödinger\nequation.\n",
"title": "Elastic collision and molecule formation of spatiotemporal light bullets in a cubic-quintic nonlinear medium"
}
| null | null | null | null | true | null |
6914
| null |
Default
| null | null |
null |
{
"abstract": " One key challenge in talent search is to translate complex criteria of a\nhiring position into a search query, while it is relatively easy for a searcher\nto list examples of suitable candidates for a given position. To improve search\nefficiency, we propose the next generation of talent search at LinkedIn, also\nreferred to as Search By Ideal Candidates. In this system, a searcher provides\none or several ideal candidates as the input to hire for a given position. The\nsystem then generates a query based on the ideal candidates and uses it to\nretrieve and rank results. Shifting from the traditional Query-By-Keyword to\nthis new Query-By-Example system poses a number of challenges: How to generate\na query that best describes the candidates? When moving to a completely\ndifferent paradigm, how does one leverage previous product logs to learn\nranking models and/or evaluate the new system with no existing usage logs?\nFinally, given the different nature between the two search paradigms, the\nranking features typically used for Query-By-Keyword systems might not be\noptimal for Query-By-Example. This paper describes our approach to solving\nthese challenges. We present experimental results confirming the effectiveness\nof the proposed solution, particularly on query building and search ranking\ntasks. As of writing this paper, the new system has been available to all\nLinkedIn members.\n",
"title": "From Query-By-Keyword to Query-By-Example: LinkedIn Talent Search Approach"
}
| null | null | null | null | true | null |
6915
| null |
Default
| null | null |
null |
{
"abstract": " The environmental impacts of medium to large scale buildings receive\nsubstantial attention in research, industry, and media. This paper studies the\nenergy savings potential of a commercial soccer stadium during day-to-day\noperation. Buildings of this kind are characterized by special purpose system\ninstallations like grass heating systems and by event-driven usage patterns.\nThis work presents a methodology to holistically analyze the stadiums\ncharacteristics and integrate its existing instrumentation into a\nCyber-Physical System, enabling to deploy different control strategies\nflexibly. In total, seven different strategies for controlling the studied\nstadiums grass heating system are developed and tested in operation.\nExperiments in winter season 2014/2015 validated the strategies impacts within\nthe real operational setup of the Commerzbank Arena, Frankfurt, Germany. With\n95% confidence, these experiments saved up to 66% of median daily\nweather-normalized energy consumption. Extrapolated to an average heating\nseason, this corresponds to savings of 775 MWh and 148 t of CO2 emissions. In\nwinter 2015/2016 an additional predictive nighttime heating experiment targeted\nlower temperatures, which increased the savings to up to 85%, equivalent to 1\nGWh (197 t CO2) in an average winter. Beyond achieving significant energy\nsavings, the different control strategies also met the target temperature\nlevels to the satisfaction of the stadiums operational staff. While the case\nstudy constitutes a significant part, the discussions dedicated to the\ntransferability of this work to other stadiums and other building types show\nthat the concepts and the approach are of general nature. Furthermore, this\nwork demonstrates the first successful application of Deep Belief Networks to\nregress and predict the thermal evolution of building systems.\n",
"title": "Cyber-Physical System for Energy-Efficient Stadium Operation: Methodology and Experimental Validation"
}
| null | null |
[
"Computer Science"
] | null | true | null |
6916
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we derive a Bayesian model order selection rule by using the\nexponentially embedded family method, termed Bayesian EEF. Unlike many other\nBayesian model selection methods, the Bayesian EEF can use vague proper priors\nand improper noninformative priors to be objective in the elicitation of\nparameter priors. Moreover, the penalty term of the rule is shown to be the sum\nof half of the parameter dimension and the estimated mutual information between\nparameter and observed data. This helps to reveal the EEF mechanism in\nselecting model orders and may provide new insights into the open problems of\nchoosing an optimal penalty term for model order selection and choosing a good\nprior from information theoretic viewpoints. The important example of linear\nmodel order selection is given to illustrate the algorithms and arguments.\nLastly, the Bayesian EEF that uses Jeffreys prior coincides with the EEF rule\nderived by frequentist strategies. This shows another interesting relationship\nbetween the frequentist and Bayesian philosophies for model selection.\n",
"title": "On Bayesian Exponentially Embedded Family for Model Order Selection"
}
| null | null | null | null | true | null |
6917
| null |
Default
| null | null |
null |
{
"abstract": " Sky models have been used in the past to calibrate individual low radio\nfrequency telescopes. Here we generalize this approach from a single antenna to\na two element interferometer and formulate the problem in a manner to allow us\nto estimate the flux density of the Sun using the normalized cross-correlations\n(visibilities) measured on a low resolution interferometric baseline. For wide\nfield-of-view instruments, typically the case at low radio frequencies, this\napproach can provide robust absolute solar flux calibration for well\ncharacterized antennas and receiver systems. It can provide a reliable and\ncomputationally lean method for extracting parameters of physical interest\nusing a small fraction of the voluminous interferometric data, which can be\nprohibitingly compute intensive to calibrate and image using conventional\napproaches. We demonstrate this technique by applying it to data from the\nMurchison Widefield Array and assess its reliability.\n",
"title": "Estimating solar flux density at low radio frequencies using a sky brightness model"
}
| null | null |
[
"Physics"
] | null | true | null |
6918
| null |
Validated
| null | null |
null |
{
"abstract": " Recurrent neural networks (RNNs) serve as a fundamental building block for\nmany sequence tasks across natural language processing. Recent research has\nfocused on recurrent dropout techniques or custom RNN cells in order to improve\nperformance. Both of these can require substantial modifications to the machine\nlearning model or to the underlying RNN configurations. We revisit traditional\nregularization techniques, specifically L2 regularization on RNN activations\nand slowness regularization over successive hidden states, to improve the\nperformance of RNNs on the task of language modeling. Both of these techniques\nrequire minimal modification to existing RNN architectures and result in\nperformance improvements comparable or superior to more complicated\nregularization techniques or custom cell architectures. These regularization\ntechniques can be used without any modification on optimized LSTM\nimplementations such as the NVIDIA cuDNN LSTM.\n",
"title": "Revisiting Activation Regularization for Language RNNs"
}
| null | null |
[
"Computer Science"
] | null | true | null |
6919
| null |
Validated
| null | null |
null |
{
"abstract": " Biological systems are typically highly open, non-equilibrium systems that\nare very challenging to understand from a statistical mechanics perspective.\nWhile statistical treatments of evolutionary biological systems have a long and\nrich history, examination of the time-dependent non-equilibrium dynamics has\nbeen less studied. In this paper we first derive a generalized master equation\nin the genotype space for diploid organisms incorporating the processes of\nselection, mutation, recombination, and reproduction. The master equation is\ndefined in terms of continuous time and can handle an arbitrary number of gene\nloci and alleles, and can be defined in terms of an absolute population or\nprobabilities. We examine and analytically solve several prototypical cases\nwhich illustrate the interplay of the various processes and discuss the\ntimescales of their evolution. The entropy production during the evolution\ntowards steady state is calculated and we find that it agrees with predictions\nfrom non-equilibrium statistical mechanics where it is large when the\npopulation distribution evolves towards a more viable genotype. The stability\nof the non-equilibrium steady state is confirmed using the Glansdorff-Prigogine\ncriterion.\n",
"title": "Non-equilibrium time dynamics of genetic evolution"
}
| null | null |
[
"Quantitative Biology"
] | null | true | null |
6920
| null |
Validated
| null | null |
null |
{
"abstract": " In this work, we outline the entropy viscosity method and discuss how the\nchoice of scaling influences the size of viscosity for a simple shock problem.\nWe present examples to illustrate the performance of the entropy viscosity\nmethod under two distinct scalings.\n",
"title": "On the scaling of entropy viscosity in high order methods"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6921
| null |
Validated
| null | null |
null |
{
"abstract": " Convolutional neural networks (CNNs) are similar to \"ordinary\" neural\nnetworks in the sense that they are made up of hidden layers consisting of\nneurons with \"learnable\" parameters. These neurons receive inputs, performs a\ndot product, and then follows it with a non-linearity. The whole network\nexpresses the mapping between raw image pixels and their class scores.\nConventionally, the Softmax function is the classifier used at the last layer\nof this network. However, there have been studies (Alalshekmubarak and Smith,\n2013; Agarap, 2017; Tang, 2013) conducted to challenge this norm. The cited\nstudies introduce the usage of linear support vector machine (SVM) in an\nartificial neural network architecture. This project is yet another take on the\nsubject, and is inspired by (Tang, 2013). Empirical data has shown that the\nCNN-SVM model was able to achieve a test accuracy of ~99.04% using the MNIST\ndataset (LeCun, Cortes, and Burges, 2010). On the other hand, the CNN-Softmax\nwas able to achieve a test accuracy of ~99.23% using the same dataset. Both\nmodels were also tested on the recently-published Fashion-MNIST dataset (Xiao,\nRasul, and Vollgraf, 2017), which is suppose to be a more difficult image\nclassification dataset than MNIST (Zalandoresearch, 2017). This proved to be\nthe case as CNN-SVM reached a test accuracy of ~90.72%, while the CNN-Softmax\nreached a test accuracy of ~91.86%. The said results may be improved if data\npreprocessing techniques were employed on the datasets, and if the base CNN\nmodel was a relatively more sophisticated than the one used in this study.\n",
"title": "An Architecture Combining Convolutional Neural Network (CNN) and Support Vector Machine (SVM) for Image Classification"
}
| null | null | null | null | true | null |
6922
| null |
Default
| null | null |
null |
{
"abstract": " Ultraviolet self-interaction energies in field theory sometimes contain\nmeaningful physical quantities. The self-energies in such as classical\nelectrodynamics are usually subtracted from the rest mass. For the consistent\ntreatment of energies as sources of curvature in the Einstein field equations,\nthis study includes these subtracted self-energies into vacuum energy expressed\nby the constant Lambda (used in such as Lambda-CDM). In this study, the\nself-energies in electrodynamics and macroscopic classical Einstein field\nequations are examined, using the formalisms with the ultraviolet cutoff\nscheme. One of the cutoff formalisms is the field theory in terms of the\nstep-function-type basis functions, developed by the present authors. The other\nis a continuum theory of a fundamental particle with the same cutoff length.\nBased on the effectiveness of the continuum theory with the cutoff length shown\nin the examination, the dominant self-energy is the quadratic term of the Higgs\nfield at a quantum level (classical self-energies are reduced to logarithmic\nforms by quantum corrections). The cutoff length is then determined to\nreproduce today's tiny value of Lambda for vacuum energy. Additionally, a field\nwith nonperiodic vanishing boundary conditions is treated, showing that the\nfield has no zero-point energy.\n",
"title": "Derivation of the cutoff length from the quantum quadratic enhancement of a mass in vacuum energy constant Lambda"
}
| null | null | null | null | true | null |
6923
| null |
Default
| null | null |
null |
{
"abstract": " Given a graphical model, one essential problem is MAP inference, that is,\nfinding the most likely configuration of states according to the model.\nAlthough this problem is NP-hard, large instances can be solved in practice. A\nmajor open question is to explain why this is true. We give a natural condition\nunder which we can provably perform MAP inference in polynomial time. We\nrequire that the number of fractional vertices in the LP relaxation exceeding\nthe optimal solution is bounded by a polynomial in the problem size. This\nresolves an open question by Dimakis, Gohari, and Wainwright. In contrast, for\ngeneral LP relaxations of integer programs, known techniques can only handle a\nconstant number of fractional vertices whose value exceeds the optimal\nsolution. We experimentally verify this condition and demonstrate how efficient\nvarious integer programming methods are at removing fractional solutions.\n",
"title": "Exact MAP Inference by Avoiding Fractional Vertices"
}
| null | null | null | null | true | null |
6924
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we investigated the feasibility of applying deep learning\ntechniques to solve Poisson's equation. A deep convolutional neural network is\nset up to predict the distribution of electric potential in 2D or 3D cases.\nWith proper training data generated from a finite difference solver, the strong\napproximation capability of the deep convolutional neural network allows it to\nmake correct prediction given information of the source and distribution of\npermittivity. With applications of L2 regularization, numerical experiments\nshow that the predication error of 2D cases can reach below 1.5\\% and the\npredication of 3D cases can reach below 3\\%, with a significant reduction in\nCPU time compared with the traditional solver based on finite difference\nmethods.\n",
"title": "Study on a Poisson's Equation Solver Based On Deep Learning Technique"
}
| null | null | null | null | true | null |
6925
| null |
Default
| null | null |
null |
{
"abstract": " The 4d-transition-metals carbides (ZrC, NbC) and nitrides (ZrN, NbN) in the\nrocksalt structure, as well as their ternary alloys, have been recently studied\nby means of a first-principles full potential linearized augmented plane waves\nmethod within the local density approximation. These materials are important\nbecause of their interesting mechanical and physical properties, which make\nthem suitable for many technological applications. Here, by using a simple\ntheoretical model, we estimate the bulk moduli of their ternary alloys\nZr$_x$Nb$_{1-x}$C and Zr$_x$Nb$_{1-x}$N in terms of the bulk moduli of the end\nmembers alone. The results are comparable to those deduced from the\nfirst-principles calculations.\n",
"title": "On the compressibility of the transition-metal carbides and nitrides alloys Zr_xNb_{1-x}C and Zr_xNb_{1-x}N"
}
| null | null |
[
"Physics"
] | null | true | null |
6926
| null |
Validated
| null | null |
null |
{
"abstract": " This work aimed, to determine the characteristics of activity series from\nfractal geometry concepts application, in addition to evaluate the possibility\nof identifying individuals with fibromyalgia. Activity level data were\ncollected from 27 healthy subjects and 27 fibromyalgia patients, with the use\nof clock-like devices equipped with accelerometers, for about four weeks, all\nday long. The activity series were evaluated through fractal and multifractal\nmethods. Hurst exponent analysis exhibited values according to other studies\n($H>0.5$) for both groups ($H=0.98\\pm0.04$ for healthy subjects and\n$H=0.97\\pm0.03$ for fibromyalgia patients), however, it is not possible to\ndistinguish between the two groups by such analysis. Activity time series also\nexhibited a multifractal pattern. A paired analysis of the spectra indices for\nthe sleep and awake states revealed differences between healthy subjects and\nfibromyalgia patients. The individuals feature differences between awake and\nsleep states, having statistically significant differences for $\\alpha_{q-} -\n\\alpha_{0}$ in healthy subjects ($p = 0.014$) and $D_{0}$ for patients with\nfibromyalgia ($p = 0.013$). The approach has proven to be an option on the\ncharacterisation of such kind of signals and was able to differ between both\nhealthy and fibromyalgia groups. This outcome suggests changes in the\nphysiologic mechanisms of movement control.\n",
"title": "On multifractals: a non-linear study of actigraphy data"
}
| null | null | null | null | true | null |
6927
| null |
Default
| null | null |
null |
{
"abstract": " Series expansions of unknown fields $\\Phi=\\sum\\varphi_n Z_n$ in elongated\nwaveguides are commonly used in acoustics, optics, geophysics, water waves and\nother applications, in the context of coupled-mode theories (CMTs). The\ntransverse functions $Z_n$ are determined by solving local Sturm-Liouville\nproblems (reference waveguides). In most cases, the boundary conditions\nassigned to $Z_n$ cannot be compatible with the physical boundary conditions of\n$\\Phi$, leading to slowly convergent series, and rendering CMTs mild-slope\napproximations. In the present paper, the heuristic approach introduced in\n(Athanassoulis & Belibassakis 1999, J. Fluid Mech. 389, 275-301) is generalized\nand justified. It is proved that an appropriately enhanced series expansion\nbecomes an exact, rapidly-convergent representation of the field $\\Phi$, valid\nfor any smooth, nonplanar boundaries and any smooth enough $\\Phi$. This series\nexpansion can be differentiated termwise everywhere in the domain, including\nthe boundaries, implementing an exact semi-separation of variables for\nnon-separable domains. The efficiency of the method is illustrated by solving a\nboundary value problem for the Laplace equation, and computing the\ncorresponding Dirichlet-to-Neumann operator, involved in Hamiltonian equations\nfor nonlinear water waves. The present method provides accurate results with\nonly a few modes for quite general domains. Extensions to general waveguides\nare also discussed.\n",
"title": "Exact semi-separation of variables in waveguides with nonplanar boundaries"
}
| null | null | null | null | true | null |
6928
| null |
Default
| null | null |
null |
{
"abstract": " We prove a universal limit theorem for the halting time, or iteration count,\nof the power/inverse power methods and the QR eigenvalue algorithm.\nSpecifically, we analyze the required number of iterations to compute extreme\neigenvalues of random, positive-definite sample covariance matrices to within a\nprescribed tolerance. The universality theorem provides a complexity estimate\nfor the algorithms which, in this random setting, holds with high probability.\nThe method of proof relies on recent results on the statistics of the\neigenvalues and eigenvectors of random sample covariance matrices (i.e.,\ndelocalization, rigidity and edge universality).\n",
"title": "Universality for eigenvalue algorithms on sample covariance matrices"
}
| null | null | null | null | true | null |
6929
| null |
Default
| null | null |
null |
{
"abstract": " We demonstrate that in residual neural networks (ResNets) dynamical isometry\nis achievable irrespectively of the activation function used. We do that by\nderiving, with the help of Free Probability and Random Matrix Theories, a\nuniversal formula for the spectral density of the input-output Jacobian at\ninitialization, in the large network width and depth limit. The resulting\nsingular value spectrum depends on a single parameter, which we calculate for a\nvariety of popular activation functions, by analyzing the signal propagation in\nthe artificial neural network. We corroborate our results with numerical\nsimulations of both random matrices and ResNets applied to the CIFAR-10\nclassification problem. Moreover, we study the consequence of this universal\nbehavior for the initial and late phases of the learning processes. We conclude\nby drawing attention to the simple fact, that initialization acts as a\nconfounding factor between the choice of activation function and the rate of\nlearning. We propose that in ResNets this can be resolved based on our results,\nby ensuring the same level of dynamical isometry at initialization.\n",
"title": "Dynamical Isometry is Achieved in Residual Networks in a Universal Way for any Activation Function"
}
| null | null | null | null | true | null |
6930
| null |
Default
| null | null |
null |
{
"abstract": " Multi-label classification is an important learning problem with many\napplications. In this work, we propose a principled similarity-based approach\nfor multi-label learning called SML. We also introduce a similarity-based\napproach for predicting the label set size. The experimental results\ndemonstrate the effectiveness of SML for multi-label classification where it is\nshown to compare favorably with a wide variety of existing algorithms across a\nrange of evaluation criterion.\n",
"title": "Similarity-based Multi-label Learning"
}
| null | null | null | null | true | null |
6931
| null |
Default
| null | null |
null |
{
"abstract": " We are concerned about burst synchronization (BS), related to neural\ninformation processes in health and disease, in the Barabási-Albert\nscale-free network (SFN) composed of inhibitory bursting Hindmarsh-Rose\nneurons. This inhibitory neuronal population has adaptive dynamic synaptic\nstrengths governed by the inhibitory spike-timing-dependent plasticity (iSTDP).\nIn previous works without considering iSTDP, BS was found to appear in a range\nof noise intensities for fixed synaptic inhibition strengths. In contrast, in\nour present work, we take into consideration iSTDP and investigate its effect\non BS by varying the noise intensity. Our new main result is to find occurrence\nof a Matthew effect in inhibitory synaptic plasticity: good BS gets better via\nLTD, while bad BS get worse via LTP. This kind of Matthew effect in inhibitory\nsynaptic plasticity is in contrast to that in excitatory synaptic plasticity\nwhere good (bad) synchronization gets better (worse) via LTP (LTD). We note\nthat, due to inhibition, the roles of LTD and LTP in inhibitory synaptic\nplasticity are reversed in comparison with those in excitatory synaptic\nplasticity. Moreover, emergences of LTD and LTP of synaptic inhibition\nstrengths are intensively investigated via a microscopic method based on the\ndistributions of time delays between the pre- and the post-synaptic burst onset\ntimes. Finally, in the presence of iSTDP we investigate the effects of network\narchitecture on BS by varying the symmetric attachment degree $l^*$ and the\nasymmetry parameter $\\Delta l$ in the SFN.\n",
"title": "Burst Synchronization in A Scale-Free Neuronal Network with Inhibitory Spike-Timing-Dependent Plasticity"
}
| null | null | null | null | true | null |
6932
| null |
Default
| null | null |
null |
{
"abstract": " The collapse of a collisionless self-gravitating system, with the fast\nachievement of a quasi-stationary state, is driven by violent relaxation, with\na typical particle interacting with the time-changing collective potential. It\nis traditionally assumed that this evolution is governed by the Vlasov-Poisson\nequation, in which case entropy must be conserved. We run N-body simulations of\nisolated self-gravitating systems, using three simulation codes: NBODY-6\n(direct summation without softening), NBODY-2 (direct summation with softening)\nand GADGET-2 (tree code with softening), for different numbers of particles and\ninitial conditions. At each snapshot, we estimate the Shannon entropy of the\ndistribution function with three different techniques: Kernel, Nearest Neighbor\nand EnBiD. For all simulation codes and estimators, the entropy evolution\nconverges to the same limit as N increases. During violent relaxation, the\nentropy has a fast increase followed by damping oscillations, indicating that\nviolent relaxation must be described by a kinetic equation other than the\nVlasov-Poisson, even for N as large as that of astronomical structures. This\nindicates that violent relaxation cannot be described by a time-reversible\nequation, shedding some light on the so-called \"fundamental paradox of stellar\ndynamics\". The long-term evolution is well described by the orbit-averaged\nFokker-Planck model, with Coulomb logarithm values in the expected range 10-12.\nBy means of NBODY-2, we also study the dependence of the 2-body relaxation\ntime-scale on the softening length. The approach presented in the current work\ncan potentially provide a general method for testing any kinetic equation\nintended to describe the macroscopic evolution of N-body systems.\n",
"title": "The Arrow of Time in the collapse of collisionless self-gravitating systems: non-validity of the Vlasov-Poisson equation during violent relaxation"
}
| null | null |
[
"Physics"
] | null | true | null |
6933
| null |
Validated
| null | null |
null |
{
"abstract": " The possibility of realizing non-Abelian excitations (non-Abelions) in\ntwo-dimensional (2D) Abelian states of matter has generated a lot of interest\nrecently. A well-known example of such non-Abelions are parafermion zeros modes\n(PFZMs) which can be realized at the endpoints of the so called genons in\nfractional quantum Hall (FQH) states or fractional Chern insulators (FCIs). In\nthis letter, we discuss some known signatures of PFZMs and also introduce some\nnovel ones. In particular, we show that the topological entanglement entropy\n(TEE) shifts by a quantized value after crossing PFZMs. Utilizing those\nsignatures, we present the first large scale numerical study of PFZMs and their\nstability against perturbations in both FQH states and FCIs within the\ndensity-Matrix-Renormalization-Group (DMRG) framework. Our results can help\nbuild a closer connection with future experiments on FQH states with genons.\n",
"title": "Numerical Observation of Parafermion Zero Modes and their Stability in 2D Topological States"
}
| null | null | null | null | true | null |
6934
| null |
Default
| null | null |
null |
{
"abstract": " We approach the development of models and control strategies of\nsusceptible-infected-susceptible (SIS) epidemic processes from the perspective\nof marked temporal point processes and stochastic optimal control of stochastic\ndifferential equations (SDEs) with jumps. In contrast to previous work, this\nnovel perspective is particularly well-suited to make use of fine-grained data\nabout disease outbreaks and lets us overcome the shortcomings of current\ncontrol strategies. Our control strategy resorts to treatment intensities to\ndetermine who to treat and when to do so to minimize the amount of infected\nindividuals over time. Preliminary experiments with synthetic data show that\nour control strategy consistently outperforms several alternatives. Looking\ninto the future, we believe our methodology provides a promising step towards\nthe development of practical data-driven control strategies of epidemic\nprocesses.\n",
"title": "Stochastic Optimal Control of Epidemic Processes in Networks"
}
| null | null | null | null | true | null |
6935
| null |
Default
| null | null |
null |
{
"abstract": " The moving sofa problem, posed by L. Moser in 1966, asks for the planar shape\nof maximal area that can move around a right-angled corner in a hallway of unit\nwidth. It is known that a maximal area shape exists, and that its area is at\nleast 2.2195... - the area of an explicit construction found by Gerver in 1992\n- and at most $2\\sqrt{2}=2.82...$, with the lower bound being conjectured as\nthe true value. We prove a new and improved upper bound of 2.37. The method\ninvolves a computer-assisted proof scheme that can be used to rigorously derive\nfurther improved upper bounds that converge to the correct value.\n",
"title": "Improved upper bounds in the moving sofa problem"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6936
| null |
Validated
| null | null |
null |
{
"abstract": " IntroductionThe free and cued selective reminding test is used to identify\nmemory deficits in mild cognitive impairment and demented patients. It allows\nassessing three processes: encoding, storage, and recollection of verbal\nepisodic memory.MethodsWe investigated the neural correlates of these three\nmemory processes in a large cohort study. The Memento cohort enrolled 2323\noutpatients presenting either with subjective cognitive decline or mild\ncognitive impairment who underwent cognitive, structural MRI and, for a subset,\nfluorodeoxyglucose--positron emission tomography evaluations.ResultsEncoding\nwas associated with a network including parietal and temporal cortices; storage\nwas mainly associated with entorhinal and parahippocampal regions, bilaterally;\nretrieval was associated with a widespread network encompassing frontal\nregions.DiscussionThe neural correlates of episodic memory processes can be\nassessed in large and standardized cohorts of patients at risk for Alzheimer's\ndisease. Their relation to pathophysiological markers of Alzheimer's disease\nremains to be studied.\n",
"title": "Neural correlates of episodic memory in the Memento cohort"
}
| null | null | null | null | true | null |
6937
| null |
Default
| null | null |
null |
{
"abstract": " Understanding protein function is one of the keys to understanding life at\nthe molecular level. It is also important in several scenarios including human\ndisease and drug discovery. In this age of rapid and affordable biological\nsequencing, the number of sequences accumulating in databases is rising with an\nincreasing rate. This presents many challenges for biologists and computer\nscientists alike. In order to make sense of this huge quantity of data, these\nsequences should be annotated with functional properties. UniProtKB consists of\ntwo components: i) the UniProtKB/Swiss-Prot database containing protein\nsequences with reliable information manually reviewed by expert bio-curators\nand ii) the UniProtKB/TrEMBL database that is used for storing and processing\nthe unknown sequences. Hence, for all proteins we have available the sequence\nalong with few more information such as the taxon and some structural domains.\nPairwise similarity can be defined and computed on proteins based on such\nattributes. Other important attributes, while present for proteins in\nSwiss-Prot, are often missing for proteins in TrEMBL, such as their function\nand cellular localization. The enormous number of protein sequences now in\nTrEMBL calls for rapid procedures to annotate them automatically. In this work,\nwe present DistNBLP, a novel Distributed Neighborhood-Based Label Propagation\napproach for large-scale annotation of proteins. To do this, the functional\nannotations of reviewed proteins are used to predict those of non-reviewed\nproteins using label propagation on a graph representation of the protein\ndatabase. DistNBLP is built on top of the \"akka\" toolkit for building resilient\ndistributed message-driven applications.\n",
"title": "Neighborhood-Based Label Propagation in Large Protein Graphs"
}
| null | null | null | null | true | null |
6938
| null |
Default
| null | null |
null |
{
"abstract": " Compressive sensing is a powerful technique for recovering sparse solutions\nof underdetermined linear systems, which is often encountered in uncertainty\nquantification analysis of expensive and high-dimensional physical models. We\nperform numerical investigations employing several compressive sensing solvers\nthat target the unconstrained LASSO formulation, with a focus on linear systems\nthat arise in the construction of polynomial chaos expansions. With core\nsolvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to\nmitigate overfitting through an automated selection of regularization constant\nbased on cross-validation, and a heuristic strategy to guide the stop-sampling\ndecision. Practical recommendations on parameter settings for these techniques\nare provided and discussed. The overall method is applied to a series of\nnumerical examples of increasing complexity, including large eddy simulations\nof supersonic turbulent jet-in-crossflow involving a 24-dimensional input.\nThrough empirical phase-transition diagrams and convergence plots, we\nillustrate sparse recovery performance under structures induced by polynomial\nchaos, accuracy and computational tradeoffs between polynomial bases of\ndifferent degrees, and practicability of conducting compressive sensing for a\nrealistic, high-dimensional physical application. Across test cases studied in\nthis paper, we find ADMM to have demonstrated empirical advantages through\nconsistent lower errors and faster computational times.\n",
"title": "Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions"
}
| null | null | null | null | true | null |
6939
| null |
Default
| null | null |
null |
{
"abstract": " We show that the class of groups with $k$-multiple context-free word problem\nis closed under graphs of groups with finite edge groups.\n",
"title": "Closure Properties in the Class of Multiple Context Free Groups"
}
| null | null | null | null | true | null |
6940
| null |
Default
| null | null |
null |
{
"abstract": " We study numerically the superconductor-insulator transition in\ntwo-dimensional inhomogeneous superconductors with gauge disorder, described by\nfour different quantum rotor models: a gauge glass, a flux glass, a binary\nphase glass and a Gaussian phase glass. The first two models, describe the\ncombined effect of geometrical disorder in the array of local superconducting\nislands and a uniform external magnetic field while the last two describe the\neffects of random negative Josephson-junction couplings or $\\pi$ junctions.\nMonte Carlo simulations in the path-integral representation of the models are\nused to determine the critical exponents and the universal conductivity at the\nquantum phase transition. The gauge and flux glass models display the same\ncritical behavior, within the estimated numerical uncertainties. Similar\nagreement is found for the binary and Gaussian phase-glass models. Despite the\ndifferent symmetries and disorder correlations, we find that the universal\nconductivity of these models is approximately the same. In particular, the\nratio of this value to that of the pure model agrees with recent experiments on\nnanohole thin film superconductors in a magnetic field, in the large disorder\nlimit.\n",
"title": "Random gauge models of the superconductor-insulator transition in two-dimensional disordered superconductors"
}
| null | null | null | null | true | null |
6941
| null |
Default
| null | null |
null |
{
"abstract": " We use techniques from functorial quantum field theory to provide a geometric\ndescription of the parity anomaly in fermionic systems coupled to background\ngauge and gravitational fields on odd-dimensional spacetimes. We give an\nexplicit construction of a geometric cobordism bicategory which incorporates\ngeneral background fields in a stack, and together with the theory of symmetric\nmonoidal bicategories we use it to provide the concrete forms of invertible\nextended quantum field theories which capture anomalies in both the path\nintegral and Hamiltonian frameworks. Specialising this situation by using the\nextension of the Atiyah-Patodi-Singer index theorem to manifolds with corners\ndue to Loya and Melrose, we obtain a new Hamiltonian perspective on the parity\nanomaly. We compute explicitly the 2-cocycle of the projective representation\nof the gauge symmetry on the quantum state space, which is defined in a\nparity-symmetric way by suitably augmenting the standard chiral fermionic Fock\nspaces with Lagrangian subspaces of zero modes of the Dirac Hamiltonian that\nnaturally appear in the index theorem. We describe the significance of our\nconstructions for the bulk-boundary correspondence in a large class of\ntime-reversal invariant gauge-gravity symmetry-protected topological phases of\nquantum matter with gapless charged boundary fermions, including the standard\ntopological insulator in 3+1 dimensions.\n",
"title": "Extended quantum field theory, index theory and the parity anomaly"
}
| null | null | null | null | true | null |
6942
| null |
Default
| null | null |
null |
{
"abstract": " We give a simple, multiplicative-weight update algorithm for learning\nundirected graphical models or Markov random fields (MRFs). The approach is\nnew, and for the well-studied case of Ising models or Boltzmann machines, we\nobtain an algorithm that uses a nearly optimal number of samples and has\nquadratic running time (up to logarithmic factors), subsuming and improving on\nall prior work. Additionally, we give the first efficient algorithm for\nlearning Ising models over general alphabets.\nOur main application is an algorithm for learning the structure of t-wise\nMRFs with nearly-optimal sample complexity (up to polynomial losses in\nnecessary terms that depend on the weights) and running time that is\n$n^{O(t)}$. In addition, given $n^{O(t)}$ samples, we can also learn the\nparameters of the model and generate a hypothesis that is close in statistical\ndistance to the true MRF. All prior work runs in time $n^{\\Omega(d)}$ for\ngraphs of bounded degree d and does not generate a hypothesis close in\nstatistical distance even for t=3. We observe that our runtime has the correct\ndependence on n and t assuming the hardness of learning sparse parities with\nnoise.\nOur algorithm--the Sparsitron-- is easy to implement (has only one parameter)\nand holds in the on-line setting. Its analysis applies a regret bound from\nFreund and Schapire's classic Hedge algorithm. It also gives the first solution\nto the problem of learning sparse Generalized Linear Models (GLMs).\n",
"title": "Learning Graphical Models Using Multiplicative Weights"
}
| null | null | null | null | true | null |
6943
| null |
Default
| null | null |
null |
{
"abstract": " We have studied neutron response of PARIS phoswich [LaBr$_3$(Ce)-NaI(Tl)]\ndetector which is being developed for measuring the high energy (E$_{\\gamma}$ =\n5 - 30 MeV) $\\gamma$ rays emitted from the decay of highly collective states in\natomic nuclei. The relative neutron detection efficiency of LaBr$_3$(Ce) and\nNaI(Tl) crystal of the phoswich detector has been measured using the\ntime-of-flight (TOF) and pulse shape discrimination (PSD) technique in the\nenergy range of E$_n$ = 1 - 9 MeV and compared with the GEANT4 based\nsimulations. It has been found that for E$_n$ $>$ 3 MeV, $\\sim$ 95 \\% of\nneutrons have the primary interaction in the LaBr$_3$(Ce) crystal, indicating\nthat a clear n-$\\gamma$ separation can be achieved even at $\\sim$15 cm flight\npath.\n",
"title": "Neutron response of PARIS phoswich detector"
}
| null | null | null | null | true | null |
6944
| null |
Default
| null | null |
null |
{
"abstract": " The mine detection in an unexplored area is an optimization problem where\nmultiple mines, randomly distributed throughout an area, need to be discovered\nand disarmed in a minimum amount of time. We propose a strategy to explore an\nunknown area, using a stigmergy approach based on ants behavior, and a novel\nswarm based protocol to recruit and coordinate robots for disarming the mines\ncooperatively. Simulation tests are presented to show the effectiveness of our\nproposed Ant-based Task Robot Coordination (ATRC) with only the exploration\ntask and with both exploration and recruiting strategies. Multiple minimization\nobjectives have been considered: the robots' recruiting time and the overall\narea exploration time. We discuss, through simulation, different cases under\ndifferent network and field conditions, performed by the robots. The results\nhave shown that the proposed decentralized approaches enable the swarm of\nrobots to perform cooperative tasks intelligently without any central control.\n",
"title": "Swarm robotics in wireless distributed protocol design for coordinating robots involved in cooperative tasks"
}
| null | null | null | null | true | null |
6945
| null |
Default
| null | null |
null |
{
"abstract": " We present the strongest known knot invariant that can be computed\neffectively (in polynomial time).\n",
"title": "A polynomial time knot polynomial"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6946
| null |
Validated
| null | null |
null |
{
"abstract": " The main purpose of this paper is to formalize the modelling process,\nanalysis and mathematical definition of corruption when entering into a\ncontract between principal agent and producers. The formulation of the problem\nand the definition of concepts for the general case are considered. For\ndefiniteness, all calculations and formulas are given for the case of three\nproducers, one principal agent and one intermediary. Economic analysis of\ncorruption allowed building a mathematical model of interaction between agents.\nFinancial resources distribution problem in a contract with a corrupted\nintermediary is considered.Then proposed conditions for corruption emergence\nand its possible consequences. Optimal non-corruption schemes of financial\nresources distribution in a contract are formed, when principal agent's choice\nis limited first only by asymmetrical information and then also by external\ninfluences.Numerical examples suggesting optimal corruption-free agents'\nbehaviour are presented.\n",
"title": "Corruption-free scheme of entering into contract: mathematical model"
}
| null | null | null | null | true | null |
6947
| null |
Default
| null | null |
null |
{
"abstract": " We study biplane graphs drawn on a finite planar point set $S$ in general\nposition. This is the family of geometric graphs whose vertex set is $S$ and\ncan be decomposed into two plane graphs. We show that two maximal biplane\ngraphs---in the sense that no edge can be added while staying biplane---may\ndiffer in the number of edges, and we provide an efficient algorithm for adding\nedges to a biplane graph to make it maximal. We also study extremal properties\nof maximal biplane graphs such as the maximum number of edges and the largest\nmaximum connectivity over $n$-element point sets.\n",
"title": "Geometric Biplane Graphs I: Maximal Graphs"
}
| null | null | null | null | true | null |
6948
| null |
Default
| null | null |
null |
{
"abstract": " The main goal of this paper is to design a market operator (MO) and a\ndistribution network operator (DNO) for a network of microgrids in\nconsideration of multiple objectives. This is a high-level design and only\nthose microgrids with nondispatchable renewable energy sources are considered.\nFor a power grid in the network, the net value derived from providing power to\nthe network must be maximized. For a microgrid, it is desirable to maximize the\nnet gain derived from consuming the received power. Finally, for an independent\nsystem operator, stored energy levels at microgrids must be maintained as close\nas possible to storage capacity to secure network emergency operation. To\nachieve these objectives, a multiobjective approach is proposed. The price\nsignal generated by the MO and power distributed by the DNO are assigned based\non a Pareto optimal solution of a multiobjective optimization problem. By using\nthe proposed approach, a fair scheme that does not advantage one particular\nobjective can be attained. Simulations are provided to validate the proposed\nmethodology.\n",
"title": "A Multiobjective Approach to Multimicrogrid System Design"
}
| null | null | null | null | true | null |
6949
| null |
Default
| null | null |
null |
{
"abstract": " In Web search, entity-seeking queries often trigger a special Question\nAnswering (QA) system. It may use a parser to interpret the question to a\nstructured query, execute that on a knowledge graph (KG), and return direct\nentity responses. QA systems based on precise parsing tend to be brittle: minor\nsyntax variations may dramatically change the response. Moreover, KG coverage\nis patchy. At the other extreme, a large corpus may provide broader coverage,\nbut in an unstructured, unreliable form. We present AQQUCN, a QA system that\ngracefully combines KG and corpus evidence. AQQUCN accepts a broad spectrum of\nquery syntax, between well-formed questions to short `telegraphic' keyword\nsequences. In the face of inherent query ambiguities, AQQUCN aggregates signals\nfrom KGs and large corpora to directly rank KG entities, rather than commit to\none semantic interpretation of the query. AQQUCN models the ideal\ninterpretation as an unobservable or latent variable. Interpretations and\ncandidate entity responses are scored as pairs, by combining signals from\nmultiple convolutional networks that operate collectively on the query, KG and\ncorpus. On four public query workloads, amounting to over 8,000 queries with\ndiverse query syntax, we see 5--16% absolute improvement in mean average\nprecision (MAP), compared to the entity ranking performance of recent systems.\nOur system is also competitive at entity set retrieval, almost doubling F1\nscores for challenging short queries.\n",
"title": "Neural Architecture for Question Answering Using a Knowledge Graph and Web Corpus"
}
| null | null | null | null | true | null |
6950
| null |
Default
| null | null |
null |
{
"abstract": " We show that $R^{cl}(\\omega\\cdot 2,3)^2$ is equal to $\\omega^3\\cdot 2$.\n",
"title": "Calculating the closed ordinal Ramsey number $R^{cl}(ω\\cdot 2,3)^2$"
}
| null | null | null | null | true | null |
6951
| null |
Default
| null | null |
null |
{
"abstract": " Spatially explicit capture recapture (SECR) models have gained enormous\npopularity to solve abundance estimation problems in ecology. In this study, we\ndevelop a novel Bayesian SECR model that disentangles the process of animal\nmovement through a detector from the process of recording data by a detector in\nthe face of imperfect detection. We integrate this complexity into an advanced\nversion of a recent SECR model involving partially identified individuals\n(Royle, 2015). We assess the performance of our model over a range of realistic\nsimulation scenarios and demonstrate that estimates of population size $N$\nimprove when we utilize the proposed model relative to the model that does not\nexplicitly estimate trap detection probability (Royle, 2015). We confront and\ninvestigate the proposed model with a spatial capture-recapture data set from a\ncamera trapping survey on tigers (\\textit{Panthera tigris}) in Nagarahole,\nsouthern India. Trap detection probability is estimated at 0.489 and therefore\njustifies the necessity to utilize our model in field situations. We discuss\npossible extensions, future work and relevance of our model to other\nstatistical applications beyond ecology.\n",
"title": "A spatially explicit capture recapture model for partially identified individuals when trap detection rate is less than one"
}
| null | null | null | null | true | null |
6952
| null |
Default
| null | null |
null |
{
"abstract": " Obtaining models that capture imaging markers relevant for disease\nprogression and treatment monitoring is challenging. Models are typically based\non large amounts of data with annotated examples of known markers aiming at\nautomating detection. High annotation effort and the limitation to a vocabulary\nof known markers limit the power of such approaches. Here, we perform\nunsupervised learning to identify anomalies in imaging data as candidates for\nmarkers. We propose AnoGAN, a deep convolutional generative adversarial network\nto learn a manifold of normal anatomical variability, accompanying a novel\nanomaly scoring scheme based on the mapping from image space to a latent space.\nApplied to new data, the model labels anomalies, and scores image patches\nindicating their fit into the learned distribution. Results on optical\ncoherence tomography images of the retina demonstrate that the approach\ncorrectly identifies anomalous images, such as images containing retinal fluid\nor hyperreflective foci.\n",
"title": "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery"
}
| null | null | null | null | true | null |
6953
| null |
Default
| null | null |
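For the AnoGAN record above, the anomaly score is the part that lends itself to a sketch: after training, a latent code is optimised so that the generator reproduces the query image, and a weighted sum of a residual loss and a discrimination loss scores the fit. The PyTorch snippet below is a minimal sketch under assumed toy architectures, an assumed weighting `lambda_`, and an assumed search budget; it is not the published model or its configuration.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, img_dim))  # toy generator
D_feat = nn.Sequential(nn.Linear(img_dim, 32), nn.ReLU())                           # toy discriminator features
for p in list(G.parameters()) + list(D_feat.parameters()):
    p.requires_grad_(False)              # only the latent code is optimised at scoring time

def anomaly_score(x, n_steps=200, lambda_=0.1):
    """Search latent space for the code whose generated sample is closest to x,
    then combine residual and (feature-matching) discrimination losses."""
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(n_steps):
        opt.zero_grad()
        x_hat = G(z)
        loss_res = (x - x_hat).abs().sum()                    # residual (reconstruction) loss
        loss_disc = (D_feat(x) - D_feat(x_hat)).abs().sum()   # discrimination loss
        loss = (1 - lambda_) * loss_res + lambda_ * loss_disc
        loss.backward()
        opt.step()
    return loss.item()

x = torch.randn(1, img_dim)              # stand-in for a (flattened) test image
print(anomaly_score(x))
```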
null |
{
"abstract": " Recent advances of derivative-free optimization allow efficient approximating\nthe global optimal solutions of sophisticated functions, such as functions with\nmany local optima, non-differentiable and non-continuous functions. This\narticle describes the ZOOpt (this https URL) toolbox that\nprovides efficient derivative-free solvers and are designed easy to use. ZOOpt\nprovides a Python package for single-thread optimization, and a light-weighted\ndistributed version with the help of the Julia language for Python described\nfunctions. ZOOpt toolbox particularly focuses on optimization problems in\nmachine learning, addressing high-dimensional, noisy, and large-scale problems.\nThe toolbox is being maintained toward ready-to-use tool in real-world machine\nlearning tasks.\n",
"title": "ZOOpt: Toolbox for Derivative-Free Optimization"
}
| null | null | null | null | true | null |
6954
| null |
Default
| null | null |
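As a concrete illustration of the kind of derivative-free call ZOOpt targets, the snippet below minimises a simple sphere function. It follows the `Dimension`/`Objective`/`Parameter`/`Opt` interface documented for the Python package, but the exact names, arguments, and defaults should be checked against the toolbox version you install; treat this as an assumed usage sketch rather than authoritative API documentation.

```python
# Assumed ZOOpt-style usage; verify the API against the installed zoopt version.
from zoopt import Dimension, Objective, Parameter, Opt

def sphere(solution):
    x = solution.get_x()                 # candidate point as a Python list
    return sum(v * v for v in x)         # value to be minimised

dim = 10
space = Dimension(dim, [[-1, 1]] * dim, [True] * dim)        # box bounds, continuous coordinates
objective = Objective(sphere, space)
solution = Opt.min(objective, Parameter(budget=100 * dim))   # budget = number of function evaluations
print(solution.get_x(), solution.get_value())
```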
null |
{
"abstract": " In this work, we propose a model for estimating volatility from financial\ntime series, extending the non-Gaussian family of space-state models with exact\nmarginal likelihood proposed by Gamerman, Santos and Franco (2013). On the\nliterature there are models focused on estimating financial assets risk,\nhowever, most of them rely on MCMC methods based on Metropolis algorithms,\nsince full conditional posterior distributions are not known. We present an\nalternative model capable of estimating the volatility, in an automatic way,\nsince all full conditional posterior distributions are known, and it is\npossible to obtain an exact sample of parameters via Gibbs Sampler. The\nincorporation of jumps in returns allows the model to capture speculative\nmovements of the data, so that their influence does not propagate to\nvolatility. We evaluate the performance of the algorithm using synthetic and\nreal data time series.\nKeywords: Financial time series, Stochastic volatility, Gibbs Sampler,\nDynamic linear models.\n",
"title": "Non-Gaussian Stochastic Volatility Model with Jumps via Gibbs Sampler"
}
| null | null | null | null | true | null |
6955
| null |
Default
| null | null |
null |
{
"abstract": " To overcome the travelling difficulty for the visually impaired group, this\npaper presents a novel ETA (Electronic Travel Aids)-smart guiding device in the\nshape of a pair of eyeglasses for giving these people guidance efficiently and\nsafely. Different from existing works, a novel multi sensor fusion based\nobstacle avoiding algorithm is proposed, which utilizes both the depth sensor\nand ultrasonic sensor to solve the problems of detecting small obstacles, and\ntransparent obstacles, e.g. the French door. For totally blind people, three\nkinds of auditory cues were developed to inform the direction where they can go\nahead. Whereas for weak sighted people, visual enhancement which leverages the\nAR (Augment Reality) technique and integrates the traversable direction is\nadopted. The prototype consisting of a pair of display glasses and several low\ncost sensors is developed, and its efficiency and accuracy were tested by a\nnumber of users. The experimental results show that the smart guiding glasses\ncan effectively improve the user's travelling experience in complicated indoor\nenvironment. Thus it serves as a consumer device for helping the visually\nimpaired people to travel safely.\n",
"title": "Smart Guiding Glasses for Visually Impaired People in Indoor Environment"
}
| null | null | null | null | true | null |
6956
| null |
Default
| null | null |
null |
{
"abstract": " Poisson factorization is a probabilistic model of users and items for\nrecommendation systems, where the so-called implicit consumer data is modeled\nby a factorized Poisson distribution. There are many variants of Poisson\nfactorization methods who show state-of-the-art performance on real-world\nrecommendation tasks. However, most of them do not explicitly take into account\nthe temporal behavior and the recurrent activities of users which is essential\nto recommend the right item to the right user at the right time. In this paper,\nwe introduce Recurrent Poisson Factorization (RPF) framework that generalizes\nthe classical PF methods by utilizing a Poisson process for modeling the\nimplicit feedback. RPF treats time as a natural constituent of the model and\nbrings to the table a rich family of time-sensitive factorization models. To\nelaborate, we instantiate several variants of RPF who are capable of handling\ndynamic user preferences and item specification (DRPF), modeling the\nsocial-aspect of product adoption (SRPF), and capturing the consumption\nheterogeneity among users and items (HRPF). We also develop a variational\nalgorithm for approximate posterior inference that scales up to massive data\nsets. Furthermore, we demonstrate RPF's superior performance over many\nstate-of-the-art methods on synthetic dataset, and large scale real-world\ndatasets on music streaming logs, and user-item interactions in M-Commerce\nplatforms.\n",
"title": "Recurrent Poisson Factorization for Temporal Recommendation"
}
| null | null | null | null | true | null |
6957
| null |
Default
| null | null |
null |
{
"abstract": " Using the unfolding method given in \\cite{HL}, we prove the conjectures on\nsign-coherence and a recurrence formula respectively of ${\\bf g}$-vectors for\nacyclic sign-skew-symmetric cluster algebras. As a following consequence, the\nconjecture is affirmed in the same case which states that the ${\\bf g}$-vectors\nof any cluster form a basis of $\\mathbb Z^n$. Also, the additive\ncategorification of an acyclic sign-skew-symmetric cluster algebra $\\mathcal\nA(\\Sigma)$ is given, which is realized as $(\\mathcal C^{\\widetilde Q},\\Gamma)$\nfor a Frobenius $2$-Calabi-Yau category $\\mathcal C^{\\widetilde Q}$ constructed\nfrom an unfolding $(Q,\\Gamma)$ of the acyclic exchange matrix $B$ of $\\mathcal\nA(\\Sigma)$.\n",
"title": "Categorification of sign-skew-symmetric cluster algebras and some conjectures on g-vectors"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6958
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we address the Bounded Cardinality Hub Location Routing with\nRoute Capacity wherein each hub acts as a transshipment node for one directed\nroute. The number of hubs lies between a minimum and a maximum and the\nhub-level network is a complete subgraph. The transshipment operations take\nplace at the hub nodes and flow transfer time from a hub-level transporter to a\nspoke-level vehicle influences spoke- to-hub allocations. We propose a\nmathematical model and a branch-and-cut algorithm based on Benders\ndecomposition to solve the problem. To accelerate convergence, our solution\nframework embeds an efficient heuristic producing high-quality solutions in\nshort computation times. In addition, we show how symmetry can be exploited to\naccelerate and improve the performance of our method.\n",
"title": "Capacitated Bounded Cardinality Hub Routing Problem: Model and Solution Algorithm"
}
| null | null |
[
"Mathematics"
] | null | true | null |
6959
| null |
Validated
| null | null |
null |
{
"abstract": " One of recent trends [30, 31, 14] in network architec- ture design is\nstacking small filters (e.g., 1x1 or 3x3) in the entire network because the\nstacked small filters is more ef- ficient than a large kernel, given the same\ncomputational complexity. However, in the field of semantic segmenta- tion,\nwhere we need to perform dense per-pixel prediction, we find that the large\nkernel (and effective receptive field) plays an important role when we have to\nperform the clas- sification and localization tasks simultaneously. Following\nour design principle, we propose a Global Convolutional Network to address both\nthe classification and localization issues for the semantic segmentation. We\nalso suggest a residual-based boundary refinement to further refine the ob-\nject boundaries. Our approach achieves state-of-art perfor- mance on two public\nbenchmarks and significantly outper- forms previous results, 82.2% (vs 80.2%)\non PASCAL VOC 2012 dataset and 76.9% (vs 71.8%) on Cityscapes dataset.\n",
"title": "Large Kernel Matters -- Improve Semantic Segmentation by Global Convolutional Network"
}
| null | null | null | null | true | null |
6960
| null |
Default
| null | null |
null |
{
"abstract": " We study how the gas in a sample of galaxies (M* > 10e9 Msun) in clusters,\nobtained in a cosmological simulation, is affected by the interaction with the\nintra-cluster medium (ICM). The dynamical state of each elemental parcel of gas\nis studied using the total energy. At z ~ 2, the galaxies in the simulation are\nevenly distributed within clusters, moving later on towards more central\nlocations. In this process, gas from the ICM is accreted and mixed with the gas\nin the galactic halo. Simultaneously, the interaction with the environment\nremoves part of the gas. A characteristic stellar mass around M* ~ 10e10 Msun\nappears as a threshold marking two differentiated behaviours. Below this mass,\ngalaxies are located at the external part of clusters and have eccentric\norbits. The effect of the interaction with the environment is marginal. Above,\ngalaxies are mainly located at the inner part of clusters with mostly radial\norbits with low velocities. In these massive systems, part of the gas, strongly\ncorrelated with the stellar mass of the galaxy, is removed. The amount of\nremoved gas is sub-dominant compared with the quantity of retained gas which is\ncontinuously influenced by the hot gas coming from the ICM. The analysis of\nindividual galaxies reveals the existence of a complex pattern of flows,\nturbulence and a constant fuelling of gas to the hot corona from the ICM that\ncould make the global effect of the interaction of galaxies with their\nenvironment to be substantially less dramatic than previously expected.\n",
"title": "Is ram-pressure stripping an efficient mechanism to remove gas in galaxies?"
}
| null | null | null | null | true | null |
6961
| null |
Default
| null | null |
null |
{
"abstract": " We present hidden fluid mechanics (HFM), a physics informed deep learning\nframework capable of encoding an important class of physical laws governing\nfluid motions, namely the Navier-Stokes equations. In particular, we seek to\nleverage the underlying conservation laws (i.e., for mass, momentum, and\nenergy) to infer hidden quantities of interest such as velocity and pressure\nfields merely from spatio-temporal visualizations of a passive scaler (e.g.,\ndye or smoke), transported in arbitrarily complex domains (e.g., in human\narteries or brain aneurysms). Our approach towards solving the aforementioned\ndata assimilation problem is unique as we design an algorithm that is agnostic\nto the geometry or the initial and boundary conditions. This makes HFM highly\nflexible in choosing the spatio-temporal domain of interest for data\nacquisition as well as subsequent training and predictions. Consequently, the\npredictions made by HFM are among those cases where a pure machine learning\nstrategy or a mere scientific computing approach simply cannot reproduce. The\nproposed algorithm achieves accurate predictions of the pressure and velocity\nfields in both two and three dimensional flows for several benchmark problems\nmotivated by real-world applications. Our results demonstrate that this\nrelatively simple methodology can be used in physical and biomedical problems\nto extract valuable quantitative information (e.g., lift and drag forces or\nwall shear stresses in arteries) for which direct measurements may not be\npossible.\n",
"title": "Hidden Fluid Mechanics: A Navier-Stokes Informed Deep Learning Framework for Assimilating Flow Visualization Data"
}
| null | null | null | null | true | null |
6962
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of testing for structure in networks using relations\nbetween the observed frequencies of small subgraphs. We consider the statistics\n\\begin{align*} T_3 & =(\\text{edge frequency})^3 - \\text{triangle frequency}\\\\\nT_2 & =3(\\text{edge frequency})^2(1-\\text{edge frequency}) - \\text{V-shape\nfrequency} \\end{align*} and prove a central limit theorem for $(T_2, T_3)$\nunder an Erdős-Rényi null model. We then analyze the power of the\nassociated $\\chi^2$ test statistic under a general class of alternative models.\nIn particular, when the alternative is a $k$-community stochastic block model,\nwith $k$ unknown, the power of the test approaches one. Moreover, the\nsignal-to-noise ratio required is strictly weaker than that required for\ncommunity detection. We also study the relation with other statistics over\nthree-node subgraphs, and analyze the error under two natural algorithms for\nsampling small subgraphs. Together, our results show how global structural\ncharacteristics of networks can be inferred from local subgraph frequencies,\nwithout requiring the global community structure to be explicitly estimated.\n",
"title": "Testing Network Structure Using Relations Between Small Subgraph Probabilities"
}
| null | null | null | null | true | null |
6963
| null |
Default
| null | null |
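To make the two statistics in the record above concrete, the snippet below computes the edge, triangle, and V-shape frequencies of a graph from its adjacency matrix and forms $T_2$ and $T_3$. The normalisations (edges over $\binom{n}{2}$, triangles and V-shapes over $\binom{n}{3}$) are my reading of the definitions implied by the abstract, not necessarily the paper's exact conventions.

```python
import numpy as np
from math import comb

def t2_t3(A):
    """A: symmetric 0/1 adjacency matrix with zero diagonal."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    edges = A.sum() / 2
    triangles = np.trace(A @ A @ A) / 6
    paths2 = (deg * (deg - 1) / 2).sum()   # paths on 3 vertices, counted at the centre vertex
    vshapes = paths2 - 3 * triangles       # triples with exactly two edges
    p = edges / comb(n, 2)                 # edge frequency
    tri_freq = triangles / comb(n, 3)
    v_freq = vshapes / comb(n, 3)
    T3 = p ** 3 - tri_freq
    T2 = 3 * p ** 2 * (1 - p) - v_freq
    return T2, T3

# toy check on an Erdos-Renyi sample: both statistics should be close to zero
rng = np.random.default_rng(1)
n, p = 200, 0.1
A = (rng.uniform(size=(n, n)) < p).astype(int)
A = np.triu(A, 1); A = A + A.T
print(t2_t3(A))
```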
null |
{
"abstract": " Three dimensional magnetohydrodynamical simulations were carried out in order\nto perform a new polarization study of the radio emission of the supernova\nremnant SN 1006. These simulations consider that the remnant expands into a\nturbulent interstellar medium (including both magnetic field and density\nperturbations). Based on the referenced-polar angle technique, a statistical\nstudy was done on observational and numerical magnetic field position-angle\ndistributions. Our results show that a turbulent medium with an adiabatic index\nof 1.3 can reproduce the polarization properties of the SN 1006 remnant. This\nstatistical study reveals itself as a useful tool for obtaining the orientation\nof the ambient magnetic field, previous to be swept up by the main supernova\nremnant shock.\n",
"title": "A 3D MHD simulation of SN 1006: a polarized emission study for the turbulent case"
}
| null | null | null | null | true | null |
6964
| null |
Default
| null | null |
null |
{
"abstract": " Automatic mesh-based shape generation is of great interest across a wide\nrange of disciplines, from industrial design to gaming, computer graphics and\nvarious other forms of digital art. While most traditional methods focus on\nprimitive based model generation, advances in deep learning made it possible to\nlearn 3-dimensional geometric shape representations in an end-to-end manner.\nHowever, most current deep learning based frameworks focus on the\nrepresentation and generation of voxel and point-cloud based shapes, making it\nnot directly applicable to design and graphics communities. This study\naddresses the needs for automatic generation of mesh-based geometries, and\npropose a novel framework that utilizes signed distance function representation\nthat generates detail preserving three-dimensional surface mesh by a deep\nlearning based approach.\n",
"title": "Hierarchical Detail Enhancing Mesh-Based Shape Generation with 3D Generative Adversarial Network"
}
| null | null | null | null | true | null |
6965
| null |
Default
| null | null |
null |
{
"abstract": " The Sunyaev-Zel'dovich (SZ) effect is a powerful probe of the evolution of\nstructures in the universe, and is thus highly sensitive to cosmological\nparameters $\\sigma_8$ and $\\Omega_m$, though its power is hampered by the\ncurrent uncertainties on the cluster mass calibration. In this analysis we\nrevisit constraints on these cosmological parameters as well as the hydrostatic\nmass bias, by performing (i) a robust estimation of the tSZ power-spectrum,\n(ii) a complete modeling and analysis of the tSZ bispectrum, and (iii) a\ncombined analysis of galaxy clusters number count, tSZ power spectrum, and tSZ\nbispectrum. From this analysis, we derive as final constraints $\\sigma_8 = 0.79\n\\pm 0.02$, $\\Omega_{\\rm m} = 0.29 \\pm 0.02$, and $(1-b) = 0.71 \\pm 0.07$. These\nresults favour a high value for the hydrostatic mass bias compared to numerical\nsimulations and weak-lensing based estimations. They are furthermore consistent\nwith both previous tSZ analyses, CMB derived cosmological parameters, and\nancillary estimations of the hydrostatic mass bias.\n",
"title": "Combined analysis of galaxy cluster number count, thermal Sunyaev-Zel'dovich power spectrum, and bispectrum"
}
| null | null | null | null | true | null |
6966
| null |
Default
| null | null |
null |
{
"abstract": " A number of recent works have used a variety of combinatorial constructions\nto derive Tanner graphs for LDPC codes and some of these have been shown to\nperform well in terms of their probability of error curves and error floors.\nSuch graphs are bipartite and many of these constructions yield biregular\ngraphs where the degree of left vertices is a constant $c+1$ and that of the\nright vertices is a constant $d+1$. Such graphs are termed $(c+1,d+1)$\nbiregular bipartite graphs here. One property of interest in such work is the\ngirth of the graph and the number of short cycles in the graph, cycles of\nlength either the girth or slightly larger. Such numbers have been shown to be\nrelated to the error floor of the probability of error curve of the related\nLDPC code. Using known results of graph theory, it is shown how the girth and\nthe number of cycles of length equal to the girth may be computed for these\n$(c+1,d+1)$ biregular bipartite graphs knowing only the parameters $c$ and $d$\nand the numbers of left and right vertices. While numerous algorithms to\ndetermine the number of short cycles in arbitrary graphs exist, the reduction\nof the problem from an algorithm to a computation for these biregular bipartite\ngraphs is of interest.\n",
"title": "On short cycle enumeration in biregular bipartite graphs"
}
| null | null | null | null | true | null |
6967
| null |
Default
| null | null |
null |
{
"abstract": " Exhaled air contains aerosol of submicron droplets of the alveolar lining\nfluid (ALF), which are generated in the small airways of a human lung. Since\nthe exhaled particles are micro-samples of the ALF, their trapping opens up an\nopportunity to collect non-invasively a native material from respiratory tract.\nRecent studies of the particle characteristics (such as size distribution,\nconcentration and composition) in healthy and diseased subjects performed under\nvarious conditions have demonstrated a high potential of the analysis of\nexhaled aerosol droplets for identifying and monitoring pathological processes\nin the ALF. In this paper we present a new method for sampling of aerosol\nparticles during the exhaled breath barbotage (EBB) through liquid. The\nbarbotage procedure results in accumulation of the pulmonary surfactant, being\nthe main component of ALF, on the liquid surface, which makes possible the\nstudy its surface properties. We also propose a data processing algorithm to\nevaluate the surface pressure ($\\pi$) -- surface concentration ($\\Gamma$)\nisotherm from the raw data measured in a Langmuir trough. Finally, we analyze\nthe $(\\pi-\\Gamma)$ isotherms obtained for the samples collected in the groups\nof healthy volunteers and patients with pulmonary tuberculosis and compare them\nwith the isotherm measured for the artificial pulmonary surfactant.\n",
"title": "Exhaled breath barbotage: a new method for pulmonary surfactant dysfunction assessment"
}
| null | null | null | null | true | null |
6968
| null |
Default
| null | null |
null |
{
"abstract": " Expressive variations of tempo and dynamics are an important aspect of music\nperformances, involving a variety of underlying factors. Previous work has\nshowed a relation between such expressive variations (in particular expressive\ntempo) and perceptual characteristics derived from the musical score, such as\nmusical expectations, and perceived tension. In this work we use a\ncomputational approach to study the role of three measures of tonal tension\nproposed by Herremans and Chew (2016) in the prediction of expressive\nperformances of classical piano music. These features capture tonal\nrelationships of the music represented in Chew's spiral array model, a three\ndimensional representation of pitch classes, chords and keys constructed in\nsuch a way that spatial proximity represents close tonal relationships. We use\nnon-linear sequential models (recurrent neural networks) to assess the\ncontribution of these features to the prediction of expressive dynamics and\nexpressive tempo using a dataset of Mozart piano sonatas performed by a\nprofessional concert pianist. Experiments of models trained with and without\ntonal tension features show that tonal tension helps predict change of tempo\nand dynamics more than absolute tempo and dynamics values. Furthermore, the\nimprovement is stronger for dynamics than for tempo.\n",
"title": "A Computational Study of the Role of Tonal Tension in Expressive Piano Performance"
}
| null | null | null | null | true | null |
6969
| null |
Default
| null | null |
null |
{
"abstract": " This paper deals with the convergence time analysis of a class of fixed-time\nstable systems with the aim to provide a new non-conservative upper bound for\nits settling time. Our contribution is threefold. First, we revisit a\nwell-known class of fixed-time stable systems showing the conservatism of the\nclassical upper estimate of the settling time. Second, we provide the smallest\nconstant that uniformly upper bounds the settling time of any trajectory of the\nsystem under consideration. Then, introducing a slight modification of the\nprevious class of fixed-time systems, we propose a new predefined-time\nconvergent algorithm where the least upper bound of the settling time is set a\npriori as a parameter of the system. This calculation is a valuable\ncontribution toward online differentiators, observers, and controllers in\napplications with real-time constraints.\n",
"title": "On the least upper bound for the settling time of a class of fixed-time stable systems"
}
| null | null | null | null | true | null |
6970
| null |
Default
| null | null |
null |
{
"abstract": " Self-organization is a process where order of a whole system arises out of\nlocal interactions between small components of a system.\nEmergy, defined as the amount of (solar) energy used to make a product or a\nservice, is becoming an important ecological indicator. To explain observed\nself-organization of systems by emergy the Maximum Empower Principle (MEP) was\nproposed initially without a mathematical formulation.\nEmergy analysis is based on four rules called emergy algebra. Most of emergy\ncomputations in steady state are in fact approximate results, which rely on\nlinear algebra. In such a context, a mathematical formulation of the MEP has\nbeen proposed by Giannantoni (2002).\nIn 2012 Le Corre and the second author of this paper have proposed a rigorous\nmathematical framework for emergy analysis. They established that the exact\ncomputation of emergy is based on the so-called max-plus algebra and seven\ncoherent axioms that replace the emergy algebra. In this paper the MEP in\nsteady state is formalized in the context of the max-plus algebra and graph\ntheory. The main concepts of the paper are (a) a particular graph called\n'emergy graph', (b) the notion of compatible paths of the emergy graph, and (c)\nsets of compatible paths, which are called 'emergy states'. The main results of\nthe paper are as follows:\n(1) Emergy is mathematically expressed as a maximum over all possible emergy\nstates. (2) The maximum is always reached by an emergy state. (3) Only prevail\nemergy states for which the maximum is reached.\n",
"title": "Self-organization and the Maximum Empower Principle in the Framework of max-plus Algebra"
}
| null | null | null | null | true | null |
6971
| null |
Default
| null | null |
null |
{
"abstract": " Nonequilibrium work-Hamiltonian connection for a microstate plays a central\nrole in diverse branches of statistical thermodynamics (fluctuation theorems,\nquantum thermodynamics, stochastic thermodynamics, etc.). We show that the\nchange in the Hamiltonian for a microstate should be identified with the work\ndone by it, and not the work done on it. This contradicts the current practice\nin the field. The difference represents a contribution whose average gives the\nwork that is dissipated due to irreversibility. As the latter has been\noverlooked, the current identification does not properly account for\nirreversibilty. As an example, we show that the corrected version of\nJarzynski's relation can be applied to free expansion, where the original\nrelation fails. Thus, the correction has far-reaching consequences and requires\nreassessment of current applications.\n",
"title": "Nonequilibrium Work and its Hamiltonian Connection for a Microstate in Nonequilibrium Statistical Thermodynamics: A Case of Mistaken Identity"
}
| null | null | null | null | true | null |
6972
| null |
Default
| null | null |
null |
{
"abstract": " We study thick subcategories defined by modules of complexity one in\n$\\underline{\\md}R$, where $R$ is the exterior algebra in $n+1$ indeterminates.\n",
"title": "Thick Subcategories of the stable category of modules over the exterior algebra I"
}
| null | null | null | null | true | null |
6973
| null |
Default
| null | null |
null |
{
"abstract": " We examine whether various characteristics of planet-driven spiral arms can\nbe used to constrain the masses of unseen planets and their positions within\ntheir disks. By carrying out two-dimensional hydrodynamic simulations varying\nplanet mass and disk gas temperature, we find that a larger number of spiral\narms form with a smaller planet mass and a lower disk temperature. A planet\nexcites two or more spiral arms interior to its orbit for a range of disk\ntemperature characterized by the disk aspect ratio $0.04\\leq(h/r)_p\\leq0.15$,\nwhereas exterior to a planet's orbit multiple spiral arms can form only in cold\ndisks with $(h/r)_p \\lesssim 0.06$. Constraining the planet mass with the pitch\nangle of spiral arms requires accurate disk temperature measurements that might\nbe challenging even with ALMA. However, the property that the pitch angle of\nplanet-driven spiral arms decreases away from the planet can be a powerful\ndiagnostic to determine whether the planet is located interior or exterior to\nthe observed spirals. The arm-to-arm separations increase as a function of\nplanet mass, consistent with previous studies; however, the exact slope depends\non disk temperature as well as the radial location where the arm-to-arm\nseparations are measured. We apply these diagnostics to the spiral arms seen in\nMWC 758 and Elias 2-27. As shown in Bae et al. (2017), planet-driven spiral\narms can create concentric rings and gaps, which can produce more dominant\nobservable signature than spiral arms under certain circumstances. We discuss\nthe observability of planet-driven spiral arms versus rings and gaps.\n",
"title": "Planet-driven spiral arms in protoplanetary disks: II. Implications"
}
| null | null |
[
"Physics"
] | null | true | null |
6974
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we study the ideal structure of reduced $C^*$-algebras\n$C^*_r(G)$ associated to étale groupoids $G$. In particular, we characterize\nwhen there is a one-to-one correspondence between the closed, two-sided ideals\nin $C_r^*(G)$ and the open invariant subsets of the unit space $G^{(0)}$ of\n$G$. As a consequence, we show that if $G$ is an inner exact, essentially\nprincipal, ample groupoid, then $C_r^*(G)$ is (strongly) purely infinite if and\nonly if every non-zero projection in $C_0(G^{(0)})$ is properly infinite in\n$C_r^*(G)$. We also establish a sufficient condition on the ample groupoid $G$\nthat ensures pure infiniteness of $C_r^*(G)$ in terms of paradoxicality of\ncompact open subsets of the unit space $G^{(0)}$.\nFinally, we introduce the type semigroup for ample groupoids and also obtain\na dichotomy result: Let $G$ be an ample groupoid with compact unit space which\nis minimal and topologically principal. If the type semigroup is almost\nunperforated, then $C_r^*(G)$ is a simple $C^*$-algebra which is either stably\nfinite or strongly purely infinite.\n",
"title": "Ideal structure and pure infiniteness of ample groupoid $C^*$-algebras"
}
| null | null | null | null | true | null |
6975
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we study nonparametric mean curvature type flows in\n$M\\times\\mathbb{R}$ which are represented as graphs $(x,u(x,t))$ over a domain\nin a Riemannian manifold $M$ with prescribed contact angle. The speed of $u$ is\nthe mean curvature speed minus an admissible function $\\psi(x,u,Du)$. Long time\nexistence and uniformly convergence are established if $\\psi(x,u, Du)\\equiv 0$\nwith vertical contact angle and $\\psi(x,u,Du)=h(x,u)\\omega$ with $h_u(x,u)\\geq\nh_0>0$ and $\\omega=\\sqrt{1+|Du|^2}$. Their applications include mean curvature\ntype equations with prescribed contact angle boundary condition and the\nasymptotic behavior of nonparametric mean curvature flows of graphs over a\nconvex domain in $M^2$ which is a surface with nonnegative Ricci curvature.\n",
"title": "Nonparametric mean curvature type flows of graphs with contact angle conditions"
}
| null | null | null | null | true | null |
6976
| null |
Default
| null | null |
null |
{
"abstract": " In this article we develop a new sequential Monte Carlo (SMC) method for\nmultilevel (ML) Monte Carlo estimation. In particular, the method can be used\nto estimate expectations with respect to a target probability distribution over\nan infinite-dimensional and non-compact space as given, for example, by a\nBayesian inverse problem with Gaussian random field prior. Under suitable\nassumptions the MLSMC method has the optimal $O(\\epsilon^{-2})$ bound on the\ncost to obtain a mean-square error of $O(\\epsilon^2)$. The algorithm is\naccelerated by dimension-independent likelihood-informed (DILI) proposals\ndesigned for Gaussian priors, leveraging a novel variation which uses empirical\nsample covariance information in lieu of Hessian information, hence eliminating\nthe requirement for gradient evaluations. The efficiency of the algorithm is\nillustrated on two examples: inversion of noisy pressure measurements in a PDE\nmodel of Darcy flow to recover the posterior distribution of the permeability\nfield, and inversion of noisy measurements of the solution of an SDE to recover\nthe posterior path measure.\n",
"title": "Multilevel Sequential Monte Carlo with Dimension-Independent Likelihood-Informed Proposals"
}
| null | null |
[
"Statistics"
] | null | true | null |
6977
| null |
Validated
| null | null |
null |
{
"abstract": " We formulate the Nambu-Goldstone theorem as a triangular relation between\npairs of Goldstone bosons with the degenerate vacuum. The vacuum degeneracy is\nthen a natural consequence of this relation. Inside the scenario of String\nTheory, we then find that there is a correspondence between the way how the\n$D$-branes interact and the properties of the Goldstone bosons.\n",
"title": "Spontaneous symmetry breaking as a triangular relation between pairs of Goldstone bosons and the degenerate vacuum: Interactions of D-branes"
}
| null | null |
[
"Physics"
] | null | true | null |
6978
| null |
Validated
| null | null |
null |
{
"abstract": " Large data collections required for the training of neural networks often\ncontain sensitive information such as the medical histories of patients, and\nthe privacy of the training data must be preserved. In this paper, we introduce\na dropout technique that provides an elegant Bayesian interpretation to\ndropout, and show that the intrinsic noise added, with the primary goal of\nregularization, can be exploited to obtain a degree of differential privacy.\nThe iterative nature of training neural networks presents a challenge for\nprivacy-preserving estimation since multiple iterations increase the amount of\nnoise added. We overcome this by using a relaxed notion of differential\nprivacy, called concentrated differential privacy, which provides tighter\nestimates on the overall privacy loss. We demonstrate the accuracy of our\nprivacy-preserving dropout algorithm on benchmark datasets.\n",
"title": "Differentially Private Dropout"
}
| null | null | null | null | true | null |
6979
| null |
Default
| null | null |
null |
{
"abstract": " Usually when applying the mimetic model to the early universe, higher\nderivative terms are needed to promote the mimetic field to be dynamical.\nHowever such models suffer from the ghost and/or the gradient instabilities and\nsimple extensions cannot cure this pathology. We point out in this paper that\nit is possible to overcome this difficulty by considering the direct couplings\nof the higher derivatives of the mimetic field to the curvature of the\nspacetime.\n",
"title": "On (in)stabilities of perturbations in mimetic models with higher derivatives"
}
| null | null | null | null | true | null |
6980
| null |
Default
| null | null |
null |
{
"abstract": " At equilibrium, thermodynamic and kinetic information can be extracted from\nbiomolecular energy landscapes by many techniques. However, while static,\nensemble techniques yield thermodynamic data, often only dynamic,\nsingle-molecule techniques can yield the kinetic data that describes\ntransition-state energy barriers. Here we present a generalized framework based\nupon dwell-time distributions that can be used to connect such static, ensemble\ntechniques with dynamic, single-molecule techniques, and thus characterize\nenergy landscapes to greater resolutions. We demonstrate the utility of this\nframework by applying it to cryogenic electron microscopy (cryo-EM) and\nsingle-molecule fluorescence resonance energy transfer (smFRET) studies of the\nbacterial ribosomal pre-translocation complex. Among other benefits,\napplication of this framework to these data explains why two transient,\nintermediate conformations of the pre-translocation complex, which are observed\nin a cryo-EM study, may not be observed in several smFRET studies.\n",
"title": "Quantitative Connection Between Ensemble Thermodynamics and Single-Molecule Kinetics: A Case Study Using Cryogenic Electron Microscopy and Single-Molecule Fluorescence Resonance Energy Transfer Investigations of the Ribosome"
}
| null | null | null | null | true | null |
6981
| null |
Default
| null | null |
null |
{
"abstract": " Generative Adversarial Networks (GANs) have become a widely popular framework\nfor generative modelling of high-dimensional datasets. However their training\nis well-known to be difficult. This work presents a rigorous statistical\nanalysis of GANs providing straight-forward explanations for common training\npathologies such as vanishing gradients. Furthermore, it proposes a new\ntraining objective, Kernel GANs, and demonstrates its practical effectiveness\non large-scale real-world data sets. A key element in the analysis is the\ndistinction between training with respect to the (unknown) data distribution,\nand its empirical counterpart. To overcome issues in GAN training, we pursue\nthe idea of smoothing the Jensen-Shannon Divergence (JSD) by incorporating\nnoise in the input distributions of the discriminator. As we show, this\neffectively leads to an empirical version of the JSD in which the true and the\ngenerator densities are replaced by kernel density estimates, which leads to\nKernel GANs.\n",
"title": "Non-parametric estimation of Jensen-Shannon Divergence in Generative Adversarial Network training"
}
| null | null | null | null | true | null |
6982
| null |
Default
| null | null |
null |
{
"abstract": " Molecular dynamics (MD) simulations allow the exploration of the phase space\nof biopolymers through the integration of equations of motion of their\nconstituent atoms. The analysis of MD trajectories often relies on the choice\nof collective variables (CVs) along which the dynamics of the system is\nprojected. We developed a graphical user interface (GUI) for facilitating the\ninteractive choice of the appropriate CVs. The GUI allows: defining\ninteractively new CVs; partitioning the configurations into microstates\ncharacterized by similar values of the CVs; calculating the free energies of\nthe microstates for both unbiased and biased (metadynamics) simulations;\nclustering the microstates in kinetic basins; visualizing the free energy\nlandscape as a function of a subset of the CVs used for the analysis. A simple\nmouse click allows one to quickly inspect structures corresponding to specific\npoints in the landscape.\n",
"title": "METAGUI 3: a graphical user interface for choosing the collective variables in molecular dynamics simulations"
}
| null | null |
[
"Physics"
] | null | true | null |
6983
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of deep neural net compression by quantization: given\na large, reference net, we want to quantize its real-valued weights using a\ncodebook with $K$ entries so that the training loss of the quantized net is\nminimal. The codebook can be optimally learned jointly with the net, or fixed,\nas for binarization or ternarization approaches. Previous work has quantized\nthe weights of the reference net, or incorporated rounding operations in the\nbackpropagation algorithm, but this has no guarantee of converging to a\nloss-optimal, quantized net. We describe a new approach based on the recently\nproposed framework of model compression as constrained optimization\n\\citep{Carreir17a}. This results in a simple iterative \"learning-compression\"\nalgorithm, which alternates a step that learns a net of continuous weights with\na step that quantizes (or binarizes/ternarizes) the weights, and is guaranteed\nto converge to local optimum of the loss for quantized nets. We develop\nalgorithms for an adaptive codebook or a (partially) fixed codebook. The latter\nincludes binarization, ternarization, powers-of-two and other important\nparticular cases. We show experimentally that we can achieve much higher\ncompression rates than previous quantization work (even using just 1 bit per\nweight) with negligible loss degradation.\n",
"title": "Model compression as constrained optimization, with application to neural nets. Part II: quantization"
}
| null | null | null | null | true | null |
6984
| null |
Default
| null | null |
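The "learning-compression" alternation described in the record above can be illustrated on a single weight vector: an L-step takes gradient steps on the loss plus a quadratic attraction toward the current quantized weights, and a C-step re-fits a $K$-entry codebook by k-means (here via scipy). This is a minimal sketch of the alternation, with a toy quadratic loss and made-up hyperparameters (`mu`, step sizes, schedules), and is not the paper's algorithm or experimental setup.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
w_ref = rng.normal(size=200)                 # "reference" weights (toy target)
loss_grad = lambda w: w - w_ref              # gradient of 0.5 * ||w - w_ref||^2

w = rng.normal(size=200)                     # continuous weights being trained
mu = 0.01                                    # penalty strength, tightened over time
K = 4                                        # codebook size

for outer in range(20):
    # C-step: quantize w with a K-entry codebook learned by k-means
    codebook, labels = kmeans2(w.reshape(-1, 1), K, minit="++")
    w_q = codebook[labels, 0]
    # L-step: a few gradient steps on loss + (mu/2) * ||w - w_q||^2
    for _ in range(50):
        w -= 0.1 * (loss_grad(w) + mu * (w - w_q))
    mu *= 1.4                                 # slowly couple the continuous and quantized weights

print("distinct values in quantized weights:", np.unique(w_q).size)
```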
null |
{
"abstract": " Three-way data can be conveniently modelled by using matrix variate\ndistributions. Although there has been a lot of work for the matrix variate\nnormal distribution, there is little work in the area of matrix skew\ndistributions. Three matrix variate distributions that incorporate skewness, as\nwell as other flexible properties such as concentration, are discussed.\nEquivalences to multivariate analogues are presented, and moment generating\nfunctions are derived. Maximum likelihood parameter estimation is discussed,\nand simulated data is used for illustration.\n",
"title": "Three Skewed Matrix Variate Distributions"
}
| null | null | null | null | true | null |
6985
| null |
Default
| null | null |
null |
{
"abstract": " We explore the emergence of persistent infection in a closed region where the\ndisease progression of the individuals is given by the SIRS model, with an\nindividual becoming infected on contact with another infected individual within\na given range. We focus on the role of synchronization in the persistence of\ncontagion. Our key result is that higher degree of synchronization, both\nglobally in the population and locally in the neighborhoods, hinders\npersistence of infection. Importantly, we find that early short-time asynchrony\nappears to be a consistent precursor to future persistence of infection, and\ncan potentially provide valuable early warnings for sustained contagion in a\npopulation patch. Thus transient synchronization can help anticipate the\nlong-term persistence of infection. Further we demonstrate that when the range\nof influence of an infected individual is wider, one obtains lower persistent\ninfection. This counter-intuitive observation can also be understood through\nthe relation of synchronization to infection burn-out.\n",
"title": "Anticipating Persistent Infection"
}
| null | null | null | null | true | null |
6986
| null |
Default
| null | null |
null |
{
"abstract": " No high-resolution canopy height map exists for global mangroves. Here we\npresent the first global mangrove height map at a consistent 30 m pixel\nresolution derived from digital elevation model data collected through shuttle\nradar topography mission. Additionally, we refined the current global mangrove\narea maps by discarding the non-mangrove areas that are included in current\nmangrove maps.\n",
"title": "The first global-scale 30 m resolution mangrove canopy height map using Shuttle Radar Topography Mission data"
}
| null | null | null | null | true | null |
6987
| null |
Default
| null | null |
null |
{
"abstract": " Accurate estimation of regional wall thicknesses (RWT) of left ventricular\n(LV) myocardium from cardiac MR sequences is of significant importance for\nidentification and diagnosis of cardiac disease. Existing RWT estimation still\nrelies on segmentation of LV myocardium, which requires strong prior\ninformation and user interaction. No work has been devoted into direct\nestimation of RWT from cardiac MR images due to the diverse shapes and\nstructures for various subjects and cardiac diseases, as well as the complex\nregional deformation of LV myocardium during the systole and diastole phases of\nthe cardiac cycle. In this paper, we present a newly proposed Residual\nRecurrent Neural Network (ResRNN) that fully leverages the spatial and temporal\ndynamics of LV myocardium to achieve accurate frame-wise RWT estimation. Our\nResRNN comprises two paths: 1) a feed forward convolution neural network (CNN)\nfor effective and robust CNN embedding learning of various cardiac images and\npreliminary estimation of RWT from each frame itself independently, and 2) a\nrecurrent neural network (RNN) for further improving the estimation by modeling\nspatial and temporal dynamics of LV myocardium. For the RNN path, we design for\ncardiac sequences a Circle-RNN to eliminate the effect of null hidden input for\nthe first time-step. Our ResRNN is capable of obtaining accurate estimation of\ncardiac RWT with Mean Absolute Error of 1.44mm (less than 1-pixel error) when\nvalidated on cardiac MR sequences of 145 subjects, evidencing its great\npotential in clinical cardiac function assessment.\n",
"title": "Direct Estimation of Regional Wall Thicknesses via Residual Recurrent Neural Network"
}
| null | null | null | null | true | null |
6988
| null |
Default
| null | null |
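The two-path idea in the ResRNN record above, a per-frame CNN estimate refined by an RNN over the cardiac cycle, can be sketched as below in PyTorch. Everything here (layer sizes, a plain GRU in place of the paper's Circle-RNN, the residual combination of the two heads) is an assumption made for illustration and does not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class FrameCNNPlusRNN(nn.Module):
    """Per-frame CNN regression refined by an RNN over the frame sequence."""
    def __init__(self, n_regions=6, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head_cnn = nn.Linear(16, n_regions)       # preliminary per-frame estimate
        self.rnn = nn.GRU(16, hidden, batch_first=True)
        self.head_rnn = nn.Linear(hidden, n_regions)   # temporal refinement

    def forward(self, x):                              # x: (batch, frames, 1, H, W)
        b, f = x.shape[:2]
        feat = self.cnn(x.flatten(0, 1)).view(b, f, -1)
        pred_cnn = self.head_cnn(feat)
        out, _ = self.rnn(feat)
        return pred_cnn + self.head_rnn(out)           # residual combination of the two paths

model = FrameCNNPlusRNN()
frames = torch.randn(2, 20, 1, 80, 80)                 # 2 sequences of 20 frames
print(model(frames).shape)                             # -> torch.Size([2, 20, 6])
```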
null |
{
"abstract": " Using a three-dimensional semiclassical model, we study double ionization for\nstrongly-driven He fully accounting for magnetic field effects. For linearly\nand slightly elliptically polarized laser fields, we show that recollisions and\nthe magnetic field combined act as a gate. This gate favors more transverse -\nwith respect to the electric field - initial momenta of the tunneling electron\nthat are opposite to the propagation direction of the laser field. In the\nabsence of non-dipole effects, the transverse initial momentum is symmetric\nwith respect to zero. We find that this asymmetry in the transverse initial\nmomentum gives rise to an asymmetry in a double ionization observable. Finally,\nwe show that this asymmetry in the transverse initial momentum of the tunneling\nelectron accounts for a recently-reported unexpectedly large average sum of the\nelectron momenta parallel to the propagation direction of the laser field.\n",
"title": "Non-dipole recollision-gated double ionization and observable effects"
}
| null | null | null | null | true | null |
6989
| null |
Default
| null | null |
null |
{
"abstract": " Lately, Wireless Sensor Networks (WSNs) have become an emerging technology\nand can be utilized in some crucial circumstances like battlegrounds,\ncommercial applications, habitat observing, buildings, smart homes, traffic\nsurveillance and other different places. One of the foremost difficulties that\nWSN faces nowadays is protection from serious attacks. While organizing the\nsensor nodes in an abandoned environment makes network systems helpless against\nan assortment of strong assaults, intrinsic memory and power restrictions of\nsensor nodes make the traditional security arrangements impractical. The\nsensing knowledge combined with the wireless communication and processing power\nmakes it lucrative for being abused. The wireless sensor network technology\nalso obtains a big variety of security intimidations. This paper describes four\nbasic security threats and many active attacks on WSN with their possible\ncountermeasures proposed by different research scholars.\n",
"title": "A Survey of Active Attacks on Wireless Sensor Networks and their Countermeasures"
}
| null | null | null | null | true | null |
6990
| null |
Default
| null | null |
null |
{
"abstract": " We present a new Frank-Wolfe (FW) type algorithm that is applicable to\nminimization problems with a nonsmooth convex objective. We provide convergence\nbounds and show that the scheme yields so-called coreset results for various\nMachine Learning problems including 1-median, Balanced Development, Sparse PCA,\nGraph Cuts, and the $\\ell_1$-norm-regularized Support Vector Machine (SVM)\namong others. This means that the algorithm provides approximate solutions to\nthese problems in time complexity bounds that are not dependent on the size of\nthe input problem. Our framework, motivated by a growing body of work on\nsublinear algorithms for various data analysis problems, is entirely\ndeterministic and makes no use of smoothing or proximal operators. Apart from\nthese theoretical results, we show experimentally that the algorithm is very\npractical and in some cases also offers significant computational advantages on\nlarge problem instances. We provide an open source implementation that can be\nadapted for other problems that fit the overall structure.\n",
"title": "A Deterministic Nonsmooth Frank Wolfe Algorithm with Coreset Guarantees"
}
| null | null | null | null | true | null |
6991
| null |
Default
| null | null |
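For context on the record above, a plain Frank-Wolfe iteration needs only a linear minimization oracle over the feasible set and a gradient; the sketch below runs it over an $\ell_1$ ball for a smooth least-squares objective. It is a generic FW skeleton for reference, not the paper's nonsmooth scheme or its coreset construction, and the problem data are random placeholders.

```python
import numpy as np

def frank_wolfe_l1(grad, x0, radius=1.0, iters=200):
    """Minimise a differentiable f over the l1 ball of given radius.
    grad: callable returning the gradient of f at x."""
    x = x0.copy()
    for t in range(iters):
        g = grad(x)
        i = np.argmax(np.abs(g))
        s = np.zeros_like(x)
        s[i] = -radius * np.sign(g[i])      # vertex of the l1 ball minimising <g, s>
        gamma = 2.0 / (t + 2.0)             # standard step size
        x = (1 - gamma) * x + gamma * s     # convex combination stays feasible
    return x

# toy least-squares example
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
grad = lambda x: A.T @ (A @ x - b)
x_hat = frank_wolfe_l1(grad, np.zeros(20), radius=2.0)
print(np.linalg.norm(A @ x_hat - b))
```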
null |
{
"abstract": " The problem of choice of boundary conditions are discussed for the case of\nnumerical integration of the shallow water equations on a substantially\nirregular relief. In modeling of unsteady surface water flows has a dynamic\nboundary partitioning liquid and dry bottom. The situation is complicated by\nthe emergence of sub- and supercritical flow regimes for the problems of\nseasonal floodplain flooding, flash floods, tsunami landfalls. Analysis of the\nuse of various methods of setting conditions for the physical quantities of\nliquid when the settlement of the boundary shows the advantages of using the\nwaterfall type conditions in the presence of strong inhomogeneities landforms.\nWhen there is a waterfall on the border of the computational domain and\nheterogeneity of the relief in the vicinity of the boundary portion may occur,\nwhich is formed by the region of critical flow with the formation of a\nhydraulic jump, which greatly weakens the effect of the waterfall on the flow\npattern upstream.\n",
"title": "The problem of boundary conditions for the shallow water equations (Russian)"
}
| null | null | null | null | true | null |
6992
| null |
Default
| null | null |
null |
{
"abstract": " We introduce the exit time finite state projection (ETFSP) scheme, a\ntruncation-based method that yields approximations to the exit distribution and\noccupation measure associated with the time of exit from a domain (i.e., the\ntime of first passage to the complement of the domain) of time-homogeneous\ncontinuous-time Markov chains. We prove that: (i) the computed approximations\nbound the measures from below; (ii) the total variation distances between the\napproximations and the measures decrease monotonically as states are added to\nthe truncation; and (iii) the scheme converges, in the sense that, as the\ntruncation tends to the entire state space, the total variation distances tend\nto zero. Furthermore, we give a computable bound on the total variation\ndistance between the exit distribution and its approximation, and we delineate\nthe cases in which the bound is sharp. We also revisit the related finite state\nprojection scheme and give a comprehensive account of its theoretical\nproperties. We demonstrate the use of the ETFSP scheme by applying it to two\nbiological examples: the computation of the first passage time associated with\nthe expression of a gene, and the fixation times of competing species subject\nto demographic noise.\n",
"title": "The exit time finite state projection scheme: bounding exit distributions and occupation measures of continuous-time Markov chains"
}
| null | null | null | null | true | null |
6993
| null |
Default
| null | null |
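For a finite chain, the exit distribution and occupation measure described in the abstract reduce to a linear solve; the sketch below works this out for a made-up three-state generator and is only a minimal illustration of the quantities the ETFSP scheme approximates, not of the truncation scheme, its lower bounds, or its convergence guarantees.

```python
import numpy as np

# Toy CTMC: states 0, 1 form the domain D; state 2 is the exit (complement of D).
# Rows of the generator sum to zero; off-diagonal entries are transition rates.
Q = np.array([[-3.0,  2.0, 1.0],
              [ 1.0, -4.0, 3.0],
              [ 0.0,  0.0, 0.0]])    # state 2 treated as absorbing

domain, outside = [0, 1], [2]
Q_DD   = Q[np.ix_(domain, domain)]   # dynamics inside the domain
Q_Dout = Q[np.ix_(domain, outside)]  # rates of leaving the domain
p0 = np.array([1.0, 0.0])            # start in state 0

# Expected time spent in each domain state before exit (occupation measure):
occupation = p0 @ np.linalg.inv(-Q_DD)
# Probability of exiting into each outside state (exit distribution):
exit_dist = occupation @ Q_Dout

print("occupation measure:", occupation)   # [0.4, 0.2] for this generator
print("exit distribution:", exit_dist)     # [1.0]: all mass exits to state 2
```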
null |
{
"abstract": " Data quality assessment and data cleaning are context-dependent activities.\nMotivated by this observation, we propose the Ontological Multidimensional Data\nModel (OMD model), which can be used to model and represent contexts as\nlogic-based ontologies. The data under assessment is mapped into the context,\nfor additional analysis, processing, and quality data extraction. The resulting\ncontexts allow for the representation of dimensions, and multidimensional data\nquality assessment becomes possible. At the core of a multidimensional context\nwe include a generalized multidimensional data model and a Datalog+/- ontology\nwith provably good properties in terms of query answering. These main\ncomponents are used to represent dimension hierarchies, dimensional\nconstraints, dimensional rules, and define predicates for quality data\nspecification. Query answering relies upon and triggers navigation through\ndimension hierarchies, and becomes the basic tool for the extraction of quality\ndata. The OMD model is interesting per se, beyond applications to data quality.\nIt allows for a logic-based, and computationally tractable representation of\nmultidimensional data, extending previous multidimensional data models with\nadditional expressive power and functionalities.\n",
"title": "Ontological Multidimensional Data Models and Contextual Data Qality"
}
| null | null | null | null | true | null |
6994
| null |
Default
| null | null |
null |
{
"abstract": " Authentication is the first step toward establishing a service provider and\ncustomer (C-P) association. In a mobile network environment, a lightweight and\nsecure authentication protocol is one of the most significant factors to\nenhance the degree of service persistence. This work presents a secure and\nlightweight keying and authentication protocol suite termed TAP (Time-Assisted\nAuthentication Protocol). TAP improves the security of protocols with the\nassistance of time-based encryption keys and scales down the authentication\ncomplexity by issuing a re-authentication ticket. While moving across the\nnetwork, a mobile customer node sends a re-authentication ticket to establish\nnew sessions with service-providing nodes. Consequently, this reduces the\ncommunication and computational complexity of the authentication process. In\nthe keying protocol suite, a key distributor controls the key generation\narguments and time factors, while other participants independently generate a\nkeychain based on key generation arguments. We undertake a rigorous security\nanalysis and prove the security strength of TAP using CSP and rank function\nanalysis.\n",
"title": "Time-Assisted Authentication Protocol"
}
| null | null | null | null | true | null |
6995
| null |
Default
| null | null |
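The abstract does not specify the keying construction, so the sketch below is only a generic illustration of a time-assisted key derivation (an HMAC of a coarse timestamp under a shared secret) and a toy re-authentication ticket. The function names, interval length, and ticket format are invented for the example and are not the TAP protocol suite.

```python
import hashlib
import hmac
import time
from typing import Optional

def interval_key(master_secret: bytes, interval_length_s: int = 60,
                 now: Optional[float] = None) -> bytes:
    """Derive a key for the current coarse time interval from a shared secret.

    Generic illustration only: the interval length, timestamp encoding, and the
    two-step MAC below are assumptions, not the keying protocol specified by TAP.
    """
    t = int((time.time() if now is None else now) // interval_length_s)
    return hmac.new(master_secret, t.to_bytes(8, "big"), hashlib.sha256).digest()

def make_ticket(key: bytes, customer_id: bytes) -> bytes:
    """Toy 're-authentication ticket': a MAC over the customer identity."""
    return hmac.new(key, customer_id, hashlib.sha256).digest()

k = interval_key(b"shared-master-secret")
print(make_ticket(k, b"customer-42").hex())
```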
null |
{
"abstract": " The online dominating set problem is an online variant of the minimum\ndominating set problem, which is one of the most important NP-hard problems on\ngraphs. This problem is defined as follows: Given an undirected graph $G = (V,\nE)$, in which $V$ is a set of vertices and $E$ is a set of edges. We say that a\nset $D \\subseteq V$ of vertices is a {\\em dominating set} of $G$ if for each $v\n\\in V \\setminus D$, there exists a vertex $u \\in D$ such that $\\{ u, v \\} \\in\nE$. The vertices are revealed to an online algorithm one by one over time. When\na vertex is revealed, edges between the vertex and vertices revealed in the\npast are also revealed. A revelaed subtree is connected at any time.\nImmediately after the revelation of each vertex, an online algorithm can choose\nvertices which were already revealed irrevocably and must maintain a dominating\nset of a graph revealed so far. The cost of an algorithm on a given tree is the\nnumber of vertices chosen by it, and its objective is to minimize the cost.\nEidenbenz (Technical report, Institute of Theoretical Computer Science, ETH\nZürich, 2002) and Boyar et al.\\ (SWAT 2016) studied the case in which given\ngraphs are trees. They designed a deterministic online algorithm whose\ncompetitive ratio is at most three, and proved that a lower bound on the\ncompetitive ratio of any deterministic algorithm is two. In this paper, we also\nfocus on trees. We establish a matching lower bound for any deterministic\nalgorithm. Moreover, we design a randomized online algorithm whose competitive\nratio is at most $5/2 = 2.5$, and show that the competitive ratio of any\nrandomized algorithm is at least $4/3 \\approx 1.333$.\n",
"title": "Improved Bounds for Online Dominating Sets of Trees"
}
| null | null | null | null | true | null |
6996
| null |
Default
| null | null |
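To make the online model concrete, the sketch below implements one natural deterministic greedy rule for trees: when a newly revealed vertex is not yet dominated, irrevocably add its parent (or the vertex itself, for the first vertex). The class name and the rule are illustrative; the abstract's deterministic and randomized algorithms and their exact competitive ratios are not reproduced here.

```python
class OnlineTreeDominatingSet:
    """Maintain a dominating set of a tree revealed one vertex at a time.

    Greedy rule (illustration only): when a new vertex arrives and is not yet
    dominated, irrevocably add its parent (or the vertex itself, for the root).
    """

    def __init__(self):
        self.parent = {}        # child -> parent in the revealed tree (None for the root)
        self.chosen = set()     # the dominating set built so far

    def _dominated(self, v):
        # v is dominated if it is chosen or one of its revealed neighbours is.
        if v in self.chosen:
            return True
        neighbours = [self.parent.get(v)] + [c for c, p in self.parent.items() if p == v]
        return any(u in self.chosen for u in neighbours if u is not None)

    def reveal(self, v, parent=None):
        self.parent[v] = parent
        if not self._dominated(v):
            self.chosen.add(parent if parent is not None else v)

# Reveal the path 0 - 1 - 2 - 3 one vertex at a time.
ds = OnlineTreeDominatingSet()
ds.reveal(0)
ds.reveal(1, parent=0)
ds.reveal(2, parent=1)
ds.reveal(3, parent=2)
print(ds.chosen)   # {0, 1, 2}: a valid dominating set, though not the optimal {1, 2}
```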
null |
{
"abstract": " Shanghai Coherent Light Facility (SCLF) is a quasi-CW hard X-ray free\nelectron laser user facility which is recently proposed. Due to the high\nrepetition rate, high quality electron beams, it is straightforward to consider\nan X-ray free electron laser oscillator (XFELO) operation for SCLF. The main\nprocesses for XFELO design, and parameters optimization of the undulator, X-ray\ncavity and electron beam are described. The first three-dimensional X-ray\ncrystal Bragg diffraction code, named BRIGHT is built, which collaborates\nclosely with GENESIS and OPC for numerical simulations of XFELO. The XFELO\nperformances of SCLF is investigated and optimized by theoretical analysis and\nnumerical simulation.\n",
"title": "Systematical design and three-dimensional simulation of X-ray FEL oscillator for Shanghai Coherent Light Facility"
}
| null | null | null | null | true | null |
6997
| null |
Default
| null | null |
null |
{
"abstract": " What role do asymptomatically infected individuals play in the transmission\ndynamics? There are many diseases, such as norovirus and influenza, where some\ninfected hosts show symptoms of the disease while others are asymptomatically\ninfected, i.e. do not show any symptoms. The current paper considers a class of\nepidemic models following an SEIR (Susceptible $\\to$ Exposed $\\to$ Infectious\n$\\to$ Recovered) structure that allows for both symptomatic and asymptomatic\ncases. The following question is addressed: what fraction $\\rho$ of those\nindividuals getting infected are infected by symptomatic (asymptomatic) cases?\nThis is a more complicated question than the related question for the beginning\nof the epidemic: what fraction of the expected number of secondary cases of a\ntypical newly infected individual, i.e. what fraction of the basic reproduction\nnumber $R_0$, is caused by symptomatic individuals? The latter fraction only\ndepends on the type-specific reproduction numbers, while the former fraction\n$\\rho$ also depends on timing and hence on the probabilistic distributions of\nlatent and infectious periods of the two types (not only their means). Bounds\non $\\rho$ are derived for the situation where these distributions (and even\ntheir means) are unknown. Special attention is given to the class of Markov\nmodels and the class of continuous-time Reed-Frost models as two classes of\ndistribution functions. We show how these two classes of models can exhibit\nvery different behaviour.\n",
"title": "Who is the infector? Epidemic models with symptomatic and asymptomatic cases"
}
| null | null | null | null | true | null |
6998
| null |
Default
| null | null |
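The early-epidemic fraction mentioned in the abstract has a simple closed form once type-specific reproduction numbers are introduced. The display below uses notation not fixed by the abstract (p for the probability that a newly infected individual becomes symptomatic, R_s and R_a for the mean numbers of secondary cases caused by a symptomatic or asymptomatic individual) and is the standard decomposition, not the paper's bounds on the whole-epidemic fraction rho.

```latex
% Early-epidemic decomposition (illustrative notation, not fixed in the abstract):
%   p   = probability that a newly infected individual becomes symptomatic,
%   R_s = mean number of secondary cases caused by a symptomatic individual,
%   R_a = mean number of secondary cases caused by an asymptomatic individual.
\[
  R_0 \;=\; p\,R_s + (1-p)\,R_a,
  \qquad
  \frac{\text{part of } R_0 \text{ due to symptomatic cases}}{R_0}
  \;=\; \frac{p\,R_s}{p\,R_s + (1-p)\,R_a}.
\]
```

As the abstract stresses, the corresponding whole-epidemic fraction rho also depends on the probabilistic distributions of the latent and infectious periods of the two types, not only on these means.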
null |
{
"abstract": " Many methods for automatic music transcription involves a multi-pitch\nestimation method that estimates an activity score for each pitch. A second\nprocessing step, called note segmentation, has to be performed for each pitch\nin order to identify the time intervals when the notes are played. In this\nstudy, a pitch-wise two-state on/off firstorder Hidden Markov Model (HMM) is\ndeveloped for note segmentation. A complete parametrization of the HMM sigmoid\nfunction is proposed, based on its original regression formulation, including a\nparameter alpha of slope smoothing and beta? of thresholding contrast. A\ncomparative evaluation of different note segmentation strategies was performed,\ndifferentiated according to whether they use a fixed threshold, called \"Hard\nThresholding\" (HT), or a HMM-based thresholding method, called \"Soft\nThresholding\" (ST). This evaluation was done following MIREX standards and\nusing the MAPS dataset. Also, different transcription scenarios and recording\nnatures were tested using three units of the Degradation toolbox. Results show\nthat note segmentation through a HMM soft thresholding with a data-based\noptimization of the {alpha,beta} parameter couple significantly enhances\ntranscription performance.\n",
"title": "Calibration of a two-state pitch-wise HMM method for note segmentation in Automatic Music Transcription systems"
}
| null | null | null | null | true | null |
6999
| null |
Default
| null | null |
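As a concrete illustration of the two-state on/off idea, the sketch below decodes a pitch-activity curve with a small Viterbi pass whose "on" emission probability is a sigmoid of the activation with slope alpha and threshold beta. The numeric values, the transition probability, and the decoding details are assumptions for the example, not the calibrated parametrization evaluated in the study.

```python
import numpy as np

def segment_notes(activations, alpha=10.0, beta=0.5, p_stay=0.9):
    """Two-state (off=0 / on=1) HMM note segmentation via Viterbi decoding.

    Illustration only: emission P(on | a) = sigmoid(alpha * (a - beta)), with
    alpha controlling slope and beta the threshold, in the spirit of the
    parametrization described in the abstract; the paper calibrates these.
    """
    a = np.asarray(activations, dtype=float)
    p_on = 1.0 / (1.0 + np.exp(-alpha * (a - beta)))          # emission prob of "on"
    log_emit = np.log(np.stack([1.0 - p_on, p_on]) + 1e-12)   # shape (2, T)
    log_trans = np.log(np.array([[p_stay, 1 - p_stay],
                                 [1 - p_stay, p_stay]]))
    T = len(a)
    delta = np.zeros((2, T))
    psi = np.zeros((2, T), dtype=int)
    delta[:, 0] = np.log(0.5) + log_emit[:, 0]
    for t in range(1, T):
        scores = delta[:, t - 1][:, None] + log_trans         # rows: previous state
        psi[:, t] = np.argmax(scores, axis=0)
        delta[:, t] = np.max(scores, axis=0) + log_emit[:, t]
    # Backtrack the most likely on/off path.
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[:, -1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[path[t + 1], t + 1]
    return path

acts = [0.1, 0.2, 0.8, 0.9, 0.85, 0.3, 0.1]
print(segment_notes(acts))   # e.g. [0 0 1 1 1 0 0]
```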
null |
{
"abstract": " Policy gradient methods have achieved remarkable successes in solving\nchallenging reinforcement learning problems. However, it still often suffers\nfrom the large variance issue on policy gradient estimation, which leads to\npoor sample efficiency during training. In this work, we propose a control\nvariate method to effectively reduce variance for policy gradient methods.\nMotivated by the Stein's identity, our method extends the previous control\nvariate methods used in REINFORCE and advantage actor-critic by introducing\nmore general action-dependent baseline functions. Empirical studies show that\nour method significantly improves the sample efficiency of the state-of-the-art\npolicy gradient approaches.\n",
"title": "Action-depedent Control Variates for Policy Optimization via Stein's Identity"
}
| null | null | null | null | true | null |
7000
| null |
Default
| null | null |
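The control-variate idea behind the record above can be seen already in the simplest score-function setting: subtracting a baseline from the return does not change the expected gradient but can shrink the estimator's spread. The sketch below uses a plain mean-return baseline on a two-armed softmax policy; the reward model and constants are made up, and the paper's action-dependent baselines derived from Stein's identity are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinforce_gradient(theta, n_samples=1000, use_baseline=True):
    """Score-function (REINFORCE) gradient estimate for a 2-armed softmax policy.

    Illustration of the control-variate idea only: the baseline is the empirical
    mean return, whereas the paper builds action-dependent baselines from
    Stein's identity.
    """
    probs = np.exp(theta) / np.exp(theta).sum()
    actions = rng.choice(2, size=n_samples, p=probs)
    rewards = np.where(actions == 1, 1.0, 0.2) + rng.normal(0, 0.5, n_samples)
    baseline = rewards.mean() if use_baseline else 0.0
    # grad log pi(a) for a softmax policy: one_hot(a) - probs
    grad_log = np.eye(2)[actions] - probs
    grads = grad_log * (rewards - baseline)[:, None]
    return grads.mean(axis=0), grads.std(axis=0)

theta = np.zeros(2)
for flag in (False, True):
    g, s = reinforce_gradient(theta, use_baseline=flag)
    print(f"baseline={flag}:  grad estimate {g}  per-sample std {s}")
```

Running the loop shows a noticeably smaller per-sample standard deviation with the baseline enabled, while the gradient estimate itself stays essentially the same.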