text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (stringlengths 1-5) | metadata (null) | status (stringclasses 2 values) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " Fault localization is a popular research topic and many techniques have been\nproposed to locate faults in imperative code, e.g. C and Java. In this paper,\nwe focus on the problem of fault localization for declarative models in Alloy\n-- a first order relational logic with transitive closure. We introduce\nAlloyFL, the first set of fault localization techniques for faulty Alloy models\nwhich leverages multiple test formulas. AlloyFL is also the first set of fault\nlocalization techniques at the AST node granularity. We implements in AlloyFL\nboth spectrum-based and mutation-based fault localization techniques, as well\nas techniques that are based on Alloy's built-in unsat core. We introduce new\nmetrics to measure the accuracy of AlloyFL and systematically evaluate AlloyFL\non 38 real faulty models and 9000 mutant models. The results show that the\nmutation-based fault localization techniques are significantly more accurate\nthan other types of techniques.\n",
"title": "Fault Localization for Declarative Models in Alloy"
}
| null | null | null | null | true | null |
16301
| null |
Default
| null | null |
null |
{
"abstract": " The basins of convergence, associated with the roots (attractors) of a\ncomplex equation, are revealed in the Hill problem with oblateness and\nradiation, using a large variety of numerical methods. Three cases are\ninvestigated, regarding the values of the oblateness and radiation. In all\ncases, a systematic and thorough scan of the complex plane is performed in\norder to determine the basins of attraction of the several iterative schemes.\nThe correlations between the attracting domains and the corresponding required\nnumber of iterations are also illustrated and discussed. Our numerical analysis\nstrongly suggests that the basins of convergence, with the highly fractal basin\nboundaries, produce extraordinary and beautiful formations on the complex\nplane.\n",
"title": "Comparing the fractal basins of attraction in the Hill problem with oblateness and radiation"
}
| null | null |
[
"Physics"
] | null | true | null |
16302
| null |
Validated
| null | null |
null |
{
"abstract": " The irreducible representations of full support in the rational Cherednik\ncategory $\\mathcal{O}_c(W)$ attached to a Coxeter group $W$ are in bijection\nwith the irreducible representations of an associated Iwahori-Hecke algebra.\nRecent work has shown that the irreducible representations in\n$\\mathcal{O}_c(W)$ of arbitrary given support are similarly governed by certain\ngeneralized Hecke algebras. In this paper we compute the parameters for these\ngeneralized Hecke algebras in the remaining previously unknown cases,\ncorresponding to the parabolic subgroup $B_n \\times S_k$ in $B_{n+k}$ for $k\n\\geq 2$ and $n \\geq 0$.\n",
"title": "Parameters for Generalized Hecke Algebras in Type B"
}
| null | null | null | null | true | null |
16303
| null |
Default
| null | null |
null |
{
"abstract": " Models which postulate lognormal dynamics for interest rates which are\ncompounded according to market conventions, such as forward LIBOR or forward\nswap rates, can be constructed initially in a discrete tenor framework.\nInterpolating interest rates between maturities in the discrete tenor structure\nis equivalent to extending the model to continuous tenor. The present paper\nsets forth an alternative way of performing this extension; one which preserves\nthe Markovian properties of the discrete tenor models and guarantees the\npositivity of all interpolated rates.\n",
"title": "Arbitrage-Free Interpolation in Models of Market Observable Interest Rates"
}
| null | null | null | null | true | null |
16304
| null |
Default
| null | null |
null |
{
"abstract": " This paper is concerned with the simultaneous estimation of $k$ population\nmeans when one suspects that the $k$ means are nearly equal. As an alternative\nto the preliminary test estimator based on the test statistics for testing\nhypothesis of equal means, we derive Bayesian and minimax estimators which\nshrink individual sample means toward a pooled mean estimator given under the\nhypothesis. It is shown that both the preliminary test estimator and the\nBayesian minimax shrinkage estimators are further improved by shrinking the\npooled mean estimator. The performance of the proposed shrinkage estimators is\ninvestigated by simulation.\n",
"title": "Bayesian Simultaneous Estimation for Means in $k$ Sample Problems"
}
| null | null | null | null | true | null |
16305
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we review the recent progress in the (indefinite) string\ndensity problem and its applications to the Camassa--Holm equation.\n",
"title": "The Camassa--Holm Equation and The String Density Problem"
}
| null | null | null | null | true | null |
16306
| null |
Default
| null | null |
null |
{
"abstract": " This paper introduces a simple and efficient density estimator that enables\nfast systematic search. To show its advantage over commonly used kernel density\nestimator, we apply it to outlying aspects mining. Outlying aspects mining\ndiscovers feature subsets (or subspaces) that describe how a query stand out\nfrom a given dataset. The task demands a systematic search of subspaces. We\nidentify that existing outlying aspects miners are restricted to datasets with\nsmall data size and dimensions because they employ kernel density estimator,\nwhich is computationally expensive, for subspace assessments. We show that a\nrecent outlying aspects miner can run orders of magnitude faster by simply\nreplacing its density estimator with the proposed density estimator, enabling\nit to deal with large datasets with thousands of dimensions that would\notherwise be impossible.\n",
"title": "A simple efficient density estimator that enables fast systematic search"
}
| null | null | null | null | true | null |
16307
| null |
Default
| null | null |
null |
{
"abstract": " The \"Planning in the Early Medieval Landscape\" project (PEML)\n<this http URL>,\nfunded by the Leverhulme Trust, has organized and collated a substantial\nquantity of images, and has used this as evidence to support the hypothesis\nthat Anglo-Saxon building construction was based on grid-like planning\nstructures based on fixed modules or quanta of measurement. We report on the\ndevelopment of some statistical contributions to the debate concerning this\nhypothesis. In practice the PEML images correspond to data arising in a wide\nvariety of different forms. It does not seem feasible to produce a single\nautomatic method which can be applied uniformly to all such images; even the\ninitial chore of cleaning up an image (removing extraneous material such as\nlegends and physical features which do not bear on the planning hypothesis)\ntypically presents a separate and demanding challenge for each different image.\nMoreover care must be taken, even in the relatively straightforward cases of\nclearly defined ground-plans (for example for large ecclesiastical buildings of\nthe period), to consider exactly what measurements might be relevant. We report\non pilot statistical analyses concerning three different situations. These\nestablish not only the presence of underlying structure (which indeed is often\nvisually obvious), but also provide an account of the numerical evidence\nsupporting the deduction that such structure is present. We contend that\nstatistical methodology thus contributes to the larger historical debate and\nprovides useful input to the wide and varied range of evidence that has to be\ndebated.\n",
"title": "Perches, Post-holes and Grids"
}
| null | null |
[
"Statistics"
] | null | true | null |
16308
| null |
Validated
| null | null |
null |
{
"abstract": " The population recovery problem is a basic problem in noisy unsupervised\nlearning that has attracted significant research attention in recent years\n[WY12,DRWY12, MS13, BIMP13, LZ15,DST16]. A number of different variants of this\nproblem have been studied, often under assumptions on the unknown distribution\n(such as that it has restricted support size). In this work we study the sample\ncomplexity and algorithmic complexity of the most general version of the\nproblem, under both bit-flip noise and erasure noise model. We give essentially\nmatching upper and lower sample complexity bounds for both noise models, and\nefficient algorithms matching these sample complexity bounds up to polynomial\nfactors.\n",
"title": "Sharp bounds for population recovery"
}
| null | null | null | null | true | null |
16309
| null |
Default
| null | null |
null |
{
"abstract": " MultiBUGS (this https URL) is a new version of the general-purpose\nBayesian modelling software BUGS that implements a generic algorithm for\nparallelising Markov chain Monte Carlo (MCMC) algorithms to speed up posterior\ninference of Bayesian models. The algorithm parallelises evaluation of the\nproduct-form likelihoods formed when a parameter has many children in the\ndirected acyclic graph (DAG) representation; and parallelises sampling of\nconditionally-independent sets of parameters. A heuristic algorithm is used to\ndecide which approach to use for each parameter and to apportion computation\nacross computational cores. This enables MultiBUGS to automatically parallelise\nthe broad range of statistical models that can be fitted using BUGS-language\nsoftware, making the dramatic speed-ups of modern multi-core computing\naccessible to applied statisticians, without requiring any experience of\nparallel programming. We demonstrate the use of MultiBUGS on simulated data\ndesigned to mimic a hierarchical e-health linked-data study of methadone\nprescriptions including 425,112 observations and 20,426 random effects.\nPosterior inference for the e-health model takes several hours in existing\nsoftware, but MultiBUGS can perform inference in only 28 minutes using 48\ncomputational cores.\n",
"title": "MultiBUGS: A parallel implementation of the BUGS modelling framework for faster Bayesian inference"
}
| null | null | null | null | true | null |
16310
| null |
Default
| null | null |
null |
{
"abstract": " We present an overview of recently developed data-driven tools for safety\nanalysis of autonomous vehicles and advanced driver assist systems. The core\nalgorithms combine model-based, hybrid system reachability analysis with\nsensitivity analysis of components with unknown or inaccessible models. We\nillustrate the applicability of this approach with a new case study of\nemergency braking systems in scenarios with two or three vehicles. This problem\nis representative of the most common type of rear-end crashes, which is\nrelevant for safety analysis of automatic emergency braking (AEB) and forward\ncollision avoidance systems. We show that our verification tool can effectively\nprove the safety of certain scenarios (specified by several parameters like\nbraking profiles, initial velocities, uncertainties in position and reaction\ntimes), and also compute the severity of accidents for unsafe scenarios.\nThrough hundreds of verification experiments, we quantified the safety envelope\nof the system across relevant parameters. These results show that the approach\nis promising for design, debugging and certification. We also show how the\nreachability analysis can be combined with statistical information about the\nparameters, to assess the risk level of the control system, which in turn is\nessential, for example, for determining Automotive Safety Integrity Levels\n(ASIL) for the ISO26262 standard.\n",
"title": "Road to safe autonomy with data and formal reasoning"
}
| null | null | null | null | true | null |
16311
| null |
Default
| null | null |
null |
{
"abstract": " The large-scale study of human mobility has been significantly enhanced over\nthe last decade by the massive use of mobile phones in urban populations.\nStudying the activity of mobile phones allows us, not only to infer social\nnetworks between individuals, but also to observe the movements of these\nindividuals in space and time. In this work, we investigate how these two\nrelated sources of information can be integrated within the context of\ndetecting and analyzing large social events. We show that large social events\ncan be characterized not only by an anomalous increase in activity of the\nantennas in the neighborhood of the event, but also by an increase in social\nrelationships of the attendants present in the event. Moreover, having detected\na large social event via increased antenna activity, we can use the network\nconnections to infer whether an unobserved user was present at the event. More\nprecisely, we address the following three challenges: (i) automatically\ndetecting large social events via increased antenna activity; (ii)\ncharacterizing the social cohesion of the detected event; and (iii) analyzing\nthe feasibility of inferring whether unobserved users were in the event.\n",
"title": "Social Events in a Time-Varying Mobile Phone Graph"
}
| null | null | null | null | true | null |
16312
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of assigning non-overlapping geometric objects centered\nat a given set of points such that the sum of area covered by them is\nmaximized. If the points are placed on a straight-line and the objects are\ndisks, then the problem is solvable in polynomial time. However, we show that\nthe problem is NP-hard even for simplest objects like disks or squares in\n${\\mathbb{R}}^2$. Eppstein [CCCG, pages 260--265, 2016] proposed a polynomial\ntime algorithm for maximizing the sum of radii (or perimeter) of\nnon-overlapping balls or disks when the points are arbitrarily placed on a\nplane. We show that Eppstein's algorithm for maximizing sum of perimeter of the\ndisks in ${\\mathbb{R}}^2$ gives a $2$-approximation solution for the sum of\narea maximization problem. We propose a PTAS for our problem. These\napproximation results are extendible to higher dimensions. All these\napproximation results hold for the area maximization problem by regular convex\npolygons with even number of edges centered at the given points.\n",
"title": "Range Assignment of Base-Stations Maximizing Coverage Area without Interference"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16313
| null |
Validated
| null | null |
null |
{
"abstract": " Let $f$ be a continuous real function defined in a subset of the real line.\nThe standard definition of continuity at a point $x$ allow us to correlate any\ngiven epsilon with a (possibly depending of $x$) delta value. This pairing is\nknown as the epsilon--delta relation of $f$. In this work, we demonstrate the\nexistence of a privileged choice of delta in the sense that it is continuous,\ninvertible, maximal and it is the solution of a simple functional equation. We\nalso introduce an algorithm that can be used to numerically calculate this map\nin polylogarithm time, proving the computability of the epsilon--delta\nrelation. Finally, some examples are analyzed in order to showcase the accuracy\nand effectiveness of these methods, even when the explicit formula for the\naforementioned privileged function is unknown due to the lack of analytical\ntools for solving the functional equation.\n",
"title": "A Polylogarithm Solution to the Epsilon--Delta Problem"
}
| null | null | null | null | true | null |
16314
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we present formulas for the valuation of debt and equity of\nfirms in a financial network under comonotonic endowments. We demonstrate that\nthe comonotonic setting provides a lower bound to the price of debt under\nEisenberg-Noe financial networks with consistent marginal endowments. Such\nfinancial networks encode the interconnection of firms through debt claims. The\nproposed pricing formulas consider the realized, endogenous, recovery rate on\ndebt claims. Special consideration will be given to the setting in which firms\nonly invest in a risk-free bond and a common risky asset following a geometric\nBrownian motion.\n",
"title": "Pricing of debt and equity in a financial network with comonotonic endowments"
}
| null | null |
[
"Quantitative Finance"
] | null | true | null |
16315
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the Schrödinger operator on a combinatorial graph consisting of\na finite graph and a finite number of discrete half-lines, all jointed\ntogether, and compute an asymptotic expansion of its resolvent around the\nthreshold $0$. Precise expressions are obtained for the first few coefficients\nof the expansion in terms of the generalized eigenfunctions. This result\njustifies the classification of threshold types solely by growth properties of\nthe generalized eigenfunctions. By choosing an appropriate free operator a\npriori possessing no zero eigenvalue or zero resonance we can simplify the\nexpansion procedure as much as that on the single discrete half-line.\n",
"title": "Resolvent expansion for the Schrödinger operator on a graph with infinite rays"
}
| null | null | null | null | true | null |
16316
| null |
Default
| null | null |
null |
{
"abstract": " Anions of the molecules ZnO, O2 and atomic Zn and O constitute mass spectra\nof the species sputtered from pellets of molecular solid of ZnO under Cs+\nirradiation. Their normalized yields are independent of energy of the\nirradiating Cs+. Collision cascades cannot explain the simultaneous sputtering\nof atoms and molecules. We propose that the origin of the molecular\nsublimation, dissociation and subsequent emission is the result of localized\nthermal spikes induced by individual Cs+ ions. The fractal dimension of binary\ncollision cascades of atomic recoils in the irradiated ZnO solid increases with\nreduction in the energy of recoils. Upon reaching the collision diameters of\natomic dimensions, the space-filling fractal-like transition occurs where\ncascades transform into thermal spikes. These localized thermal spikes induce\nsublimation, dissociation and sputtering from the region. The calculated rates\nof the subliming and dissociating species due to localized thermal spikes agree\nwell with the experimental results.\n",
"title": "Space-Filling Fractal Description of Ion-induced Local Thermal Spikes in Molecular Solid of ZnO"
}
| null | null | null | null | true | null |
16317
| null |
Default
| null | null |
null |
{
"abstract": " We describe the category of integrable sl(1|n)^ -modules with the positive\ncentral charge and show that the irreducible modules provide the full set of\nirreducible representations for the corresponding simple vertex algebra.\n",
"title": "Integrable modules over affine Lie superalgebras sl(1|n)^"
}
| null | null | null | null | true | null |
16318
| null |
Default
| null | null |
null |
{
"abstract": " How to self-localize large teams of underwater nodes using only noisy range\nmeasurements? How to do it in a distributed way, and incorporating dynamics\ninto the problem? How to reject outliers and produce trustworthy position\nestimates? The stringent acoustic communication channel and the accuracy needs\nof our geophysical survey application demand faster and more accurate\nlocalization methods. We approach dynamic localization as a MAP estimation\nproblem where the prior encodes dynamics, and we devise a convex relaxation\nmethod that takes advantage of previous estimates at each measurement\nacquisition step; The algorithm converges at an optimal rate for first order\nmethods. LocDyn is distributed: there is no fusion center responsible for\nprocessing acquired data and the same simple computations are performed for\neach node. LocDyn is accurate: experiments attest to a smaller positioning\nerror than a comparable Kalman filter. LocDyn is robust: it rejects outlier\nnoise, while the comparing methods succumb in terms of positioning error.\n",
"title": "LocDyn: Robust Distributed Localization for Mobile Underwater Networks"
}
| null | null | null | null | true | null |
16319
| null |
Default
| null | null |
null |
{
"abstract": " We have developed a system combining a back-illuminated\nComplementary-Metal-Oxide-Semiconductor (CMOS) imaging sensor and Xilinx Zynq\nSystem-on-Chip (SoC) device for a soft X-ray (0.5-10 keV) imaging spectroscopy\nobservation of the Sun to investigate the dynamics of the solar corona. Because\ntypical timescales of energy release phenomena in the corona span a few minutes\nat most, we aim to obtain the corresponding energy spectra and derive the\nphysical parameters, i.e., temperature and emission measure, every few tens of\nseconds or less for future solar X-ray observations. An X-ray photon-counting\ntechnique, with a frame rate of a few hundred frames per second or more, can\nachieve such results. We used the Zynq SoC device to achieve the requirements.\nZynq contains an ARM processor core, which is also known as the Processing\nSystem (PS) part, and a Programmable Logic (PL) part in a single chip. We use\nthe PL and PS to control the sensor and seamless recording of data to a storage\nsystem, respectively. We aim to use the system for the third flight of the\nFocusing Optics Solar X-ray Imager (FOXSI-3) sounding rocket experiment for the\nfirst photon-counting X-ray imaging and spectroscopy of the Sun.\n",
"title": "High-speed X-ray imaging spectroscopy system with Zynq SoC for solar observations"
}
| null | null | null | null | true | null |
16320
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we propose a new coding scheme and establish new bounds on the\ncapacity region for the multi-sender unicast index-coding problem. We revisit\nexisting partitioned Distributed Composite Coding (DCC) proposed by Sadeghi et\nal. and identify its limitations in the implementation of multi-sender\ncomposite coding and in the strategy of sender partitioning. We then propose\ntwo new coding components to overcome these limitations and develop a\nmulti-sender Cooperative Composite Coding (CCC). We show that CCC can strictly\nimprove upon partitioned DCC, and is the key to achieve optimality for a number\nof index-coding instances. The usefulness of CCC and its special cases is\nilluminated via non-trivial examples, and the capacity region is established\nfor each example. Comparisons between CCC and other non-cooperative schemes in\nrecent works are also provided to further demonstrate the advantage of CCC.\n",
"title": "Cooperative Multi-Sender Index Coding"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
16321
| null |
Validated
| null | null |
null |
{
"abstract": " The aim of this comment is to show that anisotropic effects and image fields\nshould not be omitted as they are in the publication of A. Leonardi, S. Ryu, N.\nM. Pugno, and P. Scardi (LRPS) [J. Appl. Phys. 117, 164304 (2015)] on Pd <011>\ncylindrical nanowires containing an axial screw dislocation. Indeed, according\nto our previous study [Phys. Rev. B 88, 224101 (2013)], the axial displacement\nfield along the nanowire exhibits both a radial and an azimuthal dependence\nwith a twofold symmetry due the <011> orientation. As a consequence, the\ndeviatoric strain term used by LRPS is not suitable to analyze the anisotropic\nstrain fields that should be observed in their atomistic simulations. In this\ncomment, we first illustrate the importance of anisotropy in <011> Pd nanowire\nby calculating the azimuthal dependence of the deviatoric strain term. Then the\nexpression of the anisotropic elastic field is recalled in term of strain\ntensor components to show that image fields should be also considered. The\nother aspect of this comment concerns the supposedly loss of correlation along\nthe nanorod caused by the twist. It is claimed for instance by LRPS that : \"As\nan effect of the dislocation strain and twist, if the cylinder is long enough,\nupper/lower regions tend to lose correlation, as if the rod were made of\ndifferent sub-domains.\". This assertion appears to us misleading since for any\ntwist the position of all the atoms in the nanorod is perfectly defined and\ntherefore prevents any loss of correlation. To clarify this point, it should be\nspecified that this apparent loss of correlation can not be ascribed to the\ntwisted state of the nanowire but is rather due to a limitation of the X-ray\npowder diffraction. Considering for instance coherent X-ray diffraction, we\nshow an example of high twist where the simulated diffractogram presents a\nclear signature of the perfect correlation.\n",
"title": "Comment on \"Eshelby twist and correlation effects in diffraction from nanocrystals\" [J. Appl. Phys. 117, 164304 (2015)]"
}
| null | null | null | null | true | null |
16322
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we proposed a novel two-stage optimization method for network\ncommunity partition, which is based on inherent network structure information.\nThe introduced optimization approach utilizes the new network centrality\nmeasure of both links and vertices to construct the key affinity description of\nthe given network, where the direct similarities between graph nodes or nodal\nfeatures are not available to obtain the classical affinity matrix. Indeed,\nsuch calculated network centrality information presents the essential structure\nof network, hence, the proper measure for detecting network communities, which\nalso introduces a `confidence' criterion for referencing new labeled benchmark\nnodes. For the resulted challenging combinatorial optimization problem of graph\nclustering, the proposed optimization method iteratively employs an efficient\nconvex optimization algorithm which is developed based under a new variational\nperspective of primal and dual. Experiments over both artificial and real-world\nnetwork datasets demonstrate that the proposed optimization strategy of\ncommunity detection significantly improves result accuracy and outperforms the\nstate-of-the-art algorithms in terms of accuracy and reliability.\n",
"title": "Variational Community Partition with Novel Network Structure Centrality Prior"
}
| null | null | null | null | true | null |
16323
| null |
Default
| null | null |
null |
{
"abstract": " We investigate possible pathways for the formation of the low density\nNeptune-mass planet HAT-P-26b. We use two formation different models based on\npebbles and planetesimals accretion, and includes gas accretion, disk migration\nand simple photoevaporation. The models tracks the atmospheric oxygen\nabundance, in addition to the orbital period, and mass of the forming planets,\nthat we compare to HAT-P-26b. We find that pebbles accretion can explain this\nplanet more naturally than planetesimals accretion that fails completely unless\nwe artificially enhance the disk metallicity significantly. Pebble accretion\nmodels can reproduce HAT-P-26b with either a high initial core mass and low\namount of envelope enrichment through core erosion or pebbles dissolution, or\nthe opposite, with both scenarios being possible. Assuming a low envelope\nenrichment factor as expected from convection theory and comparable to the\nvalues we can infer from the D/H measurements in Uranus and Neptune, our most\nprobable formation pathway for HAT-P-26b is through pebble accretion starting\naround 10 AU early in the disk's lifetime.\n",
"title": "Possible formation pathways for the low density Neptune-mass planet HAT-P-26b"
}
| null | null | null | null | true | null |
16324
| null |
Default
| null | null |
null |
{
"abstract": " Spectral shape descriptors have been used extensively in a broad spectrum of\ngeometry processing applications ranging from shape retrieval and segmentation\nto classification. In this pa- per, we propose a spectral graph wavelet\napproach for 3D shape classification using the bag-of-features paradigm. In an\neffort to capture both the local and global geometry of a 3D shape, we present\na three-step feature description framework. First, local descriptors are\nextracted via the spectral graph wavelet transform having the Mexican hat\nwavelet as a generating ker- nel. Second, mid-level features are obtained by\nembedding lo- cal descriptors into the visual vocabulary space using the soft-\nassignment coding step of the bag-of-features model. Third, a global descriptor\nis constructed by aggregating mid-level fea- tures weighted by a geodesic\nexponential kernel, resulting in a matrix representation that describes the\nfrequency of appearance of nearby codewords in the vocabulary. Experimental\nresults on two standard 3D shape benchmarks demonstrate the effective- ness of\nthe proposed classification approach in comparison with state-of-the-art\nmethods.\n",
"title": "Shape Classification using Spectral Graph Wavelets"
}
| null | null | null | null | true | null |
16325
| null |
Default
| null | null |
null |
{
"abstract": " We consider the inverse dynamical problem for the dynamical system with\ndiscrete time associated with the semi-infinite Jacobi matrix. We solve the\ninverse problem for such a system and answer a question on the characterization\nof the inverse data. As a by-product we give a necessary and sufficient\ncondition for the measure on the real line line to be the spectral measure of\nsemi-infinite discrete Schrodinger operator.\n",
"title": "Dynamical inverse problem for Jacobi matrices"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16326
| null |
Validated
| null | null |
null |
{
"abstract": " Jacobsthal's function was recently generalised for the case of paired\nprogressions. It was proven that a specific bound of this function is\nsufficient for the truth of Goldbach's conjecture and of the prime pairs\nconjecture as well. We extended and adapted algorithms described for the\ncomputation of the common Jacobsthal function, and computed respective function\nvalues of the paired Jacobsthal function for primorial numbers for primes up to\n73. All these values fulfil the conjectured specific bound. In addition to this\nnote, we provide a detailed review of the algorithmic approaches and the\ncomplete computational results in ancillary files.\n",
"title": "A short note on the computation of the generalised Jacobsthal function for paired progressions"
}
| null | null | null | null | true | null |
16327
| null |
Default
| null | null |
null |
{
"abstract": " Due to the iterative nature of most nonnegative matrix factorization\n(\\textsc{NMF}) algorithms, initialization is a key aspect as it significantly\ninfluences both the convergence and the final solution obtained. Many\ninitialization schemes have been proposed for NMF, among which one of the most\npopular class of methods are based on the singular value decomposition (SVD).\nHowever, these SVD-based initializations do not satisfy a rather natural\ncondition, namely that the error should decrease as the rank of factorization\nincreases. In this paper, we propose a novel SVD-based \\textsc{NMF}\ninitialization to specifically address this shortcoming by taking into account\nthe SVD factors that were discarded to obtain a nonnegative initialization.\nThis method, referred to as nonnegative SVD with low-rank correction\n(NNSVD-LRC), allows us to significantly reduce the initial error at a\nnegligible additional computational cost using the low-rank structure of the\ndiscarded SVD factors. NNSVD-LRC has two other advantages compared to previous\nSVD-based initializations: (1) it provably generates sparse initial factors,\nand (2) it is faster as it only requires to compute a truncated SVD of rank\n$\\lceil r/2 + 1 \\rceil$ where $r$ is the factorization rank of the sought NMF\ndecomposition (as opposed to a rank-$r$ truncated SVD for other methods). We\nshow on several standard dense and sparse data sets that our new method\ncompetes favorably with state-of-the-art SVD-based initializations for NMF.\n",
"title": "Improved SVD-based Initialization for Nonnegative Matrix Factorization using Low-Rank Correction"
}
| null | null | null | null | true | null |
16328
| null |
Default
| null | null |
null |
{
"abstract": " Because of the open access nature of wireless communications, wireless\nnetworks can suffer from malicious activity, such as jamming attacks, aimed at\nundermining the network's ability to sustain communication links and acceptable\nthroughput. One important consideration when designing networks is to\nappropriately tune the network topology and its connectivity so as to support\nthe communication needs of those participating in the network. This paper\nexamines the problem of interference attacks that are intended to harm\nconnectivity and throughput, and illustrates the method of mapping network\nperformance parameters into the metric of topographic connectivity.\nSpecifically, this paper arrives at anti-jamming strategies aimed at coping\nwith interference attacks through a unified stochastic game. In such a\nframework, an entity trying to protect a network faces a dilemma: (i) the\nunderlying motivations for the adversary can be quite varied, which depends\nlargely on the network's characteristics such as power and distance; (ii) the\nmetrics for such an attack can be incomparable (e.g., network connectivity and\ntotal throughput). To deal with the problem of such incomparable metrics, this\npaper proposes using the attack's expected duration as a unifying metric to\ncompare distinct attack metrics because a longer-duration of unsuccessful\nattack assumes a higher cost. Based on this common metric, a mechanism of\nmaxmin selection for an attack prevention strategy is suggested.\n",
"title": "Connectivity jamming game for physical layer attack in peer to peer networks"
}
| null | null | null | null | true | null |
16329
| null |
Default
| null | null |
null |
{
"abstract": " We study three-dimensional gauge theories based on orthogonal groups.\nDepending on the global form of the group these theories admit discrete\n$\\theta$-parameters, which control the weights in the sum over topologically\ndistinct gauge bundles. We derive level-rank duality for these topological\nfield theories. Our results may also be viewed as level-rank duality for\n$SO(N)_{K}$ Chern-Simons theory in the presence of background fields for\ndiscrete global symmetries. In particular, we include the required counterterms\nand analysis of the anomalies. We couple our theories to charged matter and\ndetermine how these counterterms are shifted by integrating out massive\nfermions. By gauging discrete global symmetries we derive new boson-fermion\ndualities for vector matter, and present the phase diagram of theories with\ntwo-index tensor fermions, thus extending previous results for $SO(N)$ to other\nglobal forms of the gauge group.\n",
"title": "Global Symmetries, Counterterms, and Duality in Chern-Simons Matter Theories with Orthogonal Gauge Groups"
}
| null | null |
[
"Physics"
] | null | true | null |
16330
| null |
Validated
| null | null |
null |
{
"abstract": " Consider the classical Gaussian unitary ensemble of size $N$ and the real\nWishart ensemble $W_N(n,I)$. In the limits as $N \\to \\infty$ and $N/n \\to\n\\gamma > 0$, the expected number of eigenvalues that exit the upper bulk edge\nis less than one, 0.031 and 0.170 respectively, the latter number being\nindependent of $\\gamma$. These statements are consequences of quantitative\nbounds on tail sums of eigenvalues outside the bulk which are established here\nfor applications in high dimensional covariance matrix estimation.\n",
"title": "Tail sums of Wishart and GUE eigenvalues beyond the bulk edge"
}
| null | null | null | null | true | null |
16331
| null |
Default
| null | null |
null |
{
"abstract": " We derive equations of motion for the reduced density matrix of a molecular\nsystem which undergoes energy transfer dynamics competing with fast internal\nconversion channels. Environmental degrees of freedom of such a system have no\ntime to relax to quasi-equilibrium in the electronic excited state of the donor\nmolecule, and thus the conditions of validity of Foerster and Modified Redfield\ntheories in their standard formulations do not apply. We derive non-equilibrium\nversions of the two well-known rate theories and apply them to the case of\ncarotenoid-chlorophyll energy transfer. Although our reduced density matrix\napproach does not account for the formation of vibronic excitons, it still\nconfirms the important role of the donor ground-state vibrational states in\nestablishing the resonance energy transfer conditions. We show that it is\nessential to work with a theory valid in strong system-bath interaction regime\nto obtain correct dependence of the rates on donor-acceptor energy gap.\n",
"title": "Ultrafast Energy Transfer with Competing Channels: Non-equilibrium Foerster and Modified Redfield Theories"
}
| null | null |
[
"Physics"
] | null | true | null |
16332
| null |
Validated
| null | null |
null |
{
"abstract": " The Weyl semimetallic compound Eu2Ir2O7 along with its hole doped derivatives\n(which is achieved by substituting trivalent Eu by divalent Sr) are\ninvestigated through transport, magnetic and calorimetric studies. The\nmetal-insulator transition (MIT) temperature is found to get substantially\nreduced with hole doping and for 10% Sr doping the composition is metallic down\nto temperature as low as 5 K. These doped compounds are found to violate the\nMott-Ioffe-Regel condition for minimum electrical conductivity and show\ndistinct signature of non-Fermi liquid behavior at low temperature. The MIT in\nthe doped compounds does not correlate with the magnetic transition point and\nAnderson-Mott type disorder induced localization may be attributed to the\nground state insulating phase. The observed non-Fermi liquid behavior can be\nunderstood on the basis of disorder induced distribution of spin orbit coupling\nparameter which is markedly different in case of Ir4+ and Ir5+ ions.\n",
"title": "Observation of non-Fermi liquid behavior in hole doped Eu2Ir2O7"
}
| null | null | null | null | true | null |
16333
| null |
Default
| null | null |
null |
{
"abstract": " Winds are predicted to be ubiquitous in low-mass, actively star-forming\ngalaxies. Observationally, winds have been detected in relatively few local\ndwarf galaxies, with even fewer constraints placed on their timescales. Here,\nwe compare galactic outflows traced by diffuse, soft X-ray emission from\nChandra Space Telescope archival observations to the star formation histories\nderived from Hubble Space Telescope imaging of the resolved stellar populations\nin six starburst dwarfs. We constrain the longevity of a wind to have an upper\nlimit of 25 Myr based on galaxies whose starburst activity has already\ndeclined, although a larger sample is needed to confirm this result. We find an\naverage 16% efficiency for converting the mechanical energy of stellar feedback\nto thermal, soft X-ray emission on the 25 Myr timescale, somewhat higher than\nsimulations predict. The outflows have likely been sustained for timescales\ncomparable to the duration of the starbursts (i.e., 100's Myr), after taking\ninto account the time for the development and cessation of the wind. The wind\ntimescales imply that material is driven to larger distances in the\ncircumgalactic medium than estimated by assuming short, 5-10 Myr starburst\ndurations, and that less material is recycled back to the host galaxy on short\ntimescales. In the detected outflows, the expelled hot gas shows various\nmorphologies which are not consistent with a simple biconical outflow\nstructure. The sample and analysis are part of a larger program, the STARBurst\nIRregular Dwarf Survey (STARBIRDS), aimed at understanding the lifecycle and\nimpact of starburst activity in low-mass systems.\n",
"title": "Galactic Outflows, Star Formation Histories, and Timescales in Starburst Dwarf Galaxies from STARBIRDS"
}
| null | null | null | null | true | null |
16334
| null |
Default
| null | null |
null |
{
"abstract": " We present a compact current sensor based on a superconducting microwave\nlumped-element resonator with a nanowire kinetic inductor, operating at 4.2 K.\nThe sensor is suitable for multiplexed readout in GHz range of large-format\narrays of cryogenic detectors. The device consists of a lumped-element resonant\ncircuit, fabricated from a single 4-nm-thick superconducting layer of niobium\nnitride. Thus, the fabrication and operation is significantly simplified in\ncomparison to state-of-the-art approaches. Because the resonant circuit is\ninductively coupled to the feed line the current to be measured can directly be\ninjected without having the need of an impedance matching circuit, reducing the\nsystem complexity. With the proof-of-concept device we measured a current noise\nfloor {\\delta}Imin of 10 pA/Hz1/2 at 10 kHz. Furthermore, we demonstrate the\nability of our sensor to amplify a pulsed response of a superconducting\nnanowire single-photon detector using a GHz-range carrier for effective\nfrequency-division multiplexing.\n",
"title": "Compact microwave kinetic inductance nanowire galvanometer for cryogenic detectors at 4.2 K"
}
| null | null | null | null | true | null |
16335
| null |
Default
| null | null |
null |
{
"abstract": " To understand the evolution of extinction curve, we calculate the dust\nevolution in a galaxy using smoothed particle hydrodynamics simulations\nincorporating stellar dust production, dust destruction in supernova shocks,\ngrain growth by accretion and coagulation, and grain disruption by shattering.\nThe dust species are separated into carbonaceous dust and silicate. The\nevolution of grain size distribution is considered by dividing grain population\ninto large and small gains, which allows us to estimate extinction curves. We\nexamine the dependence of extinction curves on the position, gas density, and\nmetallicity in the galaxy, and find that extinction curves are flat at $t\n\\lesssim 0.3$ Gyr because stellar dust production dominates the total dust\nabundance. The 2175 \\AA\\ bump and far-ultraviolet (FUV) rise become prominent\nafter dust growth by accretion. At $t \\gtrsim 3$ Gyr, shattering works\nefficiently in the outer disc and low density regions, so extinction curves\nshow a very strong 2175 \\AA\\ bump and steep FUV rise. The extinction curves at\n$t\\gtrsim 3$ Gyr are consistent with the Milky Way extinction curve, which\nimplies that we successfully included the necessary dust processes in the\nmodel. The outer disc component caused by stellar feedback has an extinction\ncurves with a weaker 2175 \\AA\\ bump and flatter FUV slope. The strong\ncontribution of carbonaceous dust tends to underproduce the FUV rise in the\nSmall Magellanic Cloud extinction curve, which supports selective loss of small\ncarbonaceous dust in the galaxy. The snapshot at young ages also explain the\nextinction curves in high-redshift quasars.\n",
"title": "Evolution of dust extinction curves in galaxy simulation"
}
| null | null | null | null | true | null |
16336
| null |
Default
| null | null |
null |
{
"abstract": " This is a simple reading report of professor Weiping Zhang's lectures. In\nthis article we will mainly introduce the basic ideas of Witten deformation,\nwhich were first introduced by Edward Witten on, and some applications of it.\nThe first part of this article mainly focuses on deformation of Dirac operators\nand some important analytic facts about the deformed Dirac operators. In the\nsecond part of this article some applications of Witten deformation will be\ngiven, to be more specific, an analytic proof of Poincar$\\acute{e}$-Hopf index\ntheorem and Real Morse Inequilities will be given. Also we will use Witten\ndeformation to prove that the Thom Smale complex is quasi-isomorphism to the\nde-Rham complex (Witten suggested that Thom Smale complex can be recovered from\nhis deformation and his suggestion was first realized by Helffer and\nSj$\\ddot{o}$strand, the proof in this article is given by Bismut and Zhang).\nAnd in the last part an analytic proof of Atiyah vanishing theorem via Witten\ndeformation will be given.\n",
"title": "Witten Deformation And Some Topics Relating To It"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16337
| null |
Validated
| null | null |
null |
{
"abstract": " Recent work using plasmonic nanosensors in a clinically relevant detection\nassay reports extreme sensitivity based upon a mechanism termed 'inverse\nsensitivity', whereby reduction of substrate concentration increases reaction\nrate, even at the single-molecule limit. This near-homoeopathic mechanism\ncontradicts the law of mass action. The assay involves deposition of silver\natoms upon gold nanostars, changing their absorption spectrum. Multiple\nadditional aspects of the assay appear to be incompatible with settled chemical\nknowledge, in particular the detection of tiny numbers of silver atoms on a\nbackground of the classic 'silver mirror reaction'. Finally, it is estimated\nhere that the reported spectral changes require some 2.5E11 times more silver\natoms than are likely to be produced. It is suggested that alternative\nexplanations must be sought for the original observations.\n",
"title": "Inverse sensitivity of plasmonic nanosensors at the single-molecule limit"
}
| null | null | null | null | true | null |
16338
| null |
Default
| null | null |
null |
{
"abstract": " We consider a two-dimensional Ginzburg-Landau problem on an arbitrary domain\nwith a finite number of vanishingly small circular holes. A special choice of\nscaling relation between the material and geometric parameters (Ginzburg-Landau\nparameter vs hole radius) is motivated by a recently dsicovered phenomenon of\nvortex phase separation in superconducting composites. We show that, for each\nhole, the degrees of minimizers of the Ginzburg-Landau problems in the classes\nof $\\mathbb S^1$-valued and $\\mathbb C$-valued maps, respectively, are the\nsame. The presence of two parameters that are widely separated on a logarithmic\nscale constitutes the principal difficulty of the analysis that is based on\nenergy decomposition techniques.\n",
"title": "On approximation of Ginzburg-Landau minimizers by $\\mathbb S^1$-valued maps in domains with vanishingly small holes"
}
| null | null | null | null | true | null |
16339
| null |
Default
| null | null |
null |
{
"abstract": " This two-part paper details a theory of solvability for the power flow\nequations in lossless power networks. In Part I, we derived a new formulation\nof the lossless power flow equations, which we term the fixed-point power flow.\nThe model is parameterized by several graph-theoretic matrices -- the power\nnetwork stiffness matrices -- which quantify the internal coupling strength of\nthe network. In Part II, we leverage the fixed-point power flow to study power\nflow solvability. For radial networks, we derive parametric conditions which\nguarantee the existence and uniqueness of a high-voltage power flow solution,\nand construct examples for which the conditions are also necessary. Our\nconditions (i) imply convergence of the fixed-point power flow iteration, (ii)\nunify and extend recent results on solvability of decoupled power flow, (iii)\ndirectly generalize the textbook two-bus system results, and (iv) provide new\ninsights into how the structure and parameters of the grid influence power flow\nsolvability.\n",
"title": "A Theory of Solvability for Lossless Power Flow Equations -- Part II: Conditions for Radial Networks"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16340
| null |
Validated
| null | null |
null |
{
"abstract": " We have introduced evolutionary game dynamics to a one-dimensional\ncellular-automaton to investigate evolution and maintenance of cooperative\navoiding behavior of self-driven particles in bidirectional flow. In our model,\nthere are two kinds of particles, which are right-going particles and\nleft-going particles. They often face opponent particles, so that they swerve\nto the right or left stochastically in order to avoid conflicts. The particles\nreinforce their preferences of the swerving direction after their successful\navoidance. The preference is also weakened by memory-loss effect.\nResult of our simulation indicates that cooperative avoiding behavior is\nachieved, i.e., swerving directions of the particles are unified, when the\ndensity of particles is close to 1/2 and the memory-loss rate is small.\nFurthermore, when the right-going particles occupy the majority of the system,\nwe observe that their flow increases when the number of left-going particles,\nwhich prevent the smooth movement of right-going particles, becomes large. It\nis also investigated that the critical memory-loss rate of the cooperative\navoiding behavior strongly depends on the size of the system. Small system can\nprolong the cooperative avoiding behavior in wider range of memory-loss rate\nthan large system.\n",
"title": "Coordination game in bidirectional flow"
}
| null | null | null | null | true | null |
16341
| null |
Default
| null | null |
null |
{
"abstract": " Quantum Moves is a citizen science game that investigates the ability of\nhumans to solve complex physics challenges that are intractable for computers.\nDuring the launch of Quantum Moves in April 2016 the game's leaderboard\nfunction broke down resulting in a \"no leaderboard\" game experience for some\nplayers for a couple of days (though their scores were still displayed). The\nsubsequent quick fix of an all-time Top 5 leaderboard, and the following\nlong-term implementation of a personalized relative-position (infinite)\nleaderboard provided us with a unique opportunity to compare and investigate\nthe effect of different leaderboard implementations on player performance in a\npoints-driven citizen science game.\nAll three conditions were live sequentially during the game's initial influx\nof more than 150.000 players that stemmed from global press attention on\nQuantum Moves due the publication of a Nature paper about the use of Quantum\nMoves in solving a specific quantum physics problem. Thus, it has been possible\nto compare the three conditions and their influence on the performance (defined\nas a player's quality of game play related to a high-score) of over 4500 new\nplayers. These 4500 odd players in our three leaderboard-conditions have a\nsimilar demographic background based upon the time-window over which the\nimplementations occurred and controlled against Player ID tags. Our results\nplaced Condition 1 experience over condition 3 and in some cases even over\ncondition 2 which goes against the general assumption that leaderboards enhance\ngameplay and its subsequent overuse as a an oft-relied upon element that\ndesigners slap onto a game to enhance said appeal. Our study thus questions the\nuse of leaderboards as general performance enhancers in gamification contexts\nand brings some empirical rigor to an often under-reported but overused\nphenomenon.\n",
"title": "Leaderboard Effects on Player Performance in a Citizen Science Game"
}
| null | null | null | null | true | null |
16342
| null |
Default
| null | null |
null |
{
"abstract": " In seismic monitoring one is usually interested in the response of a changing\ntarget zone, embedded in a static inhomogeneous medium. We introduce an\nefficient method which predicts reflection responses at the earth's surface for\ndifferent target-zone scenarios, from a single reflection response at the\nsurface and a model of the changing target zone. The proposed process consists\nof two main steps. In the first step, the response of the original target zone\nis removed from the reflection response, using the Marchenko method. In the\nsecond step, the modelled response of a new target zone is inserted between the\noverburden and underburden responses. The method fully accounts for all orders\nof multiple scattering and, in the elastodynamic case, for wave conversion. For\nmonitoring purposes, only the second step needs to be repeated for each\ntarget-zone model. Since the target zone covers only a small part of the entire\nmedium, the proposed method is much more efficient than repeated modelling of\nthe entire reflection response.\n",
"title": "Marchenko-based target replacement, accounting for all orders of multiple reflections"
}
| null | null |
[
"Physics"
] | null | true | null |
16343
| null |
Validated
| null | null |
null |
{
"abstract": " Two procedures for checking Bayesian models are compared using a simple test\nproblem based on the local Hubble expansion. Over four orders of magnitude,\np-values derived from a global goodness-of-fit criterion for posterior\nprobability density functions (Lucy 2017) agree closely with posterior\npredictive p-values. The former can therefore serve as an effective proxy for\nthe difficult-to-calculate posterior predictive p-values.\n",
"title": "Bayesian model checking: A comparison of tests"
}
| null | null | null | null | true | null |
16344
| null |
Default
| null | null |
null |
{
"abstract": " Let $K$ be a field of characteristic zero, $\\mathcal A$ a $K$-algebra and\n$\\delta$ a $K$-derivation of $\\mathcal A$ or $K$-$\\mathcal E$-derivation of\n$\\mathcal A$ (i.e., $\\delta=\\operatorname{Id}_A-\\phi$ for some $K$-algebra\nendomorphism $\\phi$ of $\\mathcal A$). Motivated by the Idempotent conjecture\nproposed in [Z4], we first show that for every idempotent $e$ lying in both the\nkernel ${\\mathcal A}^\\delta$ and the image $\\operatorname{Im}\\delta \\!:=\\delta\n({\\mathcal A})$ of $\\delta$, the principal ideal $(e)\\subseteq\n\\operatorname{Im} \\delta$ if $\\delta$ is a locally finite $K$-derivation or a\nlocally nilpotent $K$-$\\mathcal E$-derivation of $\\mathcal A$; and $e{\\mathcal\nA}, {\\mathcal A}e \\subseteq \\operatorname{Im} \\delta$ if $\\delta$ is a locally\nfinite $K$-$\\mathcal E$-derivation of $\\mathcal A$. Consequently, the\nIdempotent conjecture holds for all locally finite $K$-derivations and all\nlocally nilpotent $K$-$\\mathcal E$-derivations of $\\mathcal A$. We then show\nthat $1_{\\mathcal A} \\in \\operatorname{Im} \\delta$, (if and) only if $\\delta$\nis surjective, which generalizes the same result [GN, W] for locally nilpotent\n$K$-derivations of commutative $K$-algebras to locally finite $K$-derivations\nand $K$-$\\mathcal E$-derivations $\\delta$ of all $K$-algebras $\\mathcal A$.\n",
"title": "Idempotents in Intersection of the Kernel and the Image of Locally Finite Derivations and $\\mathcal E$-derivations"
}
| null | null | null | null | true | null |
16345
| null |
Default
| null | null |
null |
{
"abstract": " Following Roos, we say that a local ring $R$ is good if all finitely\ngenerated $R$-modules have rational Poincaré series over $R$, sharing a\ncommon denominator. Rings with the Backelin-Roos property and generalised Golod\nrings are good due to results of Levin and Avramov respectively. Let $R$ be an\nArtinian Gorenstein local ring. The ring $R$ is shown to have the Backelin-Roos\nproperty if $R/ soc(R)$ is a Golod ring. Furthermore the ring $R$ is\ngeneralised Golod if and only if $R/ soc(R)$ is so.\nWe explore when connected sums of Artinian Gorenstein local rings are good.\nWe provide a uniform argument to show that stretched, almost stretched\nGorenstein rings are good and show further that the Auslander-Reiten conjecture\nholds true for such rings. We prove that Gorenstein rings of multiplicity at\nmost eleven are good. We recover a result of Rossi-Şega on the good\nproperty of compressed Gorenstein local rings in a stronger form by a shorter\nargument.\n",
"title": "A connection between the good property of an Artinian Gorenstein local ring and that of its quotient modulo socle"
}
| null | null | null | null | true | null |
16346
| null |
Default
| null | null |
null |
{
"abstract": " Population protocols are a well established model of computation by\nanonymous, identical finite state agents. A protocol is well-specified if from\nevery initial configuration, all fair executions reach a common consensus. The\ncentral verification question for population protocols is the\nwell-specification problem: deciding if a given protocol is well-specified.\nEsparza et al. have recently shown that this problem is decidable, but with\nvery high complexity: it is at least as hard as the Petri net reachability\nproblem, which is EXPSPACE-hard, and for which only algorithms of non-primitive\nrecursive complexity are currently known.\nIn this paper we introduce the class WS3 of well-specified strongly-silent\nprotocols and we prove that it is suitable for automatic verification. More\nprecisely, we show that WS3 has the same computational power as general\nwell-specified protocols, and captures standard protocols from the literature.\nMoreover, we show that the membership problem for WS3 reduces to solving\nboolean combinations of linear constraints over N. This allowed us to develop\nthe first software able to automatically prove well-specification for all of\nthe infinitely many possible inputs.\n",
"title": "Towards Efficient Verification of Population Protocols"
}
| null | null | null | null | true | null |
16347
| null |
Default
| null | null |
null |
{
"abstract": " The purpose of this note is to point out that simplicial methods and the\nwell-known Dold-Kan construction in simplicial homotopy theory can be\nfruitfully applied to convert link homology theories into homotopy theories.\nDold and Kan prove that there is a functor from the category of chain complexes\nover a commutative ring with unit to the category of simplicial objects over\nthat ring such that chain homotopic maps go to homotopic maps in the simplicial\ncategory. Furthermore, this is an equivalence of categories. In this way, given\na link homology theory, we construct a mapping taking link diagrams to a\ncategory of simplicial objects such that up to looping or delooping, link\ndiagrams related by Reidemeister moves will give rise to homotopy equivalent\nsimplicial objects, and the homotopy groups of these objects will be equal to\nthe link homology groups of the original link homology theory. The construction\nis independent of the particular link homology theory. A simplifying point in\nproducing a homotopy simplicial object in relation to a chain complex occurs\nwhen the chain complex is itself derived (via face maps) from a simplicial\nobject that satisfies the Kan extension condition. Under these circumstances\none can use that simplicial object rather than apply the Dold-Kan functor to\nthe chain complex. We will give examples of this situation in regard to\nKhovanov homology. We will investigate detailed working out of this\ncorrespondence in separate papers. The purpose of this note is to announce the\nbasic relationships for using simplicial methods in this domain. Thus we do\nmore than just quote the Dold-Kan Theorem. We give a review of simplicial\ntheory and we point to specific constructions, particularly in relation to\nKhovanov homology, that can be used to make simplicial homotopy types directly.\n",
"title": "Simplicial Homotopy Theory, Link Homology and Khovanov Homology"
}
| null | null | null | null | true | null |
16348
| null |
Default
| null | null |
null |
{
"abstract": " To efficiently answer queries, datalog systems often materialise all\nconsequences of a datalog program, so the materialisation must be updated\nwhenever the input facts change. Several solutions to the materialisation\nupdate problem have been proposed. The Delete/Rederive (DRed) and the\nBackward/Forward (B/F) algorithms solve this problem for general datalog, but\nboth contain steps that evaluate rules 'backwards' by matching their heads to a\nfact and evaluating the partially instantiated rule bodies as queries. We show\nthat this can be a considerable source of overhead even on very small updates.\nIn contrast, the Counting algorithm does not evaluate the rules 'backwards',\nbut it can handle only nonrecursive rules. We present two hybrid approaches\nthat combine DRed and B/F with Counting so as to reduce or even eliminate\n'backward' rule evaluation while still handling arbitrary datalog programs. We\nshow empirically that our hybrid algorithms are usually significantly faster\nthan existing approaches, sometimes by orders of magnitude.\n",
"title": "Optimised Maintenance of Datalog Materialisations"
}
| null | null | null | null | true | null |
16349
| null |
Default
| null | null |
null |
{
"abstract": " Let (M,g) be a complete noncompact riemannian manifold with bounded geometry\nand parallel Ricci curvature. We show that some operators, \"affine\" relatively\nto the Ricci curvature, are locally invertible, in some classical Sobolev\nspaces, near the metric g.\n",
"title": "Inversion of some curvature operators near a parallel Ricci metric II: Non-compact manifold with bounded geometry"
}
| null | null | null | null | true | null |
16350
| null |
Default
| null | null |
null |
{
"abstract": " Bizarrely shaped voting districts are frequently lambasted as likely\ninstances of gerrymandering. In order to systematically identify such\ninstances, researchers have devised several tests for so-called geographic\ncompactness (i.e., shape niceness). We demonstrate that under certain\nconditions, a party can gerrymander a competitive state into geographically\ncompact districts to win an average of over 70% of the districts. Our results\nsuggest that geometric features alone may fail to adequately combat partisan\ngerrymandering.\n",
"title": "Partisan gerrymandering with geographically compact districts"
}
| null | null | null | null | true | null |
16351
| null |
Default
| null | null |
null |
{
"abstract": " Disordered many-particle hyperuniform systems are exotic amorphous states of\nmatter that lie between crystals and liquids. Hyperuniform systems have\nattracted recent attention because they are endowed with novel transport and\noptical properties. Recently, the hyperuniformity concept has been generalized\nto characterize scalar fields, two-phase media and random vector fields. In\nthis paper, we devise methods to explicitly construct hyperuniform scalar\nfields. We investigate explicitly spatial patterns generated from Gaussian\nrandom fields, which have been used to model the microwave background radiation\nand heterogeneous materials, the Cahn-Hilliard equation for spinodal\ndecomposition, and Swift-Hohenberg equations that have been used to model\nemergent pattern formation, including Rayleigh-B{\\' e}nard convection. We show\nthat the Gaussian random scalar fields can be constructed to be hyperuniform.\nWe also numerically study the time evolution of spinodal decomposition patterns\nand demonstrate that these patterns are hyperuniform in the scaling regime.\nMoreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg\nequation are effectively hyperuniform. We show that thresholding a hyperuniform\nGaussian random field to produce a two-phase random medium tends to destroy the\nhyperuniformity of the progenitor scalar field. We then propose guidelines to\nachieve effectively hyperuniform two-phase media derived from thresholded\nnon-Gaussian fields. Our investigation paves the way for new research\ndirections to characterize the large-structure spatial patterns that arise in\nphysics, chemistry, biology and ecology. Moreover, our theoretical results are\nexpected to guide experimentalists to synthesize new classes of hyperuniform\nmaterials with novel physical properties via coarsening processes and using\nstate-of-the-art techniques, such as stereolithography and 3D printing.\n",
"title": "Random Scalar Fields and Hyperuniformity"
}
| null | null |
[
"Physics"
] | null | true | null |
16352
| null |
Validated
| null | null |
null |
{
"abstract": " Modern industrial automatic machines and robotic cells are equipped with\nhighly complex human-machine interfaces (HMIs) that often prevent human\noperators from an effective use of the automatic systems. In particular, this\napplies to vulnerable users, such as those with low experience or education\nlevel, the elderly and the disabled. To tackle this issue, it becomes necessary\nto design user-oriented HMIs, which adapt to the capabilities and skills of\nusers, thus compensating their limitations and taking full advantage of their\nknowledge. In this paper, we propose a methodological approach to the design of\ncomplex adaptive human-machine systems that might be inclusive of all users, in\nparticular the vulnerable ones. The proposed approach takes into account both\nthe technical requirements and the requirements for ethical, legal and social\nimplications (ELSI) for the design of automatic systems. The technical\nrequirements derive from a thorough analysis of three use cases taken from the\nEuropean project INCLUSIVE. To achieve the ELSI requirements, the MEESTAR\napproach is combined with the specific legal issues for occupational systems\nand requirements of the target users.\n",
"title": "Methodological Approach for the Design of a Complex Inclusive Human-Machine System"
}
| null | null | null | null | true | null |
16353
| null |
Default
| null | null |
null |
{
"abstract": " Most approaches to machine learning from electronic health data can only\npredict a single endpoint. Here, we present an alternative that uses\nunsupervised deep learning to simulate detailed patient trajectories. We use\ndata comprising 18-month trajectories of 44 clinical variables from 1908\npatients with Mild Cognitive Impairment or Alzheimer's Disease to train a model\nfor personalized forecasting of disease progression. We simulate synthetic\npatient data including the evolution of each sub-component of cognitive exams,\nlaboratory tests, and their associations with baseline clinical\ncharacteristics, generating both predictions and their confidence intervals.\nOur unsupervised model predicts changes in total ADAS-Cog scores with the same\naccuracy as specifically trained supervised models and identifies\nsub-components associated with word recall as predictive of progression. The\nability to simultaneously simulate dozens of patient characteristics is a\ncrucial step towards personalized medicine for Alzheimer's Disease.\n",
"title": "Deep learning for comprehensive forecasting of Alzheimer's Disease progression"
}
| null | null |
[
"Statistics"
] | null | true | null |
16354
| null |
Validated
| null | null |
null |
{
"abstract": " We present new viscosity measurements of a synthetic silicate system\nconsidered an analogue for the lava erupted on the surface of Mercury. In\nparticular, we focus on the northern volcanic plains (NVP), which correspond to\nthe largest lava flows on Mercury and possibly in the Solar System.\nHigh-temperature viscosity measurements were performed at both superliquidus\n(up to 1736 K) and subliquidus conditions (1569-1502 K) to constrain the\nviscosity variations as a function of crystallinity (from 0 to 28\\%) and shear\nrate (from 0.1 to 5 s 1). Melt viscosity shows moderate variations (4-16 Pa s)\nin the temperature range of 1736-1600 K. Experiments performed below the\nliquidus temperature show an increase in viscosity as shear rate decreases from\n5 to 0.1 s 1, resulting in a shear thinning behavior, with a decrease in\nviscosity of 1 log unit. The low viscosity of the studied composition may\nexplain the ability of NVP lavas to cover long distances, on the order of\nhundreds of kilometers in a turbulent flow regime. Using our experimental data\nwe estimate that lava flows with thickness of 1, 5, and 10 m are likely to have\nvelocities of 4.8, 6.5, and 7.2 m/s, respectively, on a 5 degree ground slope.\nNumerical modeling incorporating both the heat loss of the lavas and its\npossible crystallization during emplacement allows us to infer that high\neffusion rates (>10,000 m3/s) are necessary to cover the large distances\nindicated by satellite data from the MErcury Surface, Space ENvironment,\nGEochemistry, and Ranging spacecraft.\n",
"title": "Experimental constraints on the rheology, eruption and emplacement dynamics of analog lavas comparable to Mercury's northern volcanic plains"
}
| null | null |
[
"Physics"
] | null | true | null |
16355
| null |
Validated
| null | null |
null |
{
"abstract": " The problem of estimating a high-dimensional sparse vector\n$\\boldsymbol{\\theta} \\in \\mathbb{R}^n$ from an observation in i.i.d. Gaussian\nnoise is considered. The performance is measured using squared-error loss. An\nempirical Bayes shrinkage estimator, derived using a Bernoulli-Gaussian prior,\nis analyzed and compared with the well-known soft-thresholding estimator. We\nobtain concentration inequalities for the Stein's unbiased risk estimate and\nthe loss function of both estimators. The results show that for large $n$, both\nthe risk estimate and the loss function concentrate on deterministic values\nclose to the true risk.\nDepending on the underlying $\\boldsymbol{\\theta}$, either the proposed\nempirical Bayes (eBayes) estimator or soft-thresholding may have smaller loss.\nWe consider a hybrid estimator that attempts to pick the better of the\nsoft-thresholding estimator and the eBayes estimator by comparing their risk\nestimates. It is shown that: i) the loss of the hybrid estimator concentrates\non the minimum of the losses of the two competing estimators, and ii) the risk\nof the hybrid estimator is within order $\\frac{1}{\\sqrt{n}}$ of the minimum of\nthe two risks. Simulation results are provided to support the theoretical\nresults. Finally, we use the eBayes and hybrid estimators as denoisers in the\napproximate message passing (AMP) algorithm for compressed sensing, and show\nthat their performance is superior to the soft-thresholding denoiser in a wide\nrange of settings.\n",
"title": "Empirical Bayes Estimators for High-Dimensional Sparse Vectors"
}
| null | null | null | null | true | null |
16356
| null |
Default
| null | null |
null |
{
"abstract": " We implemented various DFT+U schemes, including the ACBN0 self-consistent\ndensity-functional version of the DFT+U method [Phys. Rev. X 5, 011006 (2015)]\nwithin the massively parallel real-space time-dependent density functional\ntheory (TDDFT) code Octopus. We further extended the method to the case of the\ncalculation of response functions with real-time TDDFT+U and to the description\nof non-collinear spin systems. The implementation is tested by investigating\nthe ground-state and optical properties of various transition metal oxides,\nbulk topological insulators, and molecules. Our results are found to be in good\nagreement with previously published results for both the electronic band\nstructure and structural properties. The self consistent calculated values of U\nand J are also in good agreement with the values commonly used in the\nliterature. We found that the time-dependent extension of the self-consistent\nDFT+U method yields improved optical properties when compared to the empirical\nTDDFT+U scheme. This work thus opens a different theoretical framework to\naddress the non equilibrium properties of correlated systems.\n",
"title": "Self-consistent DFT+U method for real-space time-dependent density functional theory calculations"
}
| null | null | null | null | true | null |
16357
| null |
Default
| null | null |
null |
{
"abstract": " Tests on $B-L$ symmetry breaking models are important probes to search for\nnew physics. One proposed model with $\\Delta(B-L)=2$ involves the oscillations\nof a neutron to an antineutron. In this paper a new limit on this process is\nderived for the data acquired from all three operational phases of the Sudbury\nNeutrino Observatory experiment. The search was concentrated in oscillations\noccurring within the deuteron, and 23 events are observed against a background\nexpectation of 30.5 events. These translate to a lower limit on the nuclear\nlifetime of $1.48\\times 10^{31}$ years at 90% confidence level (CL) when no\nrestriction is placed on the signal likelihood space (unbounded).\nAlternatively, a lower limit on the nuclear lifetime was found to be\n$1.18\\times 10^{31}$ years at 90% CL when the signal was forced into a positive\nlikelihood space (bounded). Values for the free oscillation time derived from\nvarious models are also provided in this article. This is the first search for\nneutron-antineutron oscillation with the deuteron as a target.\n",
"title": "The search for neutron-antineutron oscillations at the Sudbury Neutrino Observatory"
}
| null | null | null | null | true | null |
16358
| null |
Default
| null | null |
null |
{
"abstract": " Gradient boosted decision trees are a popular machine learning technique, in\npart because of their ability to give good accuracy with small models. We\ndescribe two extensions to the standard tree boosting algorithm designed to\nincrease this advantage. The first improvement extends the boosting formalism\nfrom scalar-valued trees to vector-valued trees. This allows individual trees\nto be used as multiclass classifiers, rather than requiring one tree per class,\nand drastically reduces the model size required for multiclass problems. We\nalso show that some other popular vector-valued gradient boosted trees\nmodifications fit into this formulation and can be easily obtained in our\nimplementation. The second extension, layer-by-layer boosting, takes smaller\nsteps in function space, which is empirically shown to lead to a faster\nconvergence and to a more compact ensemble. We have added both improvements to\nthe open-source TensorFlow Boosted trees (TFBT) package, and we demonstrate\ntheir efficacy on a variety of multiclass datasets. We expect these extensions\nwill be of particular interest to boosted tree applications that require small\nmodels, such as embedded devices, applications requiring fast inference, or\napplications desiring more interpretable models.\n",
"title": "Compact Multi-Class Boosted Trees"
}
| null | null | null | null | true | null |
16359
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, the problem of road friction prediction from a fleet of\nconnected vehicles is investigated. A framework is proposed to predict the road\nfriction level using both historical friction data from the connected cars and\ndata from weather stations, and comparative results from different methods are\npresented. The problem is formulated as a classification task where the\navailable data is used to train three machine learning models including\nlogistic regression, support vector machine, and neural networks to predict the\nfriction class (slippery or non-slippery) in the future for specific road\nsegments. In addition to the friction values, which are measured by moving\nvehicles, additional parameters such as humidity, temperature, and rainfall are\nused to obtain a set of descriptive feature vectors as input to the\nclassification methods. The proposed prediction models are evaluated for\ndifferent prediction horizons (0 to 120 minutes in the future) where the\nevaluation shows that the neural networks method leads to more stable results\nin different conditions.\n",
"title": "Road Friction Estimation for Connected Vehicles using Supervised Machine Learning"
}
| null | null | null | null | true | null |
16360
| null |
Default
| null | null |
null |
{
"abstract": " Collective animal behaviors are paradigmatic examples of fully decentralized\noperations involving complex collective computations such as collective turns\nin flocks of birds or collective harvesting by ants. These systems offer a\nunique source of inspiration for the development of fault-tolerant and\nself-healing multi-robot systems capable of operating in dynamic environments.\nSpecifically, swarm robotics emerged and is significantly growing on these\npremises. However, to date, most swarm robotics systems reported in the\nliterature involve basic computational tasks---averages and other algebraic\noperations. In this paper, we introduce a novel Collective computing framework\nbased on the swarming paradigm, which exhibits the key innate features of\nswarms: robustness, scalability and flexibility. Unlike Edge computing, the\nproposed Collective computing framework is truly decentralized and does not\nrequire user intervention or additional servers to sustain its operations. This\nCollective computing framework is applied to the complex task of collective\nmapping, in which multiple robots aim at cooperatively map a large area. Our\nresults confirm the effectiveness of the cooperative strategy, its robustness\nto the loss of multiple units, as well as its scalability. Furthermore, the\ntopology of the interconnecting network is found to greatly influence the\nperformance of the collective action.\n",
"title": "A Decentralized Mobile Computing Network for Multi-Robot Systems Operations"
}
| null | null | null | null | true | null |
16361
| null |
Default
| null | null |
null |
{
"abstract": " It has been shown that increasing model depth improves the quality of neural\nmachine translation. However, different architectural variants to increase\nmodel depth have been proposed, and so far, there has been no thorough\ncomparative study.\nIn this work, we describe and evaluate several existing approaches to\nintroduce depth in neural machine translation. Additionally, we explore novel\narchitectural variants, including deep transition RNNs, and we vary how\nattention is used in the deep decoder. We introduce a novel \"BiDeep\" RNN\narchitecture that combines deep transition RNNs and stacked RNNs.\nOur evaluation is carried out on the English to German WMT news translation\ndataset, using a single-GPU machine for both training and inference. We find\nthat several of our proposed architectures improve upon existing approaches in\nterms of speed and translation quality. We obtain best improvements with a\nBiDeep RNN of combined depth 8, obtaining an average improvement of 1.5 BLEU\nover a strong shallow baseline.\nWe release our code for ease of adoption.\n",
"title": "Deep Architectures for Neural Machine Translation"
}
| null | null | null | null | true | null |
16362
| null |
Default
| null | null |
null |
{
"abstract": " Speech recognition systems have achieved high recognition performance for\nseveral tasks. However, the performance of such systems is dependent on the\ntremendously costly development work of preparing vast amounts of task-matched\ntranscribed speech data for supervised training. The key problem here is the\ncost of transcribing speech data. The cost is repeatedly required to support\nnew languages and new tasks. Assuming broad network services for transcribing\nspeech data for many users, a system would become more self-sufficient and more\nuseful if it possessed the ability to learn from very light feedback from the\nusers without annoying them. In this paper, we propose a general reinforcement\nlearning framework for speech recognition systems based on the policy gradient\nmethod. As a particular instance of the framework, we also propose a hypothesis\nselection-based reinforcement learning method. The proposed framework provides\na new view for several existing training and adaptation methods. The\nexperimental results show that the proposed method improves the recognition\nperformance compared to unsupervised adaptation.\n",
"title": "Reinforcement Learning of Speech Recognition System Based on Policy Gradient and Hypothesis Selection"
}
| null | null | null | null | true | null |
16363
| null |
Default
| null | null |
null |
{
"abstract": " Ad-hoc Social Networks have become popular to support novel applications\nrelated to location-based mobile services that are of great importance to users\nand businesses. Unlike traditional social services using a centralized server\nto fetch location, ad-hoc social network services support infrastructure less\nreal-time social networking. It allows users to collaborate and share views\nanytime anywhere. However, current ad-hoc social network applications are\neither not available without rooting the mobile phones or don't filter the\nnearby users based on common interests without a centralized server. This paper\npresents an architecture and implementation of social networks on commercially\navailable mobile devices that allow broadcasting name and a limited number of\nkeywords representing users' interests without any connection in a nearby\nregion to facilitate matching of interests. The broadcasting region creates a\ndigital aura and is limited by WiFi region that is around 200 meters. The\napplication connects users to form a group based on their profile or interests\nusing peer-to-peer communication mode without using any centralized networking\nor profile matching infrastructure. The peer-to-peer group can be used for\nprivate communication when the network is not available.\n",
"title": "Profile-Based Ad Hoc Social Networking Using Wi-Fi Direct on the Top of Android"
}
| null | null | null | null | true | null |
16364
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents an a posteriori error analysis for a coupled continuum\npipe-flow/Darcy model in karst aquifers. We consider a unified anisotropic\nfinite element discretization (i.e. elements with very large aspect ratio). Our\nanalysis covers two-dimensional domains, conforming and nonconforming\ndiscretizations as well as different elements. Many examples of finite elements\nthat are covered by analysis are presented. From the finite element solution,\nthe error estimators are constructed and based on the residual of model\nequations. Lower and upper error bounds form the main result with minimal\nassumptions on the elements. The lower error bound is uniform with respect to\nthe mesh anisotropy in the entire domain. The upper error bound depends on a\nproper alignment of the anisotropy of the mesh which is a common feature of\nanisotropic error estimation. In the special case of isotropic meshes, the\nresults simplify, and upper and lower error bounds hold unconditionally.\n",
"title": "An a posteriori error analysis for a coupled continuum pipe-flow/Darcy model in Karst aquifers: anisotropic and isotropic discretizations"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16365
| null |
Validated
| null | null |
null |
{
"abstract": " Security is a critical and vital task in wireless sensor networks, therefore\ndifferent key management systems have been proposed, many of which are based on\nsymmetric cryptography. Such systems are very energy efficient, but they lack\nsome other desirable characteristics. On the other hand, systems based on\npublic key cryptography have those desirable characteristics, but they consume\nmore energy. Recently based on authenticated messages from base station a new\nPKC based key agreement protocol was proposed. We show this method is\nsusceptible to a form of denial of service attack where resources of the\nnetwork can be exhausted with bogus messages. Then, we propose two different\nimprovements to solve this vulnerability. Simulation results show that these\nnew protocols retain desirable characteristics of the basic method and solve\nits deficiencies.\n",
"title": "A Hybrid DOS-Tolerant PKC-Based Key Management System for WSNs"
}
| null | null | null | null | true | null |
16366
| null |
Default
| null | null |
null |
{
"abstract": " The geologic activity at Enceladus's south pole remains unexplained, though\ntidal deformations are probably the ultimate cause. Recent gravity and\nlibration data indicate that Enceladus's icy crust floats on a global ocean, is\nrather thin, and has a strongly non-uniform thickness. Tidal effects are\nenhanced by crustal thinning at the south pole, so that realistic models of\ntidal tectonics and dissipation should take into account the lateral variations\nof shell structure. I construct here the theory of non-uniform viscoelastic\nthin shells, allowing for depth-dependent rheology and large lateral variations\nof shell thickness and rheology. Coupling to tides yields two 2D linear partial\ndifferential equations of the 4th order on the sphere which take into account\nself-gravity, density stratification below the shell, and core viscoelasticity.\nIf the shell is laterally uniform, the solution agrees with analytical formulas\nfor tidal Love numbers; errors on displacements and stresses are less than 5%\nand 15%, respectively, if the thickness is less than 10% of the radius. If the\nshell is non-uniform, the tidal thin shell equations are solved as a system of\ncoupled linear equations in a spherical harmonic basis. Compared to finite\nelement models, thin shell predictions are similar for the deformations due to\nEnceladus's pressurized ocean, but differ for the tides of Ganymede. If\nEnceladus's shell is conductive with isostatic thickness variations, surface\nstresses are approximately inversely proportional to the local shell thickness.\nThe radial tide is only moderately enhanced at the south pole. The combination\nof crustal thinning and convection below the poles can amplify south polar\nstresses by a factor of 10, but it cannot explain the apparent time lag between\nthe maximum plume brightness and the opening of tiger stripes. In a second\npaper, I will study tidal dissipation in a non-uniform crust.\n",
"title": "Enceladus's crust as a non-uniform thin shell: I Tidal deformations"
}
| null | null | null | null | true | null |
16367
| null |
Default
| null | null |
null |
{
"abstract": " There has been a recent surge of interest in studying permutation-based\nmodels for ranking from pairwise comparison data. Despite being structurally\nricher and more robust than parametric ranking models, permutation-based models\nare less well understood statistically and generally lack efficient learning\nalgorithms. In this work, we study a prototype of permutation-based ranking\nmodels, namely, the noisy sorting model. We establish the optimal rates of\nlearning the model under two sampling procedures. Furthermore, we provide a\nfast algorithm to achieve near-optimal rates if the observations are sampled\nindependently. Along the way, we discover properties of the symmetric group\nwhich are of theoretical interest.\n",
"title": "Minimax Rates and Efficient Algorithms for Noisy Sorting"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
16368
| null |
Validated
| null | null |
null |
{
"abstract": " System development often involves decisions about how a high-level design is\nto be implemented using primitives from a low-level platform. Certain\ndecisions, however, may introduce undesirable behavior into the resulting\nimplementation, possibly leading to a violation of a desired property that has\nalready been established at the design level. In this paper, we introduce the\nproblem of synthesizing a property-preserving platform mapping: A set of\nimplementation decisions ensuring that a desired property is preserved from a\nhigh-level design into a low-level platform implementation. We provide a\nformalization of the synthesis problem and propose a technique for synthesizing\na mapping based on symbolic constraint search. We describe our prototype\nimplementation, and a real-world case study demonstrating the application of\nour technique to synthesizing secure mappings for the popular web authorization\nprotocols OAuth 1.0 and 2.0.\n",
"title": "Automated Synthesis of Secure Platform Mappings"
}
| null | null | null | null | true | null |
16369
| null |
Default
| null | null |
null |
{
"abstract": " Cyber defence exercises are intensive, hands-on learning events for teams of\nprofessionals who gain or develop their skills to successfully prevent and\nrespond to cyber attacks. The exercises mimic the real-life, routine operation\nof an organization which is being attacked by an unknown offender. Teams of\nlearners receive very limited immediate feedback from the instructors during\nthe exercise; they can usually see only a scoreboard showing the aggregated\ngain or loss of points for particular tasks. An in-depth analysis of learners'\nactions requires considerable human effort, which results in days or weeks of\ndelay. The intensive experience is thus not followed by proper feedback\nfacilitating actual learning, and this diminishes the effect of the exercise.\nIn this initial work, we investigate how to provide valuable feedback to\nlearners right after the exercise without any unnecessary delay. Based on the\nscoring system of a cyber defence exercise, we have developed a new feedback\ntool that presents an interactive, personalized timeline of exercise events. We\ndeployed this tool during an international exercise, where we monitored\nparticipants' interactions and gathered their reflections. The results show\nthat learners did use the new tool and rated it positively. Since this new\nfeature is not bound to a particular defence exercise, it can be applied to all\nexercises that employ scoring based on the evaluation of individual exercise\nobjectives. As a result, it enables the learner to immediately reflect on the\nexperience gained.\n",
"title": "Timely Feedback in Unstructured Cybersecurity Exercises"
}
| null | null | null | null | true | null |
16370
| null |
Default
| null | null |
null |
{
"abstract": " We develop a $^*$-continuous Kleene $\\omega$-algebra of real-time energy\nfunctions. Together with corresponding automata, these can be used to model\nsystems which can consume and regain energy (or other types of resources)\ndepending on available time. Using recent results on $^*$-continuous Kleene\n$\\omega$-algebras and computability of certain manipulations on real-time\nenergy functions, it follows that reachability and Büchi acceptance in\nreal-time energy automata can be decided in a static way which only involves\nmanipulations of real-time energy functions.\n",
"title": "An $ω$-Algebra for Real-Time Energy Problems"
}
| null | null | null | null | true | null |
16371
| null |
Default
| null | null |
null |
{
"abstract": " Recently, researchers proposed various low-precision gradient compression,\nfor efficient communication in large-scale distributed optimization. Based on\nthese work, we try to reduce the communication complexity from a new direction.\nWe pursue an ideal bijective mapping between two spaces of gradient\ndistribution, so that the mapped gradient carries greater information entropy\nafter the compression. In our setting, all servers should share a reference\ngradient in advance, and they communicate via the normalized gradients, which\nare the subtraction or quotient, between current gradients and the reference.\nTo obtain a reference vector that yields a stronger signal-to-noise ratio,\ndynamically in each iteration, we extract and fuse information from the past\ntrajectory in hindsight, and search for an optimal reference for compression.\nWe name this to be the trajectory-based normalized gradients (TNG). It bridges\nthe research from different societies, like coding, optimization, systems, and\nlearning. It is easy to implement and can universally combine with existing\nalgorithms. Our experiments on benchmarking hard non-convex functions, convex\nproblems like logistic regression demonstrate that TNG is more\ncompression-efficient for communication of distributed optimization of general\nfunctions.\n",
"title": "Trajectory Normalized Gradients for Distributed Optimization"
}
| null | null | null | null | true | null |
16372
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we examine the convergence of mirror descent in a class of\nstochastic optimization problems that are not necessarily convex (or even\nquasi-convex), and which we call variationally coherent. Since the standard\ntechnique of \"ergodic averaging\" offers no tangible benefits beyond convex\nprogramming, we focus directly on the algorithm's last generated sample (its\n\"last iterate\"), and we show that it converges with probabiility $1$ if the\nunderlying problem is coherent. We further consider a localized version of\nvariational coherence which ensures local convergence of stochastic mirror\ndescent (SMD) with high probability. These results contribute to the landscape\nof non-convex stochastic optimization by showing that (quasi-)convexity is not\nessential for convergence to a global minimum: rather, variational coherence, a\nmuch weaker requirement, suffices. Finally, building on the above, we reveal an\ninteresting insight regarding the convergence speed of SMD: in problems with\nsharp minima (such as generic linear programs or concave minimization\nproblems), SMD reaches a minimum point in a finite number of steps (a.s.), even\nin the presence of persistent gradient noise. This result is to be contrasted\nwith existing black-box convergence rate estimates that are only asymptotic.\n",
"title": "On the convergence of mirror descent beyond stochastic convex programming"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
16373
| null |
Validated
| null | null |
null |
{
"abstract": " -Multipath communications at the Internet scale have been a myth for a long\ntime, with no actual protocol being deployed so that multiple paths could be\ntaken by a same connection on the way towards an Internet destination.\nRecently, the Multipath Transport Control Protocol (MPTCP) extension was\nstandardized and is undergoing a quick adoption in many use-cases, from mobile\nto fixed access networks, from data-centers to core networks. Among its major\nbenefits -- i.e., reliability thanks to backup path rerouting; throughput\nincrease thanks to link aggregation; and confidentiality thanks to harder\ncapacity to intercept a full connection -- the latter has attracted lower\nattention. How interesting would it be using MPTCP to exploit multiple\nInternet-scale paths hence decreasing the probability of man-in-the-middle\n(MITM) attacks is a question to which we try to answer. By analyzing the\nAutonomous System (AS) level graph, we identify which countries and regions\nshow a higher level of robustness against MITM AS-level attacks, for example\ndue to core cable tapping or route hijacking practices.\n",
"title": "Can MPTCP Secure Internet Communications from Man-in-the-Middle Attacks?"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16374
| null |
Validated
| null | null |
null |
{
"abstract": " The present paper reports on our effort to characterize vortical interactions\nin complex fluid flows through the use of network analysis. In particular, we\nexamine the vortex interactions in two-dimensional decaying isotropic\nturbulence and find that the vortical interaction network can be characterized\nby a weighted scale-free network. It is found that the turbulent flow network\nretains its scale-free behavior until the characteristic value of circulation\nreaches a critical value. Furthermore, we show that the two-dimensional\nturbulence network is resilient against random perturbations but can be greatly\ninfluenced when forcing is focused towards the vortical structures that are\ncategorized as network hubs. These findings can serve as a network-analytic\nfoundation to examine complex geophysical and thin-film flows and take\nadvantage of the rapidly growing field of network theory, which complements\nongoing turbulence research based on vortex dynamics, hydrodynamic stability,\nand statistics. While additional work is essential to extend the mathematical\ntools from network analysis to extract deeper physical insights of turbulence,\nan understanding of turbulence based on the interaction-based network-theoretic\nframework presents a promising alternative in turbulence modeling and control\nefforts.\n",
"title": "Network Structure of Two-Dimensional Decaying Isotropic Turbulence"
}
| null | null | null | null | true | null |
16375
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we extend state of the art Model Predictive Control (MPC)\napproaches to generate safe bipedal walking on slippery surfaces. In this\nsetting, we formulate walking as a trade off between realizing a desired\nwalking velocity and preserving robust foot-ground contact. Exploiting this\nformulation inside MPC, we show that safe walking on various flat terrains can\nbe achieved by compromising three main attributes, i. e. walking velocity\ntracking, the Zero Moment Point (ZMP) modulation, and the Required Coefficient\nof Friction (RCoF) regulation. Simulation results show that increasing the\nwalking velocity increases the possibility of slippage, while reducing the\nslippage possibility conflicts with reducing the tip-over possibility of the\ncontact and vice versa.\n",
"title": "Pattern Generation for Walking on Slippery Terrains"
}
| null | null | null | null | true | null |
16376
| null |
Default
| null | null |
null |
{
"abstract": " We study the relationship between geometry and capacity measures for deep\nneural networks from an invariance viewpoint. We introduce a new notion of\ncapacity --- the Fisher-Rao norm --- that possesses desirable invariance\nproperties and is motivated by Information Geometry. We discover an analytical\ncharacterization of the new capacity measure, through which we establish\nnorm-comparison inequalities and further show that the new measure serves as an\numbrella for several existing norm-based complexity measures. We discuss upper\nbounds on the generalization error induced by the proposed measure. Extensive\nnumerical experiments on CIFAR-10 support our theoretical findings. Our\ntheoretical analysis rests on a key structural lemma about partial derivatives\nof multi-layer rectifier networks.\n",
"title": "Fisher-Rao Metric, Geometry, and Complexity of Neural Networks"
}
| null | null | null | null | true | null |
16377
| null |
Default
| null | null |
null |
{
"abstract": " We developed control and visualization programs, YUI and HANA, for High-\nResolution Chopper spectrometer (HRC) installed at BL12 in MLF, J-PARC. YUI is\na comprehensive program to control DAQ-middleware, the accessories, and sample\nenvironment devices. HANA is a program for the data transformation and\nvisualization of inelastic neutron scattering spectra. In this paper, we\ndescribe the basic system structures and unique functions of these programs\nfrom the viewpoint of users.\n",
"title": "YUI and HANA: Control and Visualization Programs for HRC in J-PARC"
}
| null | null | null | null | true | null |
16378
| null |
Default
| null | null |
null |
{
"abstract": " We identify four countable topological spaces $S_2$, $S_1$, $S_D$, and $S_0$\nwhich serve as canonical examples of topological spaces which fail to be\nquasi-Polish. These four spaces respectively correspond to the $T_2$, $T_1$,\n$T_D$, and $T_0$-separation axioms. $S_2$ is the space of rationals, $S_1$ is\nthe natural numbers with the cofinite topology, $S_D$ is an infinite chain\nwithout a top element, and $S_0$ is the set of finite sequences of natural\nnumbers with the lower topology induced by the prefix ordering. Our main result\nis a generalization of Hurewicz's theorem showing that a co-analytic subset of\na quasi-Polish space is either quasi-Polish or else contains a countable\n$\\Pi^0_2$-subset homeomorphic to one of these four spaces.\n",
"title": "A generalization of a theorem of Hurewicz for quasi-Polish spaces"
}
| null | null | null | null | true | null |
16379
| null |
Default
| null | null |
null |
{
"abstract": " Cryo-electron microscopy provides 2-D projection images of the 3-D electron\nscattering intensity of many instances of the particle under study (e.g., a\nvirus). Both symmetry (rotational point groups) and heterogeneity are important\naspects of biological particles and both aspects can be combined by describing\nthe electron scattering intensity of the particle as a stochastic process with\na symmetric probability law and therefore symmetric moments. A maximum\nlikelihood estimator implemented by an expectation-maximization algorithm is\ndescribed which estimates the unknown statistics of the electron scattering\nintensity stochastic process from images of instances of the particle. The\nalgorithm is demonstrated on the bacteriophage HK97 and the virus N$\\omega$V.\nThe results are contrasted with existing algorithms which assume that each\ninstance of the particle has the symmetry rather than the less restrictive\nassumption that the probability law has the symmetry.\n",
"title": "Reconstruction of stochastic 3-D signals with symmetric statistics from 2-D projection images motivated by cryo-electron microscopy"
}
| null | null | null | null | true | null |
16380
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we prove $L^q$-estimates for gradients of solutions to\nsingular quasilinear elliptic equations with measure data\n$$-\\operatorname{div}(A(x,\\nabla u))=\\mu,$$ in a bounded domain\n$\\Omega\\subset\\mathbb{R}^{N}$, where $A(x,\\nabla u)\\nabla u \\asymp |\\nabla\nu|^p$, $p\\in (1,2-\\frac{1}{n}]$ and $\\mu$ is a Radon measure in $\\Omega$\n",
"title": "Gradient estimates for singular quasilinear elliptic equations with measure data"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16381
| null |
Validated
| null | null |
null |
{
"abstract": " For various applications, the relations between the dependent and independent\nvariables are highly nonlinear. Consequently, for large scale complex problems,\nneural networks and regression trees are commonly preferred over linear models\nsuch as Lasso. This work proposes learning the feature nonlinearities by\nbinning feature values and finding the best fit in each quantile using\nnon-convex regularized linear regression. The algorithm first captures the\ndependence between neighboring quantiles by enforcing smoothness via\npiecewise-constant/linear approximation and then selects a sparse subset of\ngood features. We prove that the proposed algorithm is statistically and\ncomputationally efficient. In particular, it achieves linear rate of\nconvergence while requiring near-minimal number of samples. Evaluations on\nsynthetic and real datasets demonstrate that algorithm is competitive with\ncurrent state-of-the-art and accurately learns feature nonlinearities. Finally,\nwe explore an interesting connection between the binning stage of our algorithm\nand sparse Johnson-Lindenstrauss matrices.\n",
"title": "Learning Feature Nonlinearities with Non-Convex Regularized Binned Regression"
}
| null | null | null | null | true | null |
16382
| null |
Default
| null | null |
null |
{
"abstract": " Based on the median and the median absolute deviation estimators, and the\nHodges-Lehmann and Shamos estimators, robustified analogues of the conventional\n$t$-test statistic are proposed. The asymptotic distributions of these\nstatistics are recently provided. However, when the sample size is small, it is\nnot appropriate to use the asymptotic distribution of the robustified $t$-test\nstatistics for making a statistical inference including hypothesis testing,\nconfidence interval, p-value, etc.\nIn this article, through extensive Monte Carlo simulations, we obtain the\nempirical distributions of the robustified $t$-test statistics and their\nquantile values. Then these quantile values can be used for making a\nstatistical inference.\n",
"title": "Empirical distributions of the robustified $t$-test statistics"
}
| null | null | null | null | true | null |
16383
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we consider the cluster estimation problem under the Stochastic\nBlock Model. We show that the semidefinite programming (SDP) formulation for\nthis problem achieves an error rate that decays exponentially in the\nsignal-to-noise ratio. The error bound implies weak recovery in the sparse\ngraph regime with bounded expected degrees, as well as exact recovery in the\ndense regime. An immediate corollary of our results yields error bounds under\nthe Censored Block Model. Moreover, these error bounds are robust, continuing\nto hold under heterogeneous edge probabilities and a form of the so-called\nmonotone attack.\nSignificantly, this error rate is achieved by the SDP solution itself without\nany further pre- or post-processing, and improves upon existing\npolynomially-decaying error bounds proved using the Grothendieck\\textquoteright\ns inequality. Our analysis has two key ingredients: (i) showing that the graph\nhas a well-behaved spectrum, even in the sparse regime, after discounting an\nexponentially small number of edges, and (ii) an order-statistics argument that\ngoverns the final error rate. Both arguments highlight the implicit\nregularization effect of the SDP formulation.\n",
"title": "Exponential error rates of SDP for block models: Beyond Grothendieck's inequality"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
16384
| null |
Validated
| null | null |
null |
{
"abstract": " Representing data in hyperbolic space can effectively capture latent\nhierarchical relationships. With the goal of enabling accurate classification\nof points in hyperbolic space while respecting their hyperbolic geometry, we\nintroduce hyperbolic SVM, a hyperbolic formulation of support vector machine\nclassifiers, and elucidate through new theoretical work its connection to the\nEuclidean counterpart. We demonstrate the performance improvement of hyperbolic\nSVM for multi-class prediction tasks on real-world complex networks as well as\nsimulated datasets. Our work allows analytic pipelines that take the inherent\nhyperbolic geometry of the data into account in an end-to-end fashion without\nresorting to ill-fitting tools developed for Euclidean space.\n",
"title": "Large-Margin Classification in Hyperbolic Space"
}
| null | null | null | null | true | null |
16385
| null |
Default
| null | null |
null |
{
"abstract": " I investigate the nightly mean emission height and width of the OH*(3-1)\nlayer by comparing nightly mean temperatures measured by the ground-based\nspectrometer GRIPS 9 and the Na lidar at ALOMAR. The data set contains 42\ncoincident measurements between November 2010 and February 2014, when GRIPS 9\nwas in operation at the ALOMAR observatory (69.3$^\\circ$N, 16.0$^\\circ$E) in\nnorthern Norway. To closely resemble the mean temperature measured by GRIPS 9,\nI weight each nightly mean temperature profile measured by the lidar using\nGaussian distributions with 40 different centre altitudes and 40 different full\nwidths at half maximum. In principle, one can thus determine the altitude and\nwidth of an airglow layer by finding the minimum temperature difference between\nthe two instruments. On most nights, several combinations of centre altitude\nand width yield a temperature difference of $\\pm$2 K. The generally assumed\naltitude of 87 km and width of 8 km is never an unambiguous, good solution for\nany of the measurements. Even for a fixed width of $\\sim$8.4 km, one can\nsometimes find several centre altitudes that yield equally good temperature\nagreement. Weighted temperatures measured by lidar are not suitable to\ndetermine unambiguously the emission height and width of an airglow layer.\nHowever, when actual altitude and width data are lacking, a comparison with\nlidars can provide an estimate of how representative a measured rotational\ntemperature is of an assumed altitude and width. I found the rotational\ntemperature to represent the temperature at the commonly assumed altitude of\n87.4 km and width of 8.4 km to within $\\pm$16 K, on average. This is not a\nmeasurement uncertainty.\n",
"title": "The airglow layer emission altitude cannot be determined unambiguously from temperature comparison with lidars"
}
| null | null | null | null | true | null |
16386
| null |
Default
| null | null |
null |
{
"abstract": " The spinel/perovskite heterointerface $\\gamma$-Al$_2$O$_3$/SrTiO$_3$ hosts a\ntwo-dimensional electron system (2DES) with electron mobilities exceeding those\nin its all-perovskite counterpart LaAlO$_3$/SrTiO$_3$ by more than an order of\nmagnitude despite the abundance of oxygen vacancies which act as electron\ndonors as well as scattering sites. By means of resonant soft x-ray\nphotoemission spectroscopy and \\textit{ab initio} calculations we reveal the\npresence of a sharply localized type of oxygen vacancies at the very interface\ndue to the local breaking of the perovskite symmetry. We explain the\nextraordinarily high mobilities by reduced scattering resulting from the\npreferential formation of interfacial oxygen vacancies and spatial separation\nof the resulting 2DES in deeper SrTiO$_3$ layers. Our findings comply with\ntransport studies and pave the way towards defect engineering at interfaces of\noxides with different crystal structures.\n",
"title": "Microscopic origin of the mobility enhancement at a spinel/perovskite oxide heterointerface revealed by photoemission spectroscopy"
}
| null | null |
[
"Physics"
] | null | true | null |
16387
| null |
Validated
| null | null |
null |
{
"abstract": " We give a finite axiomatization for the variety generated by relational,\nintegral ordered monoids. As a corollary we get a finite axiomatization for the\nlanguage interpretation as well.\n",
"title": "Ordered Monoids: Languages and Relations"
}
| null | null | null | null | true | null |
16388
| null |
Default
| null | null |
null |
{
"abstract": " We show that for every $\\ell>1$, there is a counterexample to the\n$\\ell$-modular secrecy function conjecture by Oggier, Solé and Belfiore.\nThese counterexamples all satisfy the modified conjecture by Ernvall-Hytönen\nand Sethuraman. Furthermore, we provide a method to prove or disprove the\nmodified conjecture for any given $\\ell$-modular lattice rationally equivalent\nto a suitable amount of copies of $\\mathbb{Z}\\oplus \\sqrt{\\ell}\\,\\mathbb{Z}$\nwith $\\ell \\in \\{3,5,7,11,23\\}$. We also provide a variant of the method for\nstrongly $\\ell$-modular lattices when $\\ell\\in \\{6,14,15\\}$.\n",
"title": "On the secrecy gain of $\\ell$-modular lattices"
}
| null | null | null | null | true | null |
16389
| null |
Default
| null | null |
null |
{
"abstract": " Zeta functions for linear codes were defined by Iwan Duursma in 1999. They\nwere generalized to the case of some invariant polynomials by the preset\nauthor. One of the most important problems is whether extremal weight\nenumerators satisfy the Riemann hypothesis. In this article, we show there\nexist extremal polynomials of the weight enumerator type which are invariant\nunder the MacWilliams transform and do not satisfy the Riemann hypothesis.\n",
"title": "Extremal invariant polynomials not satisfying the Riemann hypothesis"
}
| null | null | null | null | true | null |
16390
| null |
Default
| null | null |
null |
{
"abstract": " Classical spectral analysis is based on the discrete Fourier transform of the\nauto-covariances. In this paper we investigate the asymptotic properties of new\nfrequency domain methods where the auto-covariances in the spectral density are\nreplaced by alternative dependence measures which can be estimated by\nU-statistics. An interesting example is given by Kendall{'}s $\\tau$ , for which\nthe limiting variance exhibits a surprising behavior.\n",
"title": "Fourier analysis of serial dependence measures"
}
| null | null | null | null | true | null |
16391
| null |
Default
| null | null |
null |
{
"abstract": " Modern corporations physically separate their sensitive computational\ninfrastructure from public or other accessible networks in order to prevent\ncyber-attacks. However, attackers still manage to infect these networks, either\nby means of an insider or by infiltrating the supply chain. Therefore, an\nattacker's main challenge is to determine a way to command and control the\ncompromised hosts that are isolated from an accessible network (e.g., the\nInternet).\nIn this paper, we propose a new adversarial model that shows how an air\ngapped network can receive communications over a covert thermal channel.\nConcretely, we show how attackers may use a compromised air-conditioning system\n(connected to the internet) to send commands to infected hosts within an\nair-gapped network. Since thermal communication protocols are a rather\nunexplored domain, we propose a novel line-encoding and protocol suitable for\nthis type of channel. Moreover, we provide experimental results to demonstrate\nthe covert channel's feasibility, and to calculate the channel's bandwidth.\nLastly, we offer a forensic analysis and propose various ways this channel can\nbe detected and prevented.\nWe believe that this study details a previously unseen vector of attack that\nsecurity experts should be aware of.\n",
"title": "HVACKer: Bridging the Air-Gap by Attacking the Air Conditioning System"
}
| null | null | null | null | true | null |
16392
| null |
Default
| null | null |
null |
{
"abstract": " We study the Koszul property of a standard graded $K$-algebra $R$ defined by\nthe binomial edge ideal of a pair of graphs $(G_1,G_2)$. We show that the\nfollowing statements are equivalent: (i) $R$ is Koszul; (ii) the defining ideal\n$J_{G_1,G_2}$ of $R$ has a quadratic Gröbner basis; (iii) the graded maximal\nideal of $R$ has linear quotients with respect to a suitable order of its\ngenerators\n",
"title": "Koszul binomial edge ideals of pairs of graphs"
}
| null | null | null | null | true | null |
16393
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks are currently among the most commonly used classifiers.\nDespite easily achieving very good performance, one of the best selling points\nof these models is their modular design - one can conveniently adapt their\narchitecture to specific needs, change connectivity patterns, attach\nspecialised layers, experiment with a large amount of activation functions,\nnormalisation schemes and many others. While one can find impressively wide\nspread of various configurations of almost every aspect of the deep nets, one\nelement is, in authors' opinion, underrepresented - while solving\nclassification problems, vast majority of papers and applications simply use\nlog loss. In this paper we try to investigate how particular choices of loss\nfunctions affect deep models and their learning dynamics, as well as resulting\nclassifiers robustness to various effects. We perform experiments on classical\ndatasets, as well as provide some additional, theoretical insights into the\nproblem. In particular we show that L1 and L2 losses are, quite surprisingly,\njustified classification objectives for deep nets, by providing probabilistic\ninterpretation in terms of expected misclassification. We also introduce two\nlosses which are not typically used as deep nets objectives and show that they\nare viable alternatives to the existing ones.\n",
"title": "On Loss Functions for Deep Neural Networks in Classification"
}
| null | null | null | null | true | null |
16394
| null |
Default
| null | null |
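As a rough illustration of the objectives compared in the abstract above, the NumPy sketch below evaluates log loss, L1, and L2 losses of softmax outputs against one-hot targets. The toy logits and labels are made up; this is not the authors' experimental code.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classification_losses(logits, labels, n_classes):
    """Return (log loss, L1 loss, L2 loss) averaged over the batch."""
    p = softmax(logits)
    y = np.eye(n_classes)[labels]  # one-hot targets
    log_loss = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    l1 = np.mean(np.abs(p - y).sum(axis=1))
    l2 = np.mean(((p - y) ** 2).sum(axis=1))
    return log_loss, l1, l2

# Toy batch (made-up numbers): 3 samples, 4 classes.
logits = np.array([[2.0, 0.5, -1.0, 0.0],
                   [0.1, 0.2, 3.0, -0.5],
                   [1.0, 1.1, 0.9, 1.2]])
labels = np.array([0, 2, 3])
print(classification_losses(logits, labels, n_classes=4))
```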
null |
{
"abstract": " Yes.\n",
"title": "Are theoretical results 'Results'?"
}
| null | null | null | null | true | null |
16395
| null |
Default
| null | null |
null |
{
"abstract": " End-to-end training from scratch of current deep architectures for new\ncomputer vision problems would require Imagenet-scale datasets, and this is not\nalways possible. In this paper we present a method that is able to take\nadvantage of freely available multi-modal content to train computer vision\nalgorithms without human supervision. We put forward the idea of performing\nself-supervised learning of visual features by mining a large scale corpus of\nmulti-modal (text and image) documents. We show that discriminative visual\nfeatures can be learnt efficiently by training a CNN to predict the semantic\ncontext in which a particular image is more probable to appear as an\nillustration. For this we leverage the hidden semantic structures discovered in\nthe text corpus with a well-known topic modeling technique. Our experiments\ndemonstrate state of the art performance in image classification, object\ndetection, and multi-modal retrieval compared to recent self-supervised or\nnatural-supervised approaches.\n",
"title": "Self-supervised learning of visual features through embedding images into text topic spaces"
}
| null | null |
[
"Computer Science"
] | null | true | null |
16396
| null |
Validated
| null | null |
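A hedged sketch of the self-supervision signal described in the abstract above: fit a topic model on the text side of an image-text corpus, then use each document's topic distribution as a soft target for a CNN applied to the paired image. The tiny placeholder corpus, the choice of LDA via scikit-learn, and the topic count are assumptions for illustration, not the paper's exact pipeline.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# documents[i] is the text paired with images[i] (tiny placeholder corpus).
documents = [
    "galaxy redshift survey telescope observation",
    "convolutional network image classification benchmark",
    "thermal channel covert communication encoding",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(documents)

# At paper scale one would use far more documents and topics.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
topic_targets = lda.fit_transform(counts)  # shape (n_docs, 5); rows sum to ~1

# topic_targets[i] would then act as a soft label for a CNN fed images[i],
# e.g. with a cross-entropy/KL objective between the CNN's softmax output
# and this topic distribution (the CNN training loop is omitted here).
print(topic_targets.shape)
```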
null |
{
"abstract": " Most optical and IR spectra are now acquired using detectors with\nfinite-width pixels in a square array. This paper examines the effects of such\npixellation, using computed simulations to illustrate the effects which most\nconcern the astronomer end-user. Coarse sampling increases the random noise\nerrors in wavelength by typically 10 - 20% at 2 pixels/FWHM, but with wide\nvariation depending on the functional form of the instrumental Line Spread\nFunction (LSF) and on the pixel phase. Line widths are even more strongly\naffected at low sampling frequencies. However, the noise in fitted peak\namplitudes is minimally affected. Pixellation has a substantial but complex\neffect on the ability to see a relative minimum between two closely-spaced\npeaks (or relative maximum between two absorption lines). The consistent scale\nof resolving power presented by Robertson (2013) is extended to cover\npixellated spectra. The systematic bias errors in wavelength introduced by\npixellation are examined. While they may be negligible for smooth well-sampled\nsymmetric LSFs, they are very sensitive to asymmetry and high spatial frequency\nsubstructure. The Modulation Transfer Function for sampled data is shown to\ngive a useful indication of the extent of improperly sampled signal in an LSF.\nThe common maxim that 2 pixels/FWHM is the Nyquist limit is incorrect and most\nLSFs will exhibit some aliasing at this sample frequency. While 2 pixels/FWHM\nis often an acceptable minimum for moderate signal/noise work, it is preferable\nto carry out simulations for any actual or proposed LSF to find the effects of\nsampling frequency. Where end-users have a choice of sampling frequencies,\nthrough on-chip binning and/or spectrograph configurations, the instrument user\nmanual should include an examination of their effects. (Abridged)\n",
"title": "Detector sampling of optical/IR spectra: how many pixels per FWHM?"
}
| null | null |
[
"Physics"
] | null | true | null |
16397
| null |
Validated
| null | null |
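A minimal NumPy sketch related to the abstract above: it pixel-integrates a Gaussian LSF at several sampling rates and reports a moment-based line width, showing how coarse sampling broadens the measured width. The Gaussian shape and the simple second-moment width estimate are illustrative assumptions, not the paper's fitting procedure.

```python
import numpy as np
from scipy.special import erf

def pixelated_lsf(pixel_centers, pixel_width, line_center, sigma):
    """Flux collected by finite-width pixels for a unit-area Gaussian LSF."""
    lo = pixel_centers - pixel_width / 2.0
    hi = pixel_centers + pixel_width / 2.0
    s = np.sqrt(2.0) * sigma
    return 0.5 * (erf((hi - line_center) / s) - erf((lo - line_center) / s))

fwhm = 1.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

for pixels_per_fwhm in (2.0, 4.0, 8.0):
    pix = fwhm / pixels_per_fwhm
    centers = np.arange(-20, 21) * pix
    flux = pixelated_lsf(centers, pix, 0.0, sigma)
    centroid = np.sum(centers * flux) / np.sum(flux)
    var = np.sum((centers - centroid) ** 2 * flux) / np.sum(flux)
    fwhm_measured = 2.0 * np.sqrt(2.0 * np.log(2.0)) * np.sqrt(var)
    print(f"{pixels_per_fwhm:.0f} px/FWHM: measured FWHM = {fwhm_measured:.3f} "
          f"(true {fwhm:.3f})")
```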
null |
{
"abstract": " Most of the recent successful methods in accurate object detection and\nlocalization used some variants of R-CNN style two stage Convolutional Neural\nNetworks (CNN) where plausible regions were proposed in the first stage then\nfollowed by a second stage for decision refinement. Despite the simplicity of\ntraining and the efficiency in deployment, the single stage detection methods\nhave not been as competitive when evaluated in benchmarks consider mAP for high\nIoU thresholds. In this paper, we proposed a novel single stage end-to-end\ntrainable object detection network to overcome this limitation. We achieved\nthis by introducing Recurrent Rolling Convolution (RRC) architecture over\nmulti-scale feature maps to construct object classifiers and bounding box\nregressors which are \"deep in context\". We evaluated our method in the\nchallenging KITTI dataset which measures methods under IoU threshold of 0.7. We\nshowed that with RRC, a single reduced VGG-16 based model already significantly\noutperformed all the previously published results. At the time this paper was\nwritten our models ranked the first in KITTI car detection (the hard level),\nthe first in cyclist detection and the second in pedestrian detection. These\nresults were not reached by the previous single stage methods. The code is\npublicly available.\n",
"title": "Accurate Single Stage Detector Using Recurrent Rolling Convolution"
}
| null | null | null | null | true | null |
16398
| null |
Default
| null | null |
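The abstract above describes Recurrent Rolling Convolution only at a high level; the PyTorch sketch below is one rough interpretation of a single "rolling" step in which each scale absorbs context from its finer and coarser neighbours, with the same weights reused across iterations. The channel counts, pooling and upsampling choices, and module names are assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RollingStep(nn.Module):
    """One assumed 'rolling' update: every scale absorbs context from its
    finer and coarser neighbours before the next recurrent iteration."""

    def __init__(self, channels):
        super().__init__()
        self.fuse = nn.ModuleList(
            [nn.Conv2d(3 * channels, channels, kernel_size=1) for _ in range(3)]
        )

    def forward(self, feats):
        # feats: list of 3 feature maps, ordered fine -> coarse
        out = []
        for i, f in enumerate(feats):
            finer = feats[i - 1] if i > 0 else f
            coarser = feats[i + 1] if i < len(feats) - 1 else f
            finer = F.adaptive_avg_pool2d(finer, f.shape[-2:])                    # downsample
            coarser = F.interpolate(coarser, size=f.shape[-2:], mode="nearest")   # upsample
            out.append(F.relu(self.fuse[i](torch.cat([finer, f, coarser], dim=1))))
        return out

# Toy usage (made-up sizes): three scales with 64 channels each.
feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16)]
step = RollingStep(64)
for _ in range(3):  # a few recurrent iterations share the same weights
    feats = step(feats)
```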
null |
{
"abstract": " We study the $m$-th Gauss map in the sense of F.~L.~Zak of a projective\nvariety $X \\subset \\mathbb{P}^N$ over an algebraically closed field in any\ncharacteristic. For all integer $m$ with $n:=\\dim(X) \\leq m < N$, we show that\nthe contact locus on $X$ of a general tangent $m$-plane is a linear variety if\nthe $m$-th Gauss map is separable. We also show that for smooth $X$ with $n <\nN-2$, the $(n+1)$-th Gauss map is birational if it is separable, unless $X$ is\nthe Segre embedding $\\mathbb{P}^1 \\times \\mathbb{P}^n \\subset\n\\mathbb{P}^{2n-1}$. This is related to L. Ein's classification of varieties\nwith small dual varieties in characteristic zero.\n",
"title": "On separable higher Gauss maps"
}
| null | null | null | null | true | null |
16399
| null |
Default
| null | null |
null |
{
"abstract": " If $\\mathcal{G}$ is the group (under composition) of diffeomorphisms $f :\n{\\bar{D}}(0;1) \\rightarrow {\\bar{D}}(0;1)$ of the closed unit disc\n${\\bar{D}}(0;1)$ which are the identity map $id : {\\bar{D}}(0;1) \\rightarrow\n{\\bar{D}}(0;1)$ on the closed unit circle and satisfy the condition $det(J(f))\n> 0$, where $J(f)$ is the Jacobian matrix of $f$ or (equivalently) the\nFréchet derivative of $f$, then $\\mathcal{G}$ equipped with the metric\n$d_{\\mathcal{G}}(f,g) = \\Vert f-g \\Vert_{\\infty } + \\Vert J(f) - J(g)\n\\Vert_{\\infty }$, where $f$, $g$ range over $\\mathcal{G}$, is a metric space in\nwhich $d_{\\mathcal{G}} \\left( f_{t} , id \\right) \\rightarrow 0$ as $t\n\\rightarrow 1^{+}$, where $f_{t}(z) = \\frac{ tz }{ 1 + (t-1) \\vert z \\vert }$,\nwhenever $z \\in {\\bar{D}}(0;1)$ and $t \\geq 1$.\n",
"title": "Diffeomorphisms of the closed unit disc converging to the identity"
}
| null | null |
[
"Mathematics"
] | null | true | null |
16400
| null |
Validated
| null | null |
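The convergence claimed above for the first term of $d_{\mathcal{G}}(f_t, id)$ follows from a short estimate; a sketch is below (the Jacobian term $\Vert J(f_t) - J(id) \Vert_{\infty}$ needs a similar but longer computation, which is omitted):

```latex
\[
f_t(z) - z
  = \frac{tz - z\bigl(1 + (t-1)\lvert z\rvert\bigr)}{1 + (t-1)\lvert z\rvert}
  = \frac{(t-1)\, z\,\bigl(1 - \lvert z\rvert\bigr)}{1 + (t-1)\lvert z\rvert},
\]
so for $t \geq 1$ and $\lvert z\rvert \leq 1$ the denominator is at least $1$ and
\[
\lvert f_t(z) - z\rvert
  \;\leq\; (t-1)\,\lvert z\rvert\,\bigl(1 - \lvert z\rvert\bigr)
  \;\leq\; \frac{t-1}{4}
  \;\longrightarrow\; 0
  \quad \text{as } t \to 1^{+}.
\]
```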