Dataset features (field : type):

    text              : null
    inputs            : dict ({"abstract", "title"})
    prediction        : null
    prediction_agent  : null
    annotation        : list
    annotation_agent  : null
    multi_label       : bool (1 class)
    explanation       : null
    id                : string (lengths 1-5)
    metadata          : null
    status            : string (2 classes: "Default", "Validated")
    event_timestamp   : null
    metrics           : null

Each record below is shown as its "inputs" JSON object followed by a line with
the record's remaining non-null fields; fields typed "null" above are null in
every record.
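Before the records, a minimal sketch of how one preview row maps onto a structured record: the `inputs` cell is parsed as JSON and the non-null fields attached alongside it. This is an illustrative standalone snippet, not part of the dataset; field names follow the schema above, and the sample values are taken from record 15101 below (abstract truncated here for brevity).

```python
import json

# Sketch: turn one preview row into a structured record.
# The "inputs" cell is a JSON object with "abstract" and "title" keys;
# the other field names and sample values follow record 15101.
inputs_cell = (
    '{"abstract": "In relativistic quantum field theories, compact objects '
    'of interacting bosons can become stable ...", '
    '"title": "Propagation of self-localised Q-ball solitons '
    'in the $^3$He universe"}'
)

record = {
    "inputs": json.loads(inputs_cell),
    "annotation": None,   # non-null only for "Validated" records
    "multi_label": True,
    "id": "15101",
    "status": "Default",
}

print(record["id"], "->", record["inputs"]["title"])
```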
{ "abstract": " In relativistic quantum field theories, compact objects of interacting bosons\ncan become stable owing to conservation of an additive quantum number $Q$.\nDiscovering such $Q$-balls propagating in the Universe would confirm\nsupersymmetric extensions of the standard model and may shed light on the\nmysteries of dark matter, but no unambiguous experimental evidence exists. We\nreport observation of a propagating long-lived $Q$-ball in superfluid $^3$He,\nwhere the role of $Q$-ball is played by a Bose-Einstein condensate of magnon\nquasiparticles. We achieve accurate representation of the $Q$-ball Hamiltonian\nusing the influence of the number of magnons, corresponding to the charge $Q$,\non the orbital structure of the superfluid $^3$He order parameter. This\nrealisation supports multiple coexisting $Q$-balls which in future allows\nstudies of $Q$-ball dynamics, interactions, and collisions.\n", "title": "Propagation of self-localised Q-ball solitons in the $^3$He universe" }
annotation: null | multi_label: true | id: 15101 | status: Default

{ "abstract": " In this work, we make two improvements on the staggered grid hydrodynamics\n(SGH) Lagrangian scheme for modeling 2-dimensional compressible multi-material\nflows on triangular mesh. The first improvement is the construction of a\ndynamic local remeshing scheme for preventing mesh distortion. The remeshing\nscheme is similar to many published algorithms except that it introduces some\nspecial operations for treating grids around multi-material interfaces. This\nmakes the simulation of extremely deforming and topology-variable\nmulti-material processes possible, such as the complete process of a heavy\nfluid dipping into a light fluid. The second improvement is the construction of\nan Euler-like flow on each edge of the mesh to count for the \"edge-bending\"\neffect, so as to mitigate the \"checkerboard\" oscillation that commonly exists\nin Lagrangian simulations, especially the triangular mesh based simulations.\nSeveral typical hydrodynamic problems are simulated by the improved staggered\ngrid Lagrangian hydrodynamic method to test its performance.\n", "title": "Improving the staggered grid Lagrangian hydrodynamics for modeling multi-material flows" }
annotation: null | multi_label: true | id: 15102 | status: Default

{ "abstract": " Vehicle climate control systems aim to keep passengers thermally comfortable.\nHowever, current systems control temperature rather than thermal comfort and\ntend to be energy hungry, which is of particular concern when considering\nelectric vehicles. This paper poses energy-efficient vehicle comfort control as\na Markov Decision Process, which is then solved numerically using\nSarsa({\\lambda}) and an empirically validated, single-zone, 1D thermal model of\nthe cabin. The resulting controller was tested in simulation using 200 randomly\nselected scenarios and found to exceed the performance of bang-bang,\nproportional, simple fuzzy logic, and commercial controllers with 23%, 43%,\n40%, 56% increase, respectively. Compared to the next best performing\ncontroller, energy consumption is reduced by 13% while the proportion of time\nspent thermally comfortable is increased by 23%. These results indicate that\nthis is a viable approach that promises to translate into substantial comfort\nand energy improvements in the car.\n", "title": "Reinforcement Learning-based Thermal Comfort Control for Vehicle Cabins" }
annotation: null | multi_label: true | id: 15103 | status: Default

{ "abstract": " The truncated Fourier operator $\\mathscr{F}_{\\mathbb{R^{+}}}$, $$\n(\\mathscr{F}_{\\mathbb{R^{+}}}x)(t)=\\frac{1}{\\sqrt{2\\pi}}\n\\int\\limits_{\\mathbb{R^{+}}}x(\\xi)e^{it\\xi}\\,d\\xi\\,,\\ \\ \\\nt\\in{}{\\mathbb{R^{+}}}, $$ is studied. The operator\n$\\mathscr{F}_{\\mathbb{R^{+}}}$ is considered as an operator acting in the space\n$L^2(\\mathbb{R^{+}})$. The functional model for the operator\n$\\mathscr{F}_{\\mathbb{R^{+}}}$ is constructed. This functional model is the\nmultiplication operator on the appropriate $2\\times2$ matrix function acting in\nthe space $L^2(\\mathbb{R^{+}})\\oplus{}L^2(\\mathbb{R^{+}})$. Using this\nfunctional model, the spectrum of the operator $\\mathscr{F}_{\\mathbb{R^{+}}}$\nis found. The resolvent of the operator $\\mathscr{F}_{\\mathbb{R^{+}}}$ is\nestimated near its spectrum.\n", "title": "A functional model for the Fourier--Plancherel operator truncated on the positive half-axis" }
annotation: ["Mathematics"] | multi_label: true | id: 15104 | status: Validated

{ "abstract": " We prove that the regular $n\\times n$ square grid of points in the integer\nlattice $\\mathbb{Z}^{2}$ cannot be recovered from an arbitrary $n^{2}$-element\nsubset of $\\mathbb{Z}^{2}$ via a mapping with prescribed Lipschitz constant\n(independent of $n$). This answers negatively a question of Feige from 2002.\nOur resolution of Feige's question takes place largely in a continuous setting\nand is based on some new results for Lipschitz mappings falling into two broad\nareas of interest, which we study independently. Firstly the present work\ncontains a detailed investigation of Lipschitz regular mappings on Euclidean\nspaces, with emphasis on their bilipschitz decomposability in a sense\ncomparable to that of the well known result of Jones. Secondly, we build on\nwork of Burago and Kleiner and McMullen on non-realisable densities. We verify\nthe existence, and further prevalence, of strongly non-realisable densities\ninside spaces of continuous functions.\n", "title": "Mapping $n$ grid points onto a square forces an arbitrarily large Lipschitz constant" }
annotation: null | multi_label: true | id: 15105 | status: Default

{ "abstract": " Given a list of k source-sink pairs in an edge-weighted graph G, the minimum\nmulticut problem consists in selecting a set of edges of minimum total weight\nin G, such that removing these edges leaves no path from each source to its\ncorresponding sink. To the best of our knowledge, no non-trivial FPT result for\nspecial cases of this problem, which is APX-hard in general graphs for any\nfixed k>2, is known with respect to k only. When the graph G is planar, this\nproblem is known to be polynomial-time solvable if k=O(1), but cannot be FPT\nwith respect to k under the Exponential Time Hypothesis.\nIn this paper, we show that, if G is planar and in addition all sources and\nsinks lie on the outer face, then this problem does admit an FPT algorithm when\nparameterized by k (although it remains APX-hard when k is part of the input,\neven in stars). To do this, we provide a new characterization of optimal\nsolutions in this case, and then use it to design a \"divide-and-conquer\"\napproach: namely, some edges that are part of any such solution actually define\nan optimal solution for a polynomial-time solvable multiterminal variant of the\nproblem on some of the sources and sinks (which can be identified thanks to a\nreduced enumeration phase). Removing these edges from the graph cuts it into\nseveral smaller instances, which can then be solved recursively.\n", "title": "An FPT algorithm for planar multicuts with sources and sinks on the outer face" }
annotation: null | multi_label: true | id: 15106 | status: Default

{ "abstract": " Label shift refers to the phenomenon where the marginal probability p(y) of\nobserving a particular class changes between the training and test\ndistributions while the conditional probability p(x|y) stays fixed. This is\nrelevant in settings such as medical diagnosis, where a classifier trained to\npredict disease based on observed symptoms may need to be adapted to a\ndifferent distribution where the baseline frequency of the disease is higher.\nGiven calibrated estimates of p(y|x), one can apply an EM algorithm to correct\nfor the shift in class imbalance between the training and test distributions\nwithout ever needing to calculate p(x|y). Unfortunately, modern neural networks\ntypically fail to produce well-calibrated probabilities, compromising the\neffectiveness of this approach. Although Temperature Scaling can greatly reduce\nmiscalibration in these networks, it can leave behind a systematic bias in the\nprobabilities that still poses a problem. To address this, we extend\nTemperature Scaling with class-specific bias parameters, which largely\neliminates systematic bias in the calibrated probabilities and allows for\neffective domain adaptation under label shift. We term our calibration approach\n\"Bias-Corrected Temperature Scaling\". On experiments with CIFAR10, we find that\nEM with Bias-Corrected Temperature Scaling significantly outperforms both EM\nwith Temperature Scaling and the recently-proposed Black-Box Shift Estimation.\n", "title": "Calibration with Bias-Corrected Temperature Scaling Improves Domain Adaptation Under Label Shift in Modern Neural Networks" }
annotation: null | multi_label: true | id: 15107 | status: Default

{ "abstract": " There has been an increasing interest in learning dynamics simulators for\nmodel-based control. Compared with off-the-shelf physics engines, a learnable\nsimulator can quickly adapt to unseen objects, scenes, and tasks. However,\nexisting models like interaction networks only work for fully observable\nsystems; they also only consider pairwise interactions within a single time\nstep, both restricting their use in practical systems. We introduce Propagation\nNetworks (PropNet), a differentiable, learnable dynamics model that handles\npartially observable scenarios and enables instantaneous propagation of signals\nbeyond pairwise interactions. With these innovations, our propagation networks\nnot only outperform current learnable physics engines in forward simulation,\nbut also achieves superior performance on various control tasks. Compared with\nexisting deep reinforcement learning algorithms, model-based control with\npropagation networks is more accurate, efficient, and generalizable to novel,\npartially observable scenes and tasks.\n", "title": "Propagation Networks for Model-Based Control Under Partial Observation" }
annotation: null | multi_label: true | id: 15108 | status: Default

{ "abstract": " The first billion years of the Universe is a pivotal time: stars, black holes\n(BHs) and galaxies form and assemble, sowing the seeds of galaxies as we know\nthem today. Detecting, identifying and understanding the first galaxies and\nBHs is one of the current observational and theoretical challenges in galaxy\nformation. In this paper we present a population synthesis model aimed at\ngalaxies, BHs and Active Galactic Nuclei (AGNs) at high redshift. The model\nbuilds a population based on empirical relations. Galaxies are characterized by\na spectral energy distribution determined by age and metallicity, and AGNs by a\nspectral energy distribution determined by BH mass and accretion rate. We\nvalidate the model against observational constraints, and then predict\nproperties of galaxies and AGN in other wavelength and/or luminosity ranges,\nestimating the contamination of stellar populations (normal stars and high-mass\nX-ray binaries) for AGN searches from the infrared to X-rays, and vice-versa\nfor galaxy searches. For high-redshift galaxies, with stellar ages < 1 Gyr, we\nfind that disentangling stellar and AGN emission is challenging at restframe\nUV/optical wavelengths, while high-mass X-ray binaries become more important\nsources of confusion in X-rays. We propose a color-color selection in JWST\nbands to separate AGN vs star-dominated galaxies in photometric observations.\nWe also estimate the AGN contribution, with respect to massive, hot,\nmetal-poor stars, at driving high ionization lines, such as C IV and He II.\nFinally, we test the influence of the minimum BH mass and occupation fraction\nof BHs in low mass galaxies on the restframe UV/near-IR and X-ray AGN\nluminosity function.\n", "title": "High-redshift galaxies and black holes in the eyes of JWST: a population synthesis model from infrared to X-rays" }
annotation: null | multi_label: true | id: 15109 | status: Default

{ "abstract": " Often when multiple labels are obtained for a training example it is assumed\nthat there is an element of noise that must be accounted for. It has been shown\nthat this disagreement can be considered signal instead of noise. In this work\nwe investigate using soft labels for training data to improve generalization in\nmachine learning models. However, using soft labels for training Deep Neural\nNetworks (DNNs) is not practical due to the costs involved in obtaining\nmultiple labels for large data sets. We propose soft label\nmemorization-generalization (SLMG), a fine-tuning approach to using soft labels\nfor training DNNs. We assume that differences in labels provided by human\nannotators represent ambiguity about the true label instead of noise.\nExperiments with SLMG demonstrate improved generalization performance on the\nNatural Language Inference (NLI) task. Our experiments show that by injecting a\nsmall percentage of soft label training data (0.03% of training set size) we\ncan improve generalization performance over several baselines.\n", "title": "Soft Label Memorization-Generalization for Natural Language Inference" }
annotation: null | multi_label: true | id: 15110 | status: Default

{ "abstract": " We introduce the concept of Floquet topological magnons --- a mechanism by\nwhich a synthetic tunable Dzyaloshinskii-Moriya interaction (DMI) can be\ngenerated in quantum magnets using circularly polarized electric (laser) field.\nThe resulting effect is that Dirac magnons and nodal magnons in two-dimensional\n(2D) and three-dimensional (3D) quantum magnets can be tuned to magnon Chern\ninsulators and Weyl magnons respectively under circularly polarized laser\nfield. The Floquet formalism also yields a tunable intrinsic DMI in insulating\nquantum magnets without an inversion center. We demonstrate that the Floquet\ntopological magnons possess a finite thermal Hall conductivity tunable by the\nlaser field.\n", "title": "Floquet Topological Magnons" }
annotation: null | multi_label: true | id: 15111 | status: Default

{ "abstract": " Inspired by recent interests of developing machine learning and data mining\nalgorithms on hypergraphs, we investigate in this paper the semi-supervised\nlearning algorithm of propagating \"soft labels\" (e.g. probability\ndistributions, class membership scores) over hypergraphs, by means of optimal\ntransportation. Borrowing insights from Wasserstein propagation on graphs\n[Solomon et al. 2014], we re-formulate the label propagation procedure as a\nmessage-passing algorithm, which renders itself naturally to a generalization\napplicable to hypergraphs through Wasserstein barycenters. Furthermore, in a\nPAC learning framework, we provide generalization error bounds for propagating\none-dimensional distributions on graphs and hypergraphs using 2-Wasserstein\ndistance, by establishing the \\textit{algorithmic stability} of the proposed\nsemi-supervised learning algorithm. These theoretical results also shed new\nlights upon deeper understandings of the Wasserstein propagation on graphs.\n", "title": "Wasserstein Soft Label Propagation on Hypergraphs: Algorithm and Generalization Error Bounds" }
annotation: ["Statistics"] | multi_label: true | id: 15112 | status: Validated

{ "abstract": " Global recruitment into radical Islamic movements has spurred renewed\ninterest in the appeal of political extremism. Is the appeal a rational\nresponse to material conditions or is it the expression of psychological and\npersonality disorders associated with aggressive behavior, intolerance,\nconspiratorial imagination, and paranoia? Empirical answers using surveys have\nbeen limited by lack of access to extremist groups, while field studies have\nlacked psychological measures and failed to compare extremists with contrast\ngroups. We revisit the debate over the appeal of extremism in the U.S. context\nby comparing publicly available Twitter messages written by over 355,000\npolitical extremist followers with messages written by non-extremist U.S.\nusers. Analysis of text-based psychological indicators supports the moral\nfoundation theory which identifies emotion as a critical factor in determining\npolitical orientation of individuals. Extremist followers also differ from\nothers in four of the Big Five personality traits.\n", "title": "Psychological and Personality Profiles of Political Extremists" }
annotation: null | multi_label: true | id: 15113 | status: Default

{ "abstract": " Interpretable machine learning tackles the important problem that humans\ncannot understand the behaviors of complex machine learning models and how\nthese models arrive at a particular decision. Although many approaches have\nbeen proposed, a comprehensive understanding of the achievements and challenges\nis still lacking. We provide a survey covering existing techniques to increase\nthe interpretability of machine learning models. We also discuss crucial issues\nthat the community should consider in future work such as designing\nuser-friendly explanations and developing comprehensive evaluation metrics to\nfurther push forward the area of interpretable machine learning.\n", "title": "Techniques for Interpretable Machine Learning" }
annotation: null | multi_label: true | id: 15114 | status: Default

{ "abstract": " This doctoral work focuses on three main problems related to social networks:\n(1) Orchestrating Network Formation: We consider the problem of orchestrating\nformation of a social network having a certain given topology that may be\ndesirable for the intended usecases. Assuming the social network nodes to be\nstrategic in forming relationships, we derive conditions under which a given\ntopology can be uniquely obtained. We also study the efficiency and robustness\nof the derived conditions. (2) Multi-phase Influence Maximization: We propose\nthat information diffusion be carried out in multiple phases rather than in a\nsingle instalment. With the objective of achieving better diffusion, we\ndiscover optimal ways of splitting the available budget among the phases,\ndetermining the time delay between consecutive phases, and also finding the\nindividuals to be targeted for initiating the diffusion process. (3) Scalable\nPreference Aggregation: It is extremely useful to determine a small number of\nrepresentatives of a social network such that the individual preferences of\nthese nodes, when aggregated, reflect the aggregate preference of the entire\nnetwork. Using real-world data collected from Facebook with human subjects, we\ndiscover a model that faithfully captures the spread of preferences in a social\nnetwork. We hence propose fast and reliable ways of computing a truly\nrepresentative aggregate preference of the entire network. In particular, we\ndevelop models and methods for solving the above problems, which primarily deal\nwith formation and analysis of social networks.\n", "title": "New Models and Methods for Formation and Analysis of Social Networks" }
annotation: ["Computer Science", "Physics"] | multi_label: true | id: 15115 | status: Validated

{ "abstract": " Many complex systems can be represented as networks, and the problem of\nnetwork comparison is becoming increasingly relevant. There are many techniques\nfor network comparison, from simply comparing network summary statistics to\nsophisticated but computationally costly alignment-based approaches. Yet it\nremains challenging to accurately cluster networks that are of a different size\nand density, but hypothesized to be structurally similar. In this paper, we\naddress this problem by introducing a new network comparison methodology that\nis aimed at identifying common organizational principles in networks. The\nmethodology is simple, intuitive and applicable in a wide variety of settings\nranging from the functional classification of proteins to tracking the\nevolution of a world trade network.\n", "title": "Identifying networks with common organizational principles" }
annotation: null | multi_label: true | id: 15116 | status: Default

{ "abstract": " A new variation of blockchain proof of work algorithm is proposed to\nincentivize the timely execution of image processing algorithms. A sample image\nprocessing algorithm is proposed to determine interesting images using analysis\nof the entropy of pixel subsets within images. The efficacy of the image\nprocessing algorithm is examined using two small sets of training and test\ndata. The interesting image algorithm is then integrated into a simplified\nblockchain mining proof of work algorithm based on Bitcoin. The incentive of\ncryptocurrency mining is theorized to incentivize the execution of the\nalgorithm and thus the retrieval of images that satisfy a minimum requirement\nset forth by the interesting image algorithm. The digital storage implications\nof running an image-based blockchain are then examined mathematically.\n", "title": "Image-based Proof of Work Algorithm for the Incentivization of Blockchain Archival of Interesting Images" }
annotation: null | multi_label: true | id: 15117 | status: Default

{ "abstract": " An extensive, precise and robust recognition and modeling of the environment\nis a key factor for next generations of Advanced Driver Assistance Systems and\ndevelopment of autonomous vehicles. In this paper, a real-time approach for the\nperception of multiple lanes on highways is proposed. Lane markings detected by\ncamera systems and observations of other traffic participants provide the input\ndata for the algorithm. The information is accumulated and fused using\nGraphSLAM and the result constitutes the basis for a multilane clothoid model.\nTo allow incorporation of additional information sources, input data is\nprocessed in a generic format. Evaluation of the method is performed by\ncomparing real data, collected with an experimental vehicle on highways, to a\nground truth map. The results show that ego and adjacent lanes are robustly\ndetected with high quality up to a distance of 120 m. In comparison to serial\nlane detection, an increase in the detection range of the ego lane and a\ncontinuous perception of neighboring lanes is achieved. The method can\npotentially be utilized for the longitudinal and lateral control of\nself-driving vehicles.\n", "title": "Multi-Lane Perception Using Feature Fusion Based on GraphSLAM" }
annotation: null | multi_label: true | id: 15118 | status: Default

{ "abstract": " Linear Parameter-Varying (LPV) systems with jumps and piecewise\ndifferentiable parameters is a class of hybrid LPV systems for which no\ntailored stability analysis and stabilization conditions have been obtained so\nfar. We fill this gap here by proposing an approach relying on the\nreformulation of the considered LPV system as an extended equivalent hybrid\nsystem that will incorporate, through a suitable state augmentation,\ninformation on both the dynamics of the state of the system and the considered\nclass of parameter trajectories. Two stability conditions are established using\na result pertaining on the stability of hybrid systems and shown to naturally\ngeneralize and unify the well-known quadratic and robust stability criteria\ntogether. The obtained conditions being infinite-dimensional semidefinite\nprogramming problems, a relaxation approach based on sum of squares programming\nis used in order to obtain tractable finite-dimensional conditions. The\nconditions are then losslessly extended to solve two control problems, namely,\nthe stabilization by continuous and sampled-data gain-scheduled state-feedback\ncontrollers. The approach is finally illustrated on several examples from the\nliterature.\n", "title": "Stability analysis and stabilization of LPV systems with jumps and (piecewise) differentiable parameters using continuous and sampled-data controllers" }
annotation: null | multi_label: true | id: 15119 | status: Default

{ "abstract": " We consider the problem of identity testing and recovering (that is,\ninterpolating) of a \"hidden\" monic polynomials $f$, given an oracle access to\n$f(x)^e$ for $x\\in\\mathbb F_q$, where $\\mathbb F_q$ is the finite field of $q$\nelements and an extension fields access is not permitted.\nThe naive interpolation algorithm needs $de+1$ queries, where $d =\\max\\{{\\rm\ndeg}\\ f, {\\rm deg }\\ g\\}$ and thus requires $ de<q$. For a prime $q = p$, we\ndesign an algorithm that is asymptotically better in certain cases, especially\nwhen $d$ is large. The algorithm is based on a result of independent interest\nin spirit of additive combinatorics. It gives an upper bound on the number of\nvalues of a rational function of large degree, evaluated on a short sequence of\nconsecutive integers, that belong to a small subgroup of $\\mathbb F_p^*$.\n", "title": "Identity Testing and Interpolation from High Powers of Polynomials of Large Degree over Finite Fields" }
annotation: null | multi_label: true | id: 15120 | status: Default

{ "abstract": " We present a thorough tight-binding analysis of the band structure of a wide\nvariety of lattices belonging to the class of honeycomb and Kagome systems\nincluding several mixed forms combining both lattices. The band structure of\nthese systems are made of a combination of dispersive and flat bands. The\ndispersive bands possess Dirac cones (linear dispersion) at the six corners (K\npoints) of the Brillouin zone although in peculiar cases Dirac cones at the\ncenter of the zone $(\\Gamma$ point) appear. The flat bands can be of different\nnature. Most of them are tangent to the dispersive bands at the center of the\nzone but some, for symmetry reasons, do not hybridize with other states. The\nobjective of our work is to provide an analysis of a wide class of so-called\nligand-decorated honeycomb Kagome lattices that are observed in 2D\nmetal-organic framework (MOF) where the ligand occupy honeycomb sites and the\nmetallic atoms the Kagome sites. We show that the $p_x$-$p_y$ graphene model is\nrelevant in these systems and there exists four types of flat bands: Kagome\nflat (singly degenerate) bands, two kinds of ligand-centered flat bands (A$_2$\nlike and E like, respectively doubly and singly degenerate) and metal-centered\n(three fold degenerate) flat bands.\n", "title": "A bird's eye view on the flat and conic band world of the honeycomb and Kagome lattices: Towards an understanding of 2D Metal-Organic Frameworks electronic structure" }
annotation: ["Physics"] | multi_label: true | id: 15121 | status: Validated

{ "abstract": " An increasing number of sensors on mobile, Internet of things (IoT), and\nwearable devices generate time-series measurements of physical activities.\nThough access to the sensory data is critical to the success of many beneficial\napplications such as health monitoring or activity recognition, a wide range of\npotentially sensitive information about the individuals can also be discovered\nthrough access to sensory data and this cannot easily be protected using\ntraditional privacy approaches.\nIn this paper, we propose a privacy-preserving sensing framework for managing\naccess to time-series data in order to provide utility while protecting\nindividuals' privacy. We introduce Replacement AutoEncoder, a novel algorithm\nwhich learns how to transform discriminative features of data that correspond\nto sensitive inferences, into some features that have been more observed in\nnon-sensitive inferences, to protect users' privacy. This efficiency is\nachieved by defining a user-customized objective function for deep\nautoencoders. Our replacement method will not only eliminate the possibility of\nrecognizing sensitive inferences, it also eliminates the possibility of\ndetecting the occurrence of them. That is the main weakness of other approaches\nsuch as filtering or randomization. We evaluate the efficacy of the algorithm\nwith an activity recognition task in a multi-sensing environment using\nextensive experiments on three benchmark datasets. We show that it can retain\nthe recognition accuracy of state-of-the-art techniques while simultaneously\npreserving the privacy of sensitive information. Finally, we utilize the GANs\nfor detecting the occurrence of replacement, after releasing data, and show\nthat this can be done only if the adversarial network is trained on the users'\noriginal data.\n", "title": "Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis" }
annotation: ["Computer Science", "Statistics"] | multi_label: true | id: 15122 | status: Validated

{ "abstract": " The recently introduced mixed time-averaging semiclassical initial value\nrepresentation molecular dynamics method for spectroscopic calculations [M.\nBuchholz, F. Grossmann, and M. Ceotto, J. Chem. Phys. 144, 094102 (2016)] is\napplied to systems with up to 61 dimensions, ruled by a condensed phase\nCaldeira-Leggett model potential. By calculating the ground state as well as\nthe first few excited states of the system Morse oscillator, changes of both\nthe harmonic frequency and the anharmonicity are determined. The method\nfaithfully reproduces blueshift and redshift effects and the importance of the\ncounter term, as previously suggested by other methods. Differently from\nprevious methods, the present semiclassical method does not take advantage of\nthe specific form of the potential and it can represent a practical tool that\nopens the route to direct ab initio semiclassical simulation of condensed phase\nsystems.\n", "title": "Application of the Mixed Time-averaging Semiclassical Initial Value Representation method to Complex Molecular Spectra" }
annotation: null | multi_label: true | id: 15123 | status: Default

{ "abstract": " In this paper we establish square-function estimates on the double and single\nlayer potentials with rough inputs for divergence form elliptic operators, of\narbitrary even order 2m, with variable t-independent coefficients in the upper\nhalf-space.\n", "title": "Bounds on layer potentials with rough inputs for higher order elliptic equations" }
annotation: null | multi_label: true | id: 15124 | status: Default

{ "abstract": " The aim of this paper is to study a poset isomorphism between two support\n$\\tau$-tilting posets. We take several algebraic information from combinatorial\nproperties of support $\\tau$-tilting posets. As an application, we treat a\ncertain class of basic algebras which contains preprojective algebras of type\n$A$, Nakayama algebras, and generalized Brauer tree algebras. We provide a\nnecessary condition for that an algebra $\\Lambda$ share the same support\n$\\tau$-tilting poset with a given algebra $\\Gamma$ in this class. Furthermore,\nwe see that this necessary condition is also a sufficient condition if $\\Gamma$\nis either a preprojective algebra of type $A$, a Nakayama algebra, or a\ngeneralized Brauer tree algebra.\n", "title": "From support $τ$-tilting posets to algebras" }
annotation: null | multi_label: true | id: 15125 | status: Default

{ "abstract": " One of the main challenges in probing the reionization epoch using the\nredshifted 21 cm line is that the magnitude of the signal is several orders\nsmaller than the astrophysical foregrounds. One of the methods to deal with the\nproblem is to avoid a wedge-shaped region in the Fourier $k_{\\perp} -\nk_{\\parallel}$ space which contains the signal from the spectrally smooth\nforegrounds. However, measuring the spherically averaged power spectrum using\nonly modes outside this wedge (i.e., in the reionization window), leads to a\nbias. We provide a prescription, based on expanding the power spectrum in terms\nof the shifted Legendre polynomials, which can be used to compute the angular\nmoments of the power spectrum in the reionization window. The prescription\nrequires computation of the monopole, quadrupole and hexadecapole moments of\nthe power spectrum using the theoretical model under consideration and also the\nknowledge of the effective extent of the foreground wedge in the $k_{\\perp} -\nk_{\\parallel}$ plane. One can then calculate the theoretical power spectrum in\nthe window which can be directly compared with observations. The analysis\nshould have implications for avoiding any bias in the parameter constraints\nusing 21 cm power spectrum data.\n", "title": "Measuring the reionization 21 cm fluctuations using clustering wedges" }
null
null
null
null
true
null
15126
null
Default
null
null
null
{ "abstract": " Linear-Quadratic-Gaussian (LQG) control is concerned with the design of an\noptimal controller and estimator for linear Gaussian systems with imperfect\nstate information. Standard LQG assumes the set of sensor measurements, to be\nfed to the estimator, to be given. However, in many problems, arising in\nnetworked systems and robotics, one may not be able to use all the available\nsensors, due to power or payload constraints, or may be interested in using the\nsmallest subset of sensors that guarantees the attainment of a desired control\ngoal. In this paper, we introduce the sensing-constrained LQG control problem,\nin which one has to jointly design sensing, estimation, and control, under\ngiven constraints on the resources spent for sensing. We focus on the realistic\ncase in which the sensing strategy has to be selected among a finite set of\npossible sensing modalities. While the computation of the optimal sensing\nstrategy is intractable, we present the first scalable algorithm that computes\na near-optimal sensing strategy with provable sub-optimality guarantees. To\nthis end, we show that a separation principle holds, which allows the design of\nsensing, estimation, and control policies in isolation. We conclude the paper\nby discussing two applications of sensing-constrained LQG control, namely,\nsensing-constrained formation control and resource-constrained robot\nnavigation.\n", "title": "Sensing-Constrained LQG Control" }
null
null
null
null
true
null
15127
null
Default
null
null
null
{ "abstract": " Detection, tracking, and pose estimation of surgical instruments are crucial\ntasks for computer assistance during minimally invasive robotic surgery. In the\nmajority of cases, the first step is the automatic segmentation of surgical\ntools. Prior work has focused on binary segmentation, where the objective is to\nlabel every pixel in an image as tool or background. We improve upon previous\nwork in two major ways. First, we leverage recent techniques such as deep\nresidual learning and dilated convolutions to advance binary-segmentation\nperformance. Second, we extend the approach to multi-class segmentation, which\nlets us segment different parts of the tool, in addition to background. We\ndemonstrate the performance of this method on the MICCAI Endoscopic Vision\nChallenge Robotic Instruments dataset.\n", "title": "Deep Residual Learning for Instrument Segmentation in Robotic Surgery" }
null
null
null
null
true
null
15128
null
Default
null
null
null
{ "abstract": " This paper is about computing constrained approximate Nash equilibria in\npolymatrix games, which are succinctly represented many-player games defined by\nan interaction graph between the players. In a recent breakthrough, Rubinstein\nshowed that there exists a small constant $\\epsilon$, such that it is\nPPAD-complete to find an (unconstrained) $\\epsilon$-Nash equilibrium of a\npolymatrix game. In the first part of the paper, we show that it is NP-hard to\ndecide if a polymatrix game has a constrained approximate equilibrium for 9\nnatural constraints and any non-trivial approximation guarantee. These results\nhold even for planar bipartite polymatrix games with degree 3 and at most 7\nstrategies per player, and all non-trivial approximation guarantees. These\nresults stand in contrast to similar results for bimatrix games, which\nobviously need a non-constant number of actions, and which rely on stronger\ncomplexity-theoretic conjectures such as the exponential time hypothesis. In\nthe second part, we provide a deterministic QPTAS for interaction graphs with\nbounded treewidth and with logarithmically many actions per player that can\ncompute constrained approximate equilibria for a wide family of constraints\nthat cover many of the constraints dealt with in the first part.\n", "title": "Computing Constrained Approximate Equilibria in Polymatrix Games" }
null
null
null
null
true
null
15129
null
Default
null
null
null
{ "abstract": " A promising paradigm for achieving highly efficient deep neural networks is\nthe idea of evolutionary deep intelligence, which mimics biological evolution\nprocesses to progressively synthesize more efficient networks. A crucial design\nfactor in evolutionary deep intelligence is the genetic encoding scheme used to\nsimulate heredity and determine the architectures of offspring networks. In\nthis study, we take a deeper look at the notion of synaptic cluster-driven\nevolution of deep neural networks which guides the evolution process towards\nthe formation of a highly sparse set of synaptic clusters in offspring\nnetworks. Utilizing a synaptic cluster-driven genetic encoding, the\nprobabilistic encoding of synaptic traits considers not only individual\nsynaptic properties but also inter-synaptic relationships within a deep neural\nnetwork. This process results in highly sparse offspring networks which are\nparticularly tailored for parallel computational devices such as GPUs and deep\nneural network accelerator chips. Comprehensive experimental results using four\nwell-known deep neural network architectures (LeNet-5, AlexNet, ResNet-56, and\nDetectNet) on two different tasks (object categorization and object detection)\ndemonstrate the efficiency of the proposed method. The cluster-driven genetic\nencoding scheme synthesizes networks that can achieve state-of-the-art\nperformance with a significantly smaller number of synapses than that of the\noriginal ancestor network ($\\sim$125-fold decrease in synapses for MNIST).\nFurthermore, the improved cluster efficiency in the generated offspring\nnetworks ($\\sim$9.71-fold decrease in clusters for MNIST and a $\\sim$8.16-fold\ndecrease in clusters for KITTI) is particularly useful for accelerated\nperformance on parallel computing hardware architectures such as those in GPUs\nand deep neural network accelerator chips.\n", "title": "Evolution in Groups: A deeper look at synaptic cluster driven evolution of deep neural networks" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
15130
null
Validated
null
null
null
{ "abstract": " The problem of Non-Gaussian Component Analysis (NGCA) is about finding a\nmaximal low-dimensional subspace $E$ in $\\mathbb{R}^n$ so that data points\nprojected onto $E$ follow a non-gaussian distribution. Although this is an\nappropriate model for some real world data analysis problems, there has been\nlittle progress on this problem over the last decade.\nIn this paper, we attempt to address this state of affairs in two ways.\nFirst, we give a new characterization of standard gaussian distributions in\nhigh-dimensions, which leads to effective tests for non-gaussianness. Second, we\npropose a simple algorithm, \\emph{Reweighted PCA}, as a method for solving the\nNGCA problem. We prove that for a general unknown non-gaussian distribution,\nthis algorithm recovers at least one direction in $E$, with sample and time\ncomplexity depending polynomially on the dimension of the ambient space. We\nconjecture that the algorithm actually recovers the entire $E$.\n", "title": "Polynomial Time and Sample Complexity for Non-Gaussian Component Analysis: Spectral Methods" }
null
null
null
null
true
null
15131
null
Default
null
null
null
{ "abstract": " Acoustic ranging based indoor positioning solutions have the advantage of\nhigher ranging accuracy and better compatibility with commercial-off-the-shelf\nconsumer devices. However, similar to other time-domain based approaches using\nTime-of-Arrival and Time-Difference-of-Arrival, they suffer from performance\ndegradation in presence of multi-path propagation and low received\nsignal-to-noise ratio (SNR) in indoor environments. In this paper, we improve\nupon our previous work on asynchronous acoustic indoor positioning and develop\nARABIS, a robust and low-cost acoustic indoor positioning system (IPS) for mobile\ndevices. We develop a low-cost acoustic board custom-designed to support large\noperational ranges and extensibility. To mitigate the effects of low SNR and\nmulti-path propagation, we devise a robust algorithm that iteratively removes\npossible outliers by taking advantage of redundant TDoA estimates. Experiments\nhave been carried out in two testbeds of sizes 10.67m*7.76m and 15m*15m, one in an\nacademic building and one in a convention center. The proposed system achieves\naverage and 95% quantile localization errors of 7.4cm and 16.0cm in the first\ntestbed with 8 anchor nodes and average and 95% quantile localization errors of\n20.4cm and 40.0cm in the second testbed with 4 anchor nodes only.\n", "title": "ARABIS: an Asynchronous Acoustic Indoor Positioning System for Mobile Devices" }
null
null
[ "Computer Science" ]
null
true
null
15132
null
Validated
null
null
null
{ "abstract": " Semi-supervised learning (SSL) provides a powerful framework for leveraging\nunlabeled data when labels are limited or expensive to obtain. SSL algorithms\nbased on deep neural networks have recently proven successful on standard\nbenchmark tasks. However, we argue that these benchmarks fail to address many\nissues that these algorithms would face in real-world applications. After\ncreating a unified reimplementation of various widely-used SSL techniques, we\ntest them in a suite of experiments designed to address these issues. We find\nthat the performance of simple baselines which do not use unlabeled data is\noften underreported, that SSL methods differ in sensitivity to the amount of\nlabeled and unlabeled data, and that performance can degrade substantially when\nthe unlabeled dataset contains out-of-class examples. To help guide SSL\nresearch towards real-world applicability, we make our unified reimplementation\nand evaluation platform publicly available.\n", "title": "Realistic Evaluation of Deep Semi-Supervised Learning Algorithms" }
null
null
[ "Statistics" ]
null
true
null
15133
null
Validated
null
null
null
{ "abstract": " We are concerned with the inverse scattering problem of recovering an\ninhomogeneous medium by the associated acoustic wave measurement. We prove that\nunder certain assumptions, a single far-field pattern determines the values of\na perturbation to the refractive index on the corners of its support. These\nassumptions are satisfied for example in the low acoustic frequency regime. As\na consequence if the perturbation is piecewise constant with either a\npolyhedral nest geometry or a known polyhedral cell geometry, such as a pixel\nor voxel array, we establish the injectivity of the perturbation to far-field\nmap given a fixed incident wave. This is the first unique determinacy result\nof its type in the literature, and all of the existing results essentially make\nuse of infinitely many measurements.\n", "title": "Recovering piecewise constant refractive indices by a single far-field pattern" }
null
null
null
null
true
null
15134
null
Default
null
null
null
{ "abstract": " We introduce a \"workable\" notion of degree for non-homogeneous polynomial\nideals and formulate and prove ideal theoretic Bézout Inequalities for the\nsum of two ideals in terms of this notion of degree and the degree of\ngenerators. We compute probabilistically the degree of an equidimensional\nideal.\n", "title": "On Bezout Inequalities for non-homogeneous Polynomial Ideals" }
null
null
null
null
true
null
15135
null
Default
null
null
null
{ "abstract": " In this paper, we further develop the theory of complete mixability and joint\nmixability for some distribution families. We generalize a result of\nRüschendorf and Uckelmann (2002) related to complete mixability of continuous\ndistribution functions having a symmetric and unimodal density. Two different\nproofs of a result of Wang and Wang (2016) related to the joint\nmixability of elliptical distributions with the same characteristic generator\nare presented. We solve Open Problem 7 in Wang (2015) by constructing a\nbimodal-symmetric distribution. The joint mixability of slash-elliptical\ndistributions and skew-elliptical distributions is studied and the extension to\nmultivariate distributions is also investigated.\n", "title": "Joint Mixability of Elliptical Distributions and Related Families" }
null
null
null
null
true
null
15136
null
Default
null
null
null
{ "abstract": " To guarantee the security of uniform random numbers generated by a quantum\nrandom number generator, we study secure extraction of uniform random numbers\nwhen the environment of a given quantum state is controlled by the third party,\nthe eavesdropper. Here we restrict our operations to incoherent strategies that\nare composed of the measurement on the computational basis and incoherent\noperations (or incoherence-preserving operations). We show that the maximum\nsecure extraction rate is equal to the relative entropy of coherence. By\ncontrast, the coherence of formation gives the extraction rate when a certain\nconstraint is imposed on eavesdropper's operations. The condition under which\nthe two extraction rates coincide is then determined. Furthermore, we find that\nthe exponential decay rate of the leaked information is characterized by\nRényi relative entropies of coherence. These results clarify the power of\nincoherent strategies in random number generation, and can be applied to\nguarantee the quality of random numbers generated by a quantum random number\ngenerator.\n", "title": "Secure uniform random number extraction via incoherent strategies" }
null
null
[ "Computer Science" ]
null
true
null
15137
null
Validated
null
null
null
{ "abstract": " Email cryptography applications often suffer from major problems that prevent\ntheir widespread implementation. MEG, or the Mobile Encryption Gateway, aims to\nfix the issues associated with email encryption by ensuring that encryption is\neasy to perform while still maintaining data security. MEG performs automatic\ndecryption and encryption of all emails using PGP. Users do not need to\nunderstand the internal workings of the encryption process to use the\napplication. MEG is meant to be email-client-agnostic, enabling users to employ\nvirtually any email service to send messages. Encryption actions are performed\non the user's mobile device, which means their keys and data remain personal.\nMEG can also tackle network effect problems by inviting non-users to join. Most\nimportantly, MEG uses end-to-end encryption, which ensures that all aspects of\nthe encrypted information remain private. As a result, we are hopeful that MEG\nwill finally solve the problem of practical email encryption.\n", "title": "Mobile Encryption Gateway (MEG) for Email Encryption" }
null
null
null
null
true
null
15138
null
Default
null
null
null
{ "abstract": " We study topological excitations in two-component nematic superconductors,\nwith a particular focus on Cu$_x$Bi$_2$Se$_3$ as a candidate material. We find\nthat the lowest-energy topological excitations are coreless vortices: a bound\nstate of two spatially separated half-quantum vortices. These objects are\nnematic Skyrmions, since they are characterized by an additional topological\ncharge. The inter-Skyrmion forces are dipolar in this model, i.e. attractive\nfor certain relative orientations of the Skyrmions, hence forming\nmulti-Skyrmion bound states.\n", "title": "Nematic Skyrmions in Odd-Parity Superconductors" }
null
null
null
null
true
null
15139
null
Default
null
null
null
{ "abstract": " Statistical inference for exponential-family models of random graphs with\ndependent edges is challenging. We stress the importance of additional\nstructure and show that additional structure facilitates statistical inference.\nA simple example of a random graph with additional structure is a random graph\nwith neighborhoods and local dependence within neighborhoods. We develop the\nfirst concentration and consistency results for maximum likelihood and\n$M$-estimators of a wide range of canonical and curved exponential-family\nmodels of random graphs with local dependence. All results are non-asymptotic\nand applicable to random graphs with finite populations of nodes, although\nasymptotic consistency results can be obtained as well. In addition, we show\nthat additional structure can facilitate subgraph-to-graph estimation, and\npresent concentration results for subgraph-to-graph estimators. As an\napplication, we consider popular curved exponential-family models of random\ngraphs, with local dependence induced by transitivity and parameter vectors\nwhose dimensions depend on the number of nodes.\n", "title": "Concentration and consistency results for canonical and curved exponential-family models of random graphs" }
null
null
null
null
true
null
15140
null
Default
null
null
null
{ "abstract": " We propose a mixed integer programming (MIP) model and iterative algorithms\nbased on topological orders to solve optimization problems with acyclic\nconstraints on a directed graph. The proposed MIP model has a significantly\nlower number of constraints compared to popular MIP models based on cycle\nelimination constraints and triangular inequalities. The proposed iterative\nalgorithms use gradient descent and iterative reordering approaches,\nrespectively, for searching topological orders. A computational experiment is\npresented for the Gaussian Bayesian network learning problem, an optimization\nproblem minimizing the sum of squared errors of regression models with L1\npenalty over a feature network with application to gene network inference in\nbioinformatics.\n", "title": "Bayesian Network Learning via Topological Order" }
null
null
null
null
true
null
15141
null
Default
null
null
null
{ "abstract": " The recent announcement of a Neptune-sized exomoon candidate around the\ntransiting Jupiter-sized object Kepler-1625 b could indicate the presence of a\nhitherto unknown kind of gas giant moons, if confirmed. Three transits have\nbeen observed, allowing radius estimates of both objects. Here we investigate\npossible mass regimes of the transiting system that could produce the observed\nsignatures and study them in the context of moon formation in the solar system,\ni.e. via impacts, capture, or in-situ accretion. The radius of Kepler-1625 b\nsuggests it could be anything from a gas giant planet somewhat more massive\nthan Saturn (0.4 M_Jup) to a brown dwarf (BD) (up to 75 M_Jup) or even a\nvery-low-mass star (VLMS) (112 M_Jup ~ 0.11 M_sun). The proposed companion\nwould certainly have a planetary mass. Possible extreme scenarios range from a\nhighly inflated Earth-mass gas satellite to an atmosphere-free water-rock\ncompanion of about 180 M_Ear. Furthermore, the planet-moon dynamics during the\ntransits suggest a total system mass of 17.6_{-12.6}^{+19.2} M_Jup. A\nNeptune-mass exomoon around a giant planet or low-mass BD would not be\ncompatible with the common mass scaling relation of the solar system moons\nabout gas giants. The case of a mini-Neptune around a high-mass BD or a VLMS,\nhowever, would be located in a similar region of the satellite-to-host mass\nratio diagram as Proxima b, the TRAPPIST-1 system, and LHS 1140 b. The capture\nof a Neptune-mass object around a 10 M_Jup planet during a close binary\nencounter is possible in principle. The ejected object, however, would have had\nto be a super-Earth object, raising further questions of how such a system\ncould have formed. In summary, this exomoon candidate is barely compatible with\nestablished moon formation theories. If it can be validated as orbiting a\nsuper-Jovian planet, then it would pose an exquisite riddle for formation\ntheorists to solve.\n", "title": "The nature of the giant exomoon candidate Kepler-1625 b-i" }
null
null
null
null
true
null
15142
null
Default
null
null
null
{ "abstract": " We present a data-driven framework called generative adversarial privacy\n(GAP). Inspired by recent advancements in generative adversarial networks\n(GANs), GAP allows the data holder to learn the privatization mechanism\ndirectly from the data. Under GAP, finding the optimal privacy mechanism is\nformulated as a constrained minimax game between a privatizer and an adversary.\nWe show that for appropriately chosen adversarial loss functions, GAP provides\nprivacy guarantees against strong information-theoretic adversaries. We also\nevaluate the performance of GAP on multi-dimensional Gaussian mixture models\nand the GENKI face database.\n", "title": "Generative Adversarial Privacy" }
null
null
[ "Statistics" ]
null
true
null
15143
null
Validated
null
null
null
{ "abstract": " Eradicating hunger and malnutrition is a key development goal of the 21st\ncentury. We address the problem of optimally identifying seed varieties to\nreliably increase crop yield within a risk-sensitive decision-making framework.\nSpecifically, we introduce a novel hierarchical machine learning mechanism for\npredicting crop yield (the yield of different seed varieties of the same crop).\nWe integrate this prediction mechanism with a weather forecasting model, and\npropose three different approaches for decision making under uncertainty to\nselect seed varieties for planting so as to balance yield maximization and\nrisk. We apply our model to the problem of soybean variety selection given in\nthe 2016 Syngenta Crop Challenge. Our prediction model achieves a median\nabsolute error of 3.74 bushels per acre and thus provides good estimates for\ninput into the decision models. Our decision models identify the selection of\nsoybean varieties that appropriately balance yield and risk as a function of\nthe farmer's risk aversion level. More generally, our models support farmers in\ndecision making about which seed varieties to plant.\n", "title": "Hierarchical Modeling of Seed Variety Yields and Decision Making for Future Planting Plans" }
null
null
null
null
true
null
15144
null
Default
null
null
null
{ "abstract": " Stress Urinary Incontinence (SUI), or urine leakage from the urethra, occurs due to\nan increase in abdominal pressure resulting from stress such as coughing or\njumping. SUI is more frequent among post-menopausal women. In the absence of\nbladder contraction, vesical pressure exceeds urethral pressure, leading to\nurine leakage. Despite a large number of patients diagnosed with this problem,\nfew studies have investigated its function and mechanics. The main goal of this\nstudy is to model bladder and urethra computationally under an external\npressure like sneezing. Finite Element Method and Fluid-Structure Interactions\nare utilized for simulation. Linear mechanical properties assigned to the\nbladder and urethra and pressure boundary conditions are indispensable in this\nmodel. The results show good accordance between the clinical data and predicted\nvalues of the computational models, such as the pressure at the center of the\nbladder. This indicates that numerical methods and simplified physics of\nbiological systems like the inferior urinary tract are helpful to achieve\nresults similar to clinical results, in order to investigate pathological\nconditions.\n", "title": "A Clinical and Finite Elements Study of Stress Urinary Incontinence in Women Using Fluid-Structure Interactions" }
null
null
null
null
true
null
15145
null
Default
null
null
null
{ "abstract": " In this paper, we study the compressibility of random processes and fields,\ncalled generalized Lévy processes, that are solutions of stochastic\ndifferential equations driven by $d$-dimensional periodic Lévy white noises.\nOur results are based on the estimation of the Besov regularity of Lévy white\nnoises and generalized Lévy processes. We show in particular that\nnon-Gaussian generalized Lévy processes are more compressible in a wavelet\nbasis than the corresponding Gaussian processes, in the sense that their\n$n$-term approximation error decays faster. We quantify this compressibility in\nterms of the Blumenthal-Getoor index of the underlying Lévy white noise.\n", "title": "The n-term Approximation of Periodic Generalized Lévy Processes" }
null
null
null
null
true
null
15146
null
Default
null
null
null
{ "abstract": " Space-filling designs are popular choices for computer experiments. A sliced\ndesign is a design that can be partitioned into several subdesigns. We propose\na new type of sliced space-filling design called sliced rotated sphere packing\ndesigns. Their full designs and subdesigns are rotated sphere packing designs.\nThey are constructed by rescaling, rotating, translating and extracting the\npoints from a sliced lattice. We provide two fast algorithms to generate such\ndesigns. Furthermore, we propose a strategy to use sliced rotated sphere\npacking designs adaptively. Under this strategy, initial runs are uniformly\ndistributed in the design space, follow-up runs are added by incorporating\ninformation gained from initial runs, and the combined design is space-filling\nfor any local region. Examples are given to illustrate its potential\napplication.\n", "title": "Sliced rotated sphere packing designs" }
null
null
null
null
true
null
15147
null
Default
null
null
null
{ "abstract": " Adaptive stochastic gradient descent methods, such as AdaGrad, RMSProp, Adam,\nAMSGrad, etc., have been demonstrated efficacious in solving non-convex\nstochastic optimization, such as training deep neural networks. However, their\nconvergence rates have not been established in the non-convex stochastic\nsetting, except for recent breakthrough results on AdaGrad, perturbed AdaGrad\nand AMSGrad. In this paper, we propose two new adaptive stochastic gradient\nmethods called AdaHB and AdaNAG which integrate a novel weighted\ncoordinate-wise AdaGrad with heavy ball momentum and Nesterov accelerated\ngradient momentum, respectively. The $\\mathcal{O}(\\frac{\\log{T}}{\\sqrt{T}})$\nnon-asymptotic convergence rates of AdaHB and AdaNAG in the non-convex stochastic\nsetting are also jointly established by leveraging a newly developed unified\nformulation of these two momentum mechanisms. Moreover, comparisons have been\nmade between AdaHB, AdaNAG, Adam and RMSProp, which, to a certain extent,\nexplains the reasons why Adam and RMSProp are divergent. In particular, when\nthe momentum term vanishes we obtain the convergence rate of coordinate-wise AdaGrad in\nthe non-convex stochastic setting as a byproduct.\n", "title": "On the Convergence of Weighted AdaGrad with Momentum for Training Deep Neural Networks" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
15148
null
Validated
null
null
null
{ "abstract": " This paper studies scenarios of cyclic dominance in a coevolutionary spatial\nmodel in which game strategies and links between agents adaptively evolve over\ntime. The Optional Prisoner's Dilemma (OPD) game is employed. The OPD is an\nextended version of the traditional Prisoner's Dilemma where players have a\nthird option to abstain from playing the game. We adopt an agent-based\nsimulation approach and use Monte Carlo methods to perform the OPD with\ncoevolutionary rules. The necessary conditions to break the scenarios of cyclic\ndominance are also investigated. This work highlights that cyclic dominance is\nessential in the sustenance of biodiversity. Moreover, we also discuss the\nimportance of a spatial coevolutionary model in maintaining cyclic dominance in\nadverse conditions.\n", "title": "Cyclic Dominance in the Spatial Coevolutionary Optional Prisoner's Dilemma Game" }
null
null
null
null
true
null
15149
null
Default
null
null
null
{ "abstract": " Deep networks have recently been shown to be vulnerable to universal\nperturbations: there exist very small image-agnostic perturbations that cause\nmost natural images to be misclassified by such classifiers. In this paper, we\npropose the first quantitative analysis of the robustness of classifiers to\nuniversal perturbations, and draw a formal link between the robustness to\nuniversal perturbations, and the geometry of the decision boundary.\nSpecifically, we establish theoretical bounds on the robustness of classifiers\nunder two decision boundary models (flat and curved models). We show in\nparticular that the robustness of deep networks to universal perturbations is\ndriven by a key property of their curvature: there exist shared directions\nalong which the decision boundary of deep networks is systematically positively\ncurved. Under such conditions, we prove the existence of small universal\nperturbations. Our analysis further provides a novel geometric method for\ncomputing universal perturbations, in addition to explaining their properties.\n", "title": "Analysis of universal adversarial perturbations" }
null
null
null
null
true
null
15150
null
Default
null
null
null
{ "abstract": " We investigate anomaly detection in an unsupervised framework and introduce\nLong Short Term Memory (LSTM) neural network based algorithms. In particular,\ngiven variable length data sequences, we first pass these sequences through our\nLSTM based structure and obtain fixed length sequences. We then find a decision\nfunction for our anomaly detectors based on the One Class Support Vector\nMachines (OC-SVM) and Support Vector Data Description (SVDD) algorithms. For the\nfirst time in the literature, we jointly train and optimize the parameters of\nthe LSTM architecture and the OC-SVM (or SVDD) algorithm using highly effective\ngradient and quadratic programming based training methods. To apply the\ngradient based training method, we modify the original objective criteria of\nthe OC-SVM and SVDD algorithms, where we prove the convergence of the modified\nobjective criteria to the original criteria. We also provide extensions of our\nunsupervised formulation to the semi-supervised and fully supervised\nframeworks. Thus, we obtain anomaly detection algorithms that can process\nvariable length data sequences while providing high performance, especially for\ntime series data. Our approach is generic so that we also apply this approach\nto the Gated Recurrent Unit (GRU) architecture by directly replacing our LSTM\nbased structure with the GRU based structure. In our experiments, we illustrate\nsignificant performance gains achieved by our algorithms with respect to the\nconventional methods.\n", "title": "Unsupervised and Semi-supervised Anomaly Detection with LSTM Neural Networks" }
null
null
null
null
true
null
15151
null
Default
null
null
null
{ "abstract": " Rapid miniaturization and cost reduction of computing, along with the\navailability of wearable and implantable physiological sensors have led to the\ngrowth of human Body Area Network (BAN) formed by a network of such sensors and\ncomputing devices. One promising application of such a network is wearable\nhealth monitoring where the collected data from the sensors would be\ntransmitted and analyzed to assess the health of a person. Typically, the\ndevices in a BAN are connected through wireless (WBAN), which suffers from\nenergy inefficiency due to the high-energy consumption of wireless\ntransmission. Human Body Communication (HBC) uses the relatively low loss human\nbody as the communication medium to connect these devices, promising order(s)\nof magnitude better energy-efficiency and built-in security compared to WBAN.\nIn this paper, we demonstrate a health monitoring device and system built using\nCommercial-Off-The-Shelf (COTS) sensors and components, that can collect data\nfrom physiological sensors and transmit it through a) intra-body HBC to another\ndevice (hub) worn on the body or b) upload health data through HBC-based\nhuman-machine interaction to an HBC capable machine. The system design\nconstraints and signal transfer characteristics for the implemented HBC-based\nwearable health monitoring system are measured and analyzed, showing reliable\nconnectivity with >8x power savings compared to Bluetooth low energy (BTLE).\n", "title": "Wearable Health Monitoring Using Capacitive Voltage-Mode Human Body Communication" }
null
null
null
null
true
null
15152
null
Default
null
null
null
{ "abstract": " We investigate the effect of band-limited white Gaussian noise (BLWGN) on\nelectromagnetically induced transparency (EIT) and Autler-Townes (AT)\nsplitting, when performing atom-based continuous-wave (CW) radio-frequency (RF)\nelectric (E) field strength measurements with Rydberg atoms in an atomic vapor.\nThis EIT/AT-based E-field measurement approach is currently being investigated\nby several groups around the world as a means to develop a new SI traceable RF\nE-field measurement technique. For this to be a useful technique, it is\nimportant to understand the influence of BLWGN. We perform EIT/AT based E-field\nexperiments with BLWGN centered on the RF transition frequency and for the\nBLWGN blue-shifted and red-shifted relative to the RF transition frequency. The\nEIT signal can be severely distorted for certain noise conditions (band-width,\ncenter-frequency, and noise power), hence altering the ability to accurately\nmeasure a CW RF E-field strength. We present a model to predict the changes in\nthe EIT signal in the presence of noise. This model includes AC Stark shifts\nand on resonance transitions associated with the noise source. The results of\nthis model are compared to the experimental data and we find very good\nagreement between the two.\n", "title": "Electromagnetically Induced Transparency (EIT) and Autler-Townes (AT) splitting in the Presence of Band-Limited White Gaussian Noise" }
null
null
null
null
true
null
15153
null
Default
null
null
null
{ "abstract": " Signal-to-noise-plus-interference ratio (SINR) outage probability is among\none of the key performance metrics of a wireless cellular network. In this\npaper, we propose a semi-analytical method based on saddle point approximation\n(SPA) technique to calculate the SINR outage of a wireless system whose SINR\ncan be modeled in the form $\\frac{\\sum_{i=1}^M X_i}{\\sum_{i=1}^N Y_i +1}$ where\n$X_i$ denotes the useful signal power, $Y_i$ denotes the power of the\ninterference signal, and $\\sum_{i=1}^M X_i$, $\\sum_{i=1}^N Y_i$ are independent\nrandom variables. Both $M$ and $N$ can also be random variables. The proposed\napproach is based on the saddle point approximation to cumulative distribution\nfunction (CDF) as given by \\tit{Wood-Booth-Butler formula}. The approach is\napplicable whenever the cumulant generating function (CGF) of the received\nsignal and interference exists, and it allows us to tackle distributions with\nlarge skewness and kurtosis with higher accuracy. In this regard, we exploit a\nfour parameter \\tit{normal-inverse Gaussian} (NIG) distribution as a base\ndistribution. Given that the skewness and kurtosis satisfy a specific\ncondition, NIG-based SPA works reliably. When this condition is violated, we\nrecommend SPA based on normal or symmetric NIG distribution, both special cases\nof NIG distribution, at the expense of reduced accuracy. For the purpose of\ndemonstration, we apply SPA for the SINR outage evaluation of a typical user\nexperiencing a downlink coordinated multi-point transmission (CoMP) from the\nbase stations (BSs) that are modeled by homogeneous Poisson point process. We\ncharacterize the outage of the typical user in scenarios such as (a)~when the\nnumber and locations of interferers are random, and (b)~when the fading\nchannels and number of interferers are random. 
Numerical results are presented\nto illustrate the accuracy of the proposed set of approximations.\n", "title": "SINR Outage Evaluation in Cellular Networks: Saddle Point Approximation (SPA) Using Normal Inverse Gaussian (NIG) Distribution" }
null
null
null
null
true
null
15154
null
Default
null
null
null
{ "abstract": " One of the most important tools for the development of the smart grid is\nsimulation. Therefore, analyzing, designing, modeling, and simulating the smart\ngrid will allow to explore future scenarios and support decision making for the\ngrid's development. In this paper, we compare two open source simulation tools\nfor the smart grid, GridLAB-Distribution (GridLAB-D) and Renewable Alternative\nPower systems Simulation (RAPSim). The comparison is based on the\nimplementation of two case studies related to a power flow problem and the\nintegration of renewable energy resources to the grid. Results show that even\nfor very simple case studies, specific properties such as weather simulation or\nload modeling are influencing the results in a way that they are not\nreproducible with a different simulator.\n", "title": "Smart grid modeling and simulation - Comparing GridLAB-D and RAPSim via two Case studies" }
null
null
[ "Computer Science" ]
null
true
null
15155
null
Validated
null
null
null
{ "abstract": " Revealing Adverse Drug Reactions (ADR) is an essential part of post-marketing\ndrug surveillance, and data from health-related forums and medical communities\ncan be of a great significance for estimating such effects. In this paper, we\npropose an end-to-end CNN-based method for predicting drug safety on user\ncomments from healthcare discussion forums. We present an architecture that is\nbased on a vast ensemble of CNNs with varied structural parameters, where the\nprediction is determined by the majority vote. To evaluate the performance of\nthe proposed solution, we present a large-scale dataset collected from a\nmedical website that consists of over 50 thousand reviews for more than 4000\ndrugs. The results demonstrate that our model significantly outperforms\nconventional approaches and predicts medicine safety with an accuracy of 87.17%\nfor binary and 62.88% for multi-classification tasks.\n", "title": "A Large-Scale CNN Ensemble for Medication Safety Analysis" }
null
null
null
null
true
null
15156
null
Default
null
null
null
{ "abstract": " It is well-known that the problem to solve equations in virtually free groups\ncan be reduced to the problem to solve twisted word equations with regular\nconstraints over free monoids with involution.\nIn a first part of the paper we prove that the set of all solutions of such a\ntwisted word equation is an EDT0L language and that the specification of that\nEDT0L language can be computed in PSPACE. (We give a more precise bound in the\npaper.) Within the same complexity bound we can decide whether the solution set\nis empty, finite, or infinite. No PSPACE-algorithm, actually no concrete\ncomplexity bound was known for deciding emptiness before. Decidability of\nfiniteness was considered to be an open problem.\nIn the second part we apply the results to the solution set of equations with\nrational constraints in finitely generated virtually free groups. For each such\ngroup we obtain the same results as above for the set of solutions in standard\nnormal forms with respect to some natural set of generators. In particular, for\na fixed group we can decide in PSPACE whether the solution set is empty,\nfinite, or infinite.\nOur results generalize the work by Lohrey and Sénizergues (ICALP 2006) and\nDahmani and Guirardel (J. of Topology 2010) with respect to both complexity and\nexpressive power. Neither paper gave any concrete complexity bound and the\nresults in these papers are stated subsets of solutions only, whereas our\nresults concern all solutions. Moreover, we give a formal language\ncharacterization of the full solution set as an EDT0L language.\n", "title": "Solutions to twisted word equations and equations in virtually free groups" }
null
null
null
null
true
null
15157
null
Default
null
null
null
{ "abstract": " Predicting traffic conditions has been recently explored as a way to relieve\ntraffic congestion. Several pioneering approaches have been proposed based on\ntraffic observations of the target location as well as its adjacent regions,\nbut they obtain somewhat limited accuracy due to lack of mining road topology.\nTo address the effect attenuation problem, we propose to take account of the\ntraffic of surrounding locations(wider than adjacent range). We propose an\nend-to-end framework called DeepTransport, in which Convolutional Neural\nNetworks (CNN) and Recurrent Neural Networks (RNN) are utilized to obtain\nspatial-temporal traffic information within a transport network topology. In\naddition, attention mechanism is introduced to align spatial and temporal\ninformation. Moreover, we constructed and released a real-world large traffic\ncondition dataset with 5-minute resolution. Our experiments on this dataset\ndemonstrate our method captures the complex relationship in temporal and\nspatial domain. It significantly outperforms traditional statistical methods\nand a state-of-the-art deep learning method.\n", "title": "DeepTransport: Learning Spatial-Temporal Dependency for Traffic Condition Forecasting" }
null
null
null
null
true
null
15158
null
Default
null
null
null
{ "abstract": " The article addresses a long-standing open problem on the justification of\nusing variational Bayes methods for parameter estimation. We provide general\nconditions for obtaining optimal risk bounds for point estimates acquired from\nmean-field variational Bayesian inference. The conditions pertain to the\nexistence of certain test functions for the distance metric on the parameter\nspace and minimal assumptions on the prior. A general recipe for verification\nof the conditions is outlined which is broadly applicable to existing Bayesian\nmodels with or without latent variables. As illustrations, specific\napplications to Latent Dirichlet Allocation and Gaussian mixture models are\ndiscussed.\n", "title": "On Statistical Optimality of Variational Bayes" }
null
null
null
null
true
null
15159
null
Default
null
null
null
{ "abstract": " This paper is a comprehensive introduction to the results of [7]. It grew as\nan expanded version of a talk given at INdAM Meeting Complex and Symplectic\nGeometry, held at Cortona in June 12-18, 2016. It deals with the construction\nof the Teichmüller space of a smooth compact manifold M (that is the space of\nisomorphism classes of complex structures on M) in arbitrary dimension. The\nmain problem is that, whenever we leave the world of surfaces, the\nTeichmüller space is no more a complex manifold or an analytic space but an\nanalytic Artin stack. We explain how to construct explicitly an atlas for this\nstack using ideas coming from foliation theory. Throughout the article, we use\nthe case of $\\mathbb{S}^3\\times\\mathbb{S}^1$ as a recurrent example.\n", "title": "The Teichmüller Stack" }
null
null
null
null
true
null
15160
null
Default
null
null
null
{ "abstract": " The rise of user-contributed Open Source Software (OSS) ecosystems\ndemonstrate their prevalence in the software engineering discipline. Libraries\nwork together by depending on each other across the ecosystem. From these\necosystems emerges a minimized library called a micro-package. Micro- packages\nbecome problematic when breaks in a critical ecosystem dependency ripples its\neffects to unsuspecting users. In this paper, we investigate the impact of\nmicro-packages in the npm JavaScript ecosystem. Specifically, we conducted an\nempirical in- vestigation with 169,964 JavaScript npm packages to understand\n(i) the widespread phenomena of micro-packages, (ii) the size dependencies\ninherited by a micro-package and (iii) the developer usage cost (ie., fetch,\ninstall, load times) of using a micro-package. Results of the study find that\nmicro-packages form a significant portion of the npm ecosystem. Apart from the\nease of readability and comprehension, we show that some micro-packages have\nlong dependency chains and incur just as much usage costs as other npm\npackages. We envision that this work motivates the need for developers to be\naware of how sensitive their third-party dependencies are to critical changes\nin the software ecosystem.\n", "title": "On the Impact of Micro-Packages: An Empirical Study of the npm JavaScript Ecosystem" }
null
null
null
null
true
null
15161
null
Default
null
null
null
{ "abstract": " The design of sparse spatially stretched tripole arrays is an important but\nalso challenging task and this paper proposes for the very first time efficient\nsolutions to this problem. Unlike for the design of traditional sparse antenna\narrays, the developed approaches optimise both the dipole locations and\norientations. The novelty of the paper consists in formulating these\noptimisation problems into a form that can be solved by the proposed\ncompressive sensing and Bayesian compressive sensing based approaches. The\nperformance of the developed approaches is validated and it is shown that\naccurate approximation of a reference response can be achieved with a 67%\nreduction in the number of dipoles required as compared to an equivalent\nuniform spatially stretched tripole array, leading to a significant reduction\nin the cost associated with the resulting arrays.\n", "title": "Location and Orientation Optimisation for Spatially Stretched Tripole Arrays Based on Compressive Sensing" }
null
null
[ "Computer Science" ]
null
true
null
15162
null
Validated
null
null
null
{ "abstract": " The almost sure Hausdorff dimension of the limsup set of randomly distributed\nrectangles in a product of Ahlfors regular metric spaces is computed in terms\nof the singular value function of the rectangles.\n", "title": "Hausdorff dimension of limsup sets of random rectangles in products of regular spaces" }
null
null
[ "Mathematics" ]
null
true
null
15163
null
Validated
null
null
null
{ "abstract": " As power electronics shrinks down to sub-micron scale, the thermal transport\nfrom a solid surface to environment becomes significant. Under circumstances\nwhen the device works in rare gas environment, the scale for thermal transport\nis comparable to the mean free path of molecules, and is difficult to\ncharacterize. In this work, we present an experimental study about thermal\ntransport around a microwire in rare gas environment by using a steady state\nhot wire method. Unlike conventional hot wire technique of using transient heat\ntransfer process, this method considers both the heat conduction along the wire\nand convection effect from wire surface to surroundings. Convection heat\ntransfer coefficient from a platinum wire in diameter 25 um to air is\ncharacterized under different heating power and air pressures to comprehend the\neffect of temperature and density of gas molecules. It is observed that\nconvection heat transfer coefficient varies from 14 Wm-2K-1 at 7 Pa to 629\nWm-2K-1 at atmosphere pressure. In free molecule regime, Nusselt number has a\nlinear relationship with inverse Knudsen number and the slope of 0.274 is\nemployed to determined equivalent thermal dissipation boundary as 7.03E10-4 m.\nIn transition regime, the equivalent thermal dissipation boundary is obtained\nas 5.02E10-4 m. Under a constant pressure, convection heat transfer coefficient\ndecreases with increasing temperature, and this correlation is more sensitive\nto larger pressure. This work provides a pathway for studying both heat\nconduction and heat convection effect at micro/nanoscale under rare gas\nenvironment, the knowledge of which is essential for regulating heat\ndissipation in various industrial applications.\n", "title": "Thermal Characterization of Microscale Heat Convection under Rare Gas Condition by a Modified Hot Wire Method" }
null
null
null
null
true
null
15164
null
Default
null
null
null
{ "abstract": " Let $V_1,V_2,V_3$ be a triple of even dimensional vector spaces over a number\nfield $F$ equipped with nondegenerate quadratic forms\n$\\mathcal{Q}_1,\\mathcal{Q}_2,\\mathcal{Q}_3$, respectively. Let \\begin{align*} Y\n\\subset \\prod_{i=1}V_i \\end{align*} be the closed subscheme consisting of\n$(v_1,v_2,v_3)$ on which\n$\\mathcal{Q}_1(v_1)=\\mathcal{Q}_2(v_2)=\\mathcal{Q}_3(v_3)$. Motivated by\nconjectures of Braverman and Kazhdan and related work of Lafforgue, Ngô, and\nSakellaridis we prove an analogue of the Poisson summation formula for certain\nfunctions on this space.\n", "title": "A summation formula for triples of quadratic spaces" }
null
null
null
null
true
null
15165
null
Default
null
null
null
{ "abstract": " In 1969, Strassen shocked the world by showing that two n x n matrices could\nbe multiplied in time asymptotically less than $O(n^3)$. While the recursive\nconstruction in his algorithm is very clear, the key gain was made by showing\nthat 2 x 2 matrix multiplication could be performed with only 7 multiplications\ninstead of 8. The latter construction was arrived at by a process of\nelimination and appears to come out of thin air. Here, we give the simplest and\nmost transparent proof of Strassen's algorithm that we are aware of, using only\na simple unitary 2-design and a few easy lines of calculation. Moreover, using\nbasic facts from the representation theory of finite groups, we use 2-designs\ncoming from group orbits to generalize our construction to all n (although the\nresulting algorithms aren't optimal for n at least 3).\n", "title": "Designing Strassen's algorithm" }
null
null
null
null
true
null
15166
null
Default
null
null
null
{ "abstract": " Spiking Neural Network (SNN) naturally inspires hardware implementation as it\nis based on biology. For learning, spike time dependent plasticity (STDP) may\nbe implemented using an energy efficient waveform superposition on memristor\nbased synapse. However, system level implementation has three challenges.\nFirst, a classic dilemma is that recognition requires current reading for short\nvoltage$-$spikes which is disturbed by large voltage$-$waveforms that are\nsimultaneously applied on the same memristor for real$-$time learning i.e. the\nsimultaneous read$-$write dilemma. Second, the hardware needs to exactly\nreplicate software implementation for easy adaptation of algorithm to hardware.\nThird, the devices used in hardware simulations must be realistic. In this\npaper, we present an approach to address the above concerns. First, the\nlearning and recognition occurs in separate arrays simultaneously in\nreal$-$time, asynchronously $-$ avoiding non$-$biomimetic clocking based\ncomplex signal management. Second, we show that the hardware emulates software\nat every stage by comparison of SPICE (circuit$-$simulator) with MATLAB\n(mathematical SNN algorithm implementation in software) implementations. As an\nexample, the hardware shows 97.5 per cent accuracy in classification which is\nequivalent to software for a Fisher$-$Iris dataset. Third, the STDP is\nimplemented using a model of synaptic device implemented using HfO2 memristor.\nWe show that an increasingly realistic memristor model slightly reduces the\nhardware performance (85 per cent), which highlights the need to engineer RRAM\ncharacteristics specifically for SNN.\n", "title": "A Software-equivalent SNN Hardware using RRAM-array for Asynchronous Real-time Learning" }
null
null
null
null
true
null
15167
null
Default
null
null
null
{ "abstract": " The \\emph{vitality} of an arc/node of a graph with respect to the maximum\nflow between two fixed nodes $s$ and $t$ is defined as the reduction of the\nmaximum flow caused by the removal of that arc/node. In this paper we address\nthe issue of determining the vitality of arcs and/or nodes for the maximum flow\nproblem. We show how to compute the vitality of all arcs in a general\nundirected graph by solving only $2(n-1)$ max flow instances and, In\n$st$-planar graphs (directed or undirected) we show how to compute the vitality\nof all arcs and all nodes in $O(n)$ worst-case time. Moreover, after\ndetermining the vitality of arcs and/or nodes, and given a planar embedding of\nthe graph, we can determine the vitality of a `contiguous' set of arcs/nodes in\ntime proportional to the size of the set.\n", "title": "Max flow vitality in general and $st$-planar graphs" }
null
null
null
null
true
null
15168
null
Default
null
null
null
{ "abstract": " This paper presents a new approach in understanding how deep neural networks\n(DNNs) work by applying homomorphic signal processing techniques. Focusing on\nthe task of multi-pitch estimation (MPE), this paper demonstrates the\nequivalence relation between a generalized cepstrum and a DNN in terms of their\nstructures and functionality. Such an equivalence relation, together with pitch\nperception theories and the recently established\nrectified-correlations-on-a-sphere (RECOS) filter analysis, provide an\nalternative way in explaining the role of the nonlinear activation function and\nthe multi-layer structure, both of which exist in a cepstrum and a DNN. To\nvalidate the efficacy of this new approach, a new feature designed in the same\nfashion is proposed for pitch salience function. The new feature outperforms\nthe one-layer spectrum in the MPE task and, as predicted, it addresses the\nissue of the missing fundamental effect and also achieves better robustness to\nnoise.\n", "title": "Between Homomorphic Signal Processing and Deep Neural Networks: Constructing Deep Algorithms for Polyphonic Music Transcription" }
null
null
null
null
true
null
15169
null
Default
null
null
null
{ "abstract": " The ability of the mammalian ear in processing high frequency sounds, up to\n$\\sim$100 kHz, is based on the capability of outer hair cells (OHCs) responding\nto stimulation at high frequencies. These cells show a unique motility in their\ncell body coupled with charge movement. With this motile element, voltage\nchanges generated by stimuli at their hair bundles drives the cell body and\nthat, in turn, amplifies the stimuli. In vitro experiments show that the\nmovement of these charges significantly increases the membrane capacitance,\nlimiting the motile activity by additionally attenuating voltage changes. It\nwas found, however, that such an effect is due to the absence of mechanical\nload. In the presence of mechanical resonance, such as in vivo conditions, the\nmovement of motile charges is expected to create negative capacitance near the\nresonance frequency. Therefore this motile mechanism is effective at high\nfrequencies.\n", "title": "Negative membrane capacitance of outer hair cells: electromechanical coupling near resonance" }
null
null
[ "Physics" ]
null
true
null
15170
null
Validated
null
null
null
{ "abstract": " We derive general expressions for resonant inelastic x-ray scattering (RIXS)\noperators for $t_{2g}$ orbital systems, which exhibit a rich array of\nunconventional magnetism arising from unquenched orbital moments. Within the\nfast collision approximation, which is valid especially for 4$d$ and 5$d$\ntransition metal compounds with short core-hole lifetimes, the RIXS operators\nare expressed in terms of total spin and orbital angular momenta of the\nconstituent ions. We then map these operators onto pseudospins that represent\nspin-orbit entangled magnetic moments in systems with strong spin-orbit\ncoupling. Applications of our theory to such systems as iridates and ruthenates\nare discussed, with a particular focus on compounds based on $d^4$ ions with\nVan Vleck-type nonmagnetic ground state.\n", "title": "Resonant inelastic x-ray scattering operators for $t_{2g}$ orbital systems" }
null
null
[ "Physics" ]
null
true
null
15171
null
Validated
null
null
null
{ "abstract": " In this paper, we investigate Hamiltonian path problem in the context of\nsplit graphs, and produce a dichotomy result on the complexity of the problem.\nOur main result is a deep investigation of the structure of $K_{1,4}$-free\nsplit graphs in the context of Hamiltonian path problem, and as a consequence,\nwe obtain a polynomial-time algorithm to the Hamiltonian path problem in\n$K_{1,4}$-free split graphs. We close this paper with the hardness result: we\nshow that, unless P=NP, Hamiltonian path problem is NP-complete in\n$K_{1,5}$-free split graphs by reducing from Hamiltonian cycle problem in\n$K_{1,5}$-free split graphs. Thus this paper establishes a \"thin complexity\nline\" separating NP-complete instances and polynomial-time solvable instances.\n", "title": "Hamiltonian Path in Split Graphs- a Dichotomy" }
null
null
[ "Computer Science" ]
null
true
null
15172
null
Validated
null
null
null
{ "abstract": " We address the task of ranking objects (such as people, blogs, or verticals)\nthat, unlike documents, do not have direct term-based representations. To be\nable to match them against keyword queries, evidence needs to be amassed from\ndocuments that are associated with the given object. We present two design\npatterns, i.e., general reusable retrieval strategies, which are able to\nencompass most existing approaches from the past. One strategy combines\nevidence on the term level (early fusion), while the other does it on the\ndocument level (late fusion). We demonstrate the generality of these patterns\nby applying them to three different object retrieval tasks: expert finding,\nblog distillation, and vertical ranking.\n", "title": "Design Patterns for Fusion-Based Object Retrieval" }
null
null
[ "Computer Science" ]
null
true
null
15173
null
Validated
null
null
null
{ "abstract": " Multiple design iterations are inevitable in nanometer Integrated Circuit\n(IC) design flow until desired printability and performance metrics are\nachieved. This starts with placement optimization aimed at improving\nroutability, wirelength, congestion and timing in the design. Contrarily, no\nsuch practice exists on a floorplanned layout, during the early stage of the\ndesign flow. Recently, STAIRoute \\cite{karb2} aimed to address that by\nidentifying the shortest routing path of a net through a set of routing regions\nin the floorplan in multiple metal layers. Since the blocks in hierarchical\nASIC/SoC designs do not use all the permissible routing layers for the internal\nrouting corresponding to standard cell connectivity, the proposed STAIRoute\nframework is not an effective for early global routability assessment. This\nleads to improper utilization of routing area, specifically in higher routing\nlayers with fewer routing blockages, as the lack of placement of standard cells\ndoes not facilitates any routing of their interconnections.\nThis paper presents a generalized model for early global routability\nassessment, HGR, by utilizing the free regions over the blocks beyond certain\nmetal layers. The proposed (hybrid) routing model comprises of (a) the junction\ngraph model in STAIRoute routing through the block boundary regions in lower\nrouting layers, and (ii) the grid graph model for routing in higher layers over\nthe free regions of the blocks.\nExperiment with the latest floorplanning benchmarks exhibit an average\nreduction of $4\\%$, $54\\%$ and $70\\%$ in netlength, via count, and congestion\nrespectively when HGR is used over STAIRoute. 
Further, we conducted another\nexperiment on an industrial design flow targeted for $45nm$ process, and the\nresults are encouraging with $~3$X runtime boost when early global routing is\nused in conjunction with the existing physical design flow.\n", "title": "Early Routability Assessment in VLSI Floorplans: A Generalized Routing Model" }
null
null
null
null
true
null
15174
null
Default
null
null
null
{ "abstract": " We consider the inverse Ising problem, i.e. the inference of network\ncouplings from observed spin trajectories for a model with continuous time\nGlauber dynamics. By introducing two sets of auxiliary latent random variables\nwe render the likelihood into a form, which allows for simple iterative\ninference algorithms with analytical updates. The variables are: (1) Poisson\nvariables to linearise an exponential term which is typical for point process\nlikelihoods and (2) Pólya-Gamma variables, which make the likelihood\nquadratic in the coupling parameters. Using the augmented likelihood, we derive\nan expectation-maximization (EM) algorithm to obtain the maximum likelihood\nestimate of network parameters. Using a third set of latent variables we extend\nthe EM algorithm to sparse couplings via L1 regularization. Finally, we develop\nan efficient approximate Bayesian inference algorithm using a variational\napproach. We demonstrate the performance of our algorithms on data simulated\nfrom an Ising model. For data which are simulated from a more biologically\nplausible network with spiking neurons, we show that the Ising model captures\nwell the low order statistics of the data and how the Ising couplings are\nrelated to the underlying synaptic structure of the simulated network.\n", "title": "Inverse Ising problem in continuous time: A latent variable approach" }
null
null
null
null
true
null
15175
null
Default
null
null
null
{ "abstract": " We studied the emergence process of 42 active region (ARs) by analyzing the\ntime derivative, R(t), of the total unsigned flux. Line-of-sight magnetograms\nacquired by the Helioseismic and Magnetic Imager (HMI) onboard the Solar\nDynamics Observatory (SDO) were used. A continuous piecewise linear fitting to\nthe R(t)-profile was applied to detect an interval, dt_2, of nearly-constant\nR(t) covering one or several local maxima. The averaged over dt_2 magnitude of\nR(t) was accepted as an estimate of the maximal value of the flux growth rate,\nR_MAX, which varies in a range of (0.5-5)x10^20 Mx hour^-1 for active regions\nwith the maximal total unsigned flux of (0.5-3)x10^22 Mx. The normalized flux\ngrowth rate, R_N, was defined under an assumption that the saturated total\nunsigned flux, F_MAX, equals unity. Out of 42 ARs in our initial list, 36 event\nwere successfully fitted and they form two subsets (with a small overlap of 8\nevents): the ARs with a short (<13 hours) interval dt_2 and a high (>0.024\nhour^-1) normalized flux emergence rate, R_N, form the \"rapid\" emergence event\nsubset. The second subset consists of \"gradual\" emergence events and it is\ncharacterized by a long (>13 hours) interval dt_2 and a low R_N (<0.024\nhour^-1). In diagrams of R_MAX plotted versus F_MAX, the events from different\nsubsets are not overlapped and each subset displays an individual power law.\nThe power law index derived from the entire ensemble of 36 events is\n0.69+-0.10. The \"rapid\" emergence is consistent with a \"two-step\" emergence\nprocess of a single twisted flux tube. The \"gradual\" emergence is possibly\nrelated to a consecutive rising of several flux tubes emerging at nearly the\nsame location in the photosphere.\n", "title": "Analysis of the flux growth rate in emerging active regions on the Sun" }
null
null
null
null
true
null
15176
null
Default
null
null
null
{ "abstract": " Let $f$ be a Lipschitz map from a subset $A$ of a stratified group to a\nBanach homogeneous group. We show that directional derivatives of $f$ act as\nhomogeneous homomorphisms at density points of $A$ outside a $\\sigma$-porous\nset. At density points of $A$ we establish a pointwise characterization of\ndifferentiability in terms of directional derivatives. We use these new results\nto obtain an alternate proof of almost everywhere differentiability of\nLipschitz maps from subsets of stratified groups to Banach homogeneous groups\nsatisfying a suitably weakened Radon-Nikodym property. As a consequence we also\nget an alternative proof of Pansu's Theorem.\n", "title": "Porosity and Differentiability of Lipschitz Maps from Stratified Groups to Banach Homogeneous Groups" }
null
null
null
null
true
null
15177
null
Default
null
null
null
{ "abstract": " Sleep plays a vital role in human health, both mental and physical. Sleep\ndisorders like sleep apnea are increasing in prevalence, with the rapid\nincrease in factors like obesity. Sleep apnea is most commonly treated with\nContinuous Positive Air Pressure (CPAP) therapy. Presently, however, there is\nno mechanism to monitor a patient's progress with CPAP. Accurate detection of\nsleep stages from CPAP flow signal is crucial for such a mechanism. We propose,\nfor the first time, an automated sleep staging model based only on the flow\nsignal. Deep neural networks have recently shown high accuracy on sleep staging\nby eliminating handcrafted features. However, these methods focus exclusively\non extracting informative features from the input signal, without paying much\nattention to the dynamics of sleep stages in the output sequence. We propose an\nend-to-end framework that uses a combination of deep convolution and recurrent\nneural networks to extract high-level features from raw flow signal with a\nstructured output layer based on a conditional random field to model the\ntemporal transition structure of the sleep stages. We improve upon the previous\nmethods by 10% using our model, that can be augmented to the previous sleep\nstaging deep learning methods. We also show that our method can be used to\naccurately track sleep metrics like sleep efficiency calculated from sleep\nstages that can be deployed for monitoring the response of CPAP therapy on\nsleep apnea patients. Apart from the technical contributions, we expect this\nstudy to motivate new research questions in sleep science.\n", "title": "A Structured Learning Approach with Neural Conditional Random Fields for Sleep Staging" }
null
null
null
null
true
null
15178
null
Default
null
null
null
{ "abstract": " We study which algebras have tilting modules that are both generated and\ncogenerated by projective-injective modules. Crawley-Boevey and Sauter have\nshown that Auslander algebras have such tilting modules; and for algebras of\nglobal dimension $2$, Auslander algebras are classified by the existence of\nsuch tilting modules.\nIn this paper, we show that the existence of such a tilting module is\nequivalent to the algebra having dominant dimension at least $2$, independent\nof its global dimension. In general such a tilting module is not necessarily\ncotilting. Here, we show that the algebras which have a tilting-cotilting\nmodule generated-cogenerated by projective-injective modules are precisely\n$1$-Auslander-Gorenstein algebras.\nWhen considering such a tilting module, without the assumption that it is\ncotilting, we study the global dimension of its endomorphism algebra, and\ndiscuss a connection with the Finitistic Dimension Conjecture. Furthermore, as\nspecial cases, we show that triangular matrix algebras obtained from Auslander\nalgebras and certain injective modules, have such a tilting module. We also\ngive a description of which Nakayama algebras have such a tilting module.\n", "title": "Dominant dimension and tilting modules" }
null
null
null
null
true
null
15179
null
Default
null
null
null
{ "abstract": " Deep neural network models used for medical image segmentation are large\nbecause they are trained with high-resolution three-dimensional (3D) images.\nGraphics processing units (GPUs) are widely used to accelerate the trainings.\nHowever, the memory on a GPU is not large enough to train the models. A popular\napproach to tackling this problem is patch-based method, which divides a large\nimage into small patches and trains the models with these small patches.\nHowever, this method would degrade the segmentation quality if a target object\nspans multiple patches. In this paper, we propose a novel approach for 3D\nmedical image segmentation that utilizes the data-swapping, which swaps out\nintermediate data from GPU memory to CPU memory to enlarge the effective GPU\nmemory size, for training high-resolution 3D medical images without patching.\nWe carefully tuned parameters in the data-swapping method to obtain the best\ntraining performance for 3D U-Net, a widely used deep neural network model for\nmedical image segmentation. We applied our tuning to train 3D U-Net with\nfull-size images of 192 x 192 x 192 voxels in brain tumor dataset. As a result,\ncommunication overhead, which is the most important issue, was reduced by\n17.1%. Compared with the patch-based method for patches of 128 x 128 x 128\nvoxels, our training for full-size images achieved improvement on the mean Dice\nscore by 4.48% and 5.32 % for detecting whole tumor sub-region and tumor core\nsub-region, respectively. The total training time was reduced from 164 hours to\n47 hours, resulting in 3.53 times of acceleration.\n", "title": "Fast and Accurate 3D Medical Image Segmentation with Data-swapping Method" }
null
null
null
null
true
null
15180
null
Default
null
null
null
{ "abstract": " Gaining a detailed understanding of water transport behavior through\nultra-thin polymer membranes is increasingly becoming necessary due to the\nrecent interest in exploring applications such as water desalination using\nnanoporous membranes. Current techniques only measure bulk water transport\nrates and do not offer direct visualization of water transport which can\nprovide insights into the microscopic mechanisms affecting bulk behavior such\nas the role of defects. We describe the use of a technique, referred here as\nBright-Field Nanoscopy (BFN) to directly image the transport of water across\nthin polymer films using a regular bright-field microscope. The technique\nexploits the strong thickness dependent color response of an optical stack\nconsisting of a thin (~25 nm) germanium film deposited over a gold substrate.\nUsing this technique, we were able to observe the strong influence of the\nterminal layer and ambient conditions on the bulk water transport rates in thin\n(~ 20 nm) layer-by-layer deposited multilayer films of weak polyelectrolytes\n(PEMs).\n", "title": "Direct Optical Visualization of Water Transport across Polymer Nano-films" }
null
null
null
null
true
null
15181
null
Default
null
null
null
{ "abstract": " Recently, the authors and de Wolff introduced the imaginary projection of a\npolynomial $f\\in\\mathbb{C}[\\mathbf{z}]$ as the projection of the variety of $f$\nonto its imaginary part, $\\mathcal{I}(f) \\ = \\ \\{\\text{Im}(\\mathbf{z}) \\, : \\,\n\\mathbf{z} \\in \\mathcal{V}(f) \\}$. Since a polynomial $f$ is stable if and only\nif $\\mathcal{I}(f) \\cap \\mathbb{R}_{>0}^n \\ = \\ \\emptyset$, the notion offers a\nnovel geometric view underlying stability questions of polynomials. In this\narticle, we study the relation between the imaginary projections and\nhyperbolicity cones, where the latter ones are only defined for homogeneous\npolynomials. Building upon this, for homogeneous polynomials we provide a tight\nupper bound for the number of components in the complement $\\mathcal{I}(f)^{c}$\nand thus for the number of hyperbolicity cones of $f$. And we show that for $n\n\\ge 2$, a polynomial $f$ in $n$ variables can have an arbitrarily high number\nof strictly convex and bounded components in $\\mathcal{I}(f)^{c}$.\n", "title": "Hyperbolicity cones and imaginary projections" }
null
null
null
null
true
null
15182
null
Default
null
null
null
{ "abstract": " Fairness-aware classification is receiving increasing attention in the\nmachine learning fields. Recently research proposes to formulate the\nfairness-aware classification as constrained optimization problems. However,\nseveral limitations exist in previous works due to the lack of a theoretical\nframework for guiding the formulation. In this paper, we propose a general\nframework for learning fair classifiers which addresses previous limitations.\nThe framework formulates various commonly-used fairness metrics as convex\nconstraints that can be directly incorporated into classic classification\nmodels. Within the framework, we propose a constraint-free criterion on the\ntraining data which ensures that any classifier learned from the data is fair.\nWe also derive the constraints which ensure that the real fairness metric is\nsatisfied when surrogate functions are used to achieve convexity. Our framework\ncan be used to for formulating fairness-aware classification with fairness\nguarantee and computational efficiency. The experiments using real-world\ndatasets demonstrate our theoretical results and show the effectiveness of\nproposed framework and methods.\n", "title": "Fairness-aware Classification: Criterion, Convexity, and Bounds" }
null
null
null
null
true
null
15183
null
Default
null
null
null
{ "abstract": " Community identification in a network is an important problem in fields such\nas social science, neuroscience, and genetics. Over the past decade, stochastic\nblock models (SBMs) have emerged as a popular statistical framework for this\nproblem. However, SBMs have an important limitation in that they are suited\nonly for networks with unweighted edges; in various scientific applications,\ndisregarding the edge weights may result in a loss of valuable information. We\nstudy a weighted generalization of the SBM, in which observations are collected\nin the form of a weighted adjacency matrix and the weight of each edge is\ngenerated independently from an unknown probability density determined by the\ncommunity membership of its endpoints. We characterize the optimal rate of\nmisclustering error of the weighted SBM in terms of the Renyi divergence of\norder 1/2 between the weight distributions of within-community and\nbetween-community edges, substantially generalizing existing results for\nunweighted SBMs. Furthermore, we present a computationally tractable algorithm\nbased on discretization that achieves the optimal error rate. Our method is\nadaptive in the sense that the algorithm, without assuming knowledge of the\nweight densities, performs as well as the best algorithm that knows the weight\ndensities.\n", "title": "Optimal Rates for Community Estimation in the Weighted Stochastic Block Model" }
null
null
null
null
true
null
15184
null
Default
null
null
null
{ "abstract": " State-of-the-art static analysis tools for verifying finite-precision code\ncompute worst-case absolute error bounds on numerical errors. These are,\nhowever, often not a good estimate of accuracy as they do not take into account\nthe magnitude of the computed values. Relative errors, which compute errors\nrelative to the value's magnitude, are thus preferable. While today's tools do\nreport relative error bounds, these are merely computed via absolute errors and\nthus not necessarily tight or more informative. Furthermore, whenever the\ncomputed value is close to zero on part of the domain, the tools do not report\nany relative error estimate at all. Surprisingly, the quality of relative error\nbounds computed by today's tools has not been systematically studied or\nreported to date. In this paper, we investigate how state-of-the-art static\ntechniques for computing sound absolute error bounds can be used, extended and\ncombined for the computation of relative errors. Our experiments on a standard\nbenchmark set show that computing relative errors directly, as opposed to via\nabsolute errors, is often beneficial and can provide error estimates up to six\norders of magnitude tighter, i.e. more accurate. We also show that interval\nsubdivision, another commonly used technique to reduce over-approximations, has\nless benefit when computing relative errors directly, but it can help to\nalleviate the effects of the inherent issue of relative error estimates close\nto zero.\n", "title": "On Sound Relative Error Bounds for Floating-Point Arithmetic" }
null
null
null
null
true
null
15185
null
Default
null
null
null
{ "abstract": " Application of NaI(Tl) detectors in the search for galactic dark matter\nparticles through their elastic scattering off the target nuclei is well\nmotivated because of the long standing DAMA/LIBRA highly significant positive\nresult on annual modulation, still requiring confirmation. For such a goal, it\nis mandatory to reach very low threshold in energy (at or below the keV level),\nvery low radioactive background (at a few counts/keV/kg/day), and high\ndetection mass (at or above the 100 kg scale). One of the most relevant\ntechnical issues is the optimization of the crystal intrinsic scintillation\nlight yield and the efficiency of the light collecting system for large mass\ncrystals. In the frame of the ANAIS (Annual modulation with NaI Scintillators)\ndark matter search project large NaI(Tl) crystals from different providers\ncoupled to two photomultiplier tubes (PMTs) have been tested at the Canfranc\nUnderground Laboratory. In this paper we present the estimates of the NaI(Tl)\nscintillation light collected using full-absorption peaks at very low energy\nfrom external and internal sources emitting gammas/electrons, and\nsingle-photoelectron events populations selected by using very low energy\npulses tails. Outstanding scintillation light collection at the level of\n15~photoelectrons/keV can be reported for the final design and provider chosen\nfor ANAIS detectors. Taking into account the Quantum Efficiency of the PMT\nunits used, the intrinsic scintillation light yield in these NaI(Tl) crystals\nis above 40~photoelectrons/keV for energy depositions in the range from 3 up to\n25~keV. This very high light output of ANAIS crystals allows triggering below\n1~keV, which is very important in order to increase the sensitivity in the\ndirect detection of dark matter.\n", "title": "Light yield determination in large sodium iodide detectors applied in the search for dark matter" }
null
null
[ "Physics" ]
null
true
null
15186
null
Validated
null
null
null
{ "abstract": " The finite-difference time-domain (FDTD) method is a well established method\nfor solving the time evolution of Maxwell's equations. Unfortunately the scheme\nintroduces numerical dispersion and therefore phase and group velocities which\ndeviate from the correct values. The solution to Maxwell's equations in more\nthan one dimension results in non-physical predictions such as numerical\ndispersion or numerical Cherenkov radiation emitted by a relativistic electron\nbeam propagating in vacuum.\nImproved solvers, which keep the staggered Yee-type grid for electric and\nmagnetic fields, generally modify the spatial derivative operator in the\nMaxwell-Faraday equation by increasing the computational stencil. These\nmodified solvers can be characterized by different sets of coefficients,\nleading to different dispersion properties. In this work we introduce a norm\nfunction to rewrite the choice of coefficients into a minimization problem. We\nsolve this problem numerically and show that the minimization procedure leads\nto phase and group velocities that are considerably closer to $c$ as compared\nto schemes with manually set coefficients available in the literature.\nDepending on a specific problem at hand (e.g. electron beam propagation in\nplasma, high-order harmonic generation from plasma surfaces, etc), the norm\nfunction can be chosen accordingly, for example, to minimize the numerical\ndispersion in a certain given propagation direction. Particle-in-cell\nsimulations of an electron beam propagating in vacuum using our solver are\nprovided.\n", "title": "A Systematic Approach to Numerical Dispersion in Maxwell Solvers" }
null
null
null
null
true
null
15187
null
Default
null
null
null
{ "abstract": " This work considers a stochastic Nash game in which each player solves a\nparameterized stochastic optimization problem. In deterministic regimes,\nbest-response schemes have been shown to be convergent under a suitable\nspectral property associated with the proximal best-response map. However, a\ndirect application of this scheme to stochastic settings requires obtaining\nexact solutions to stochastic optimization at each iteration. Instead, we\npropose an inexact generalization in which an inexact solution is computed via\nan increasing number of projected stochastic gradient steps. Based on this\nframework, we present three inexact best-response schemes: (i) First, we\npropose a synchronous scheme where all players simultaneously update their\nstrategies; (ii) Subsequently, we extend this to a randomized setting where a\nsubset of players is randomly chosen to their update strategies while the\nothers keep their strategies invariant; (iii) Finally, we propose an\nasynchronous scheme, where each player determines its own update frequency and\nmay use outdated rival-specific data in updating its strategy. Under a suitable\ncontractive property of the proximal best-response map, we derive a.s.\nconvergence of the iterates for (i) and (ii) and mean-convergence for (i) --\n(iii). In addition, we show that for (i) -- (iii), the iterates converge to the\nunique equilibrium in mean at a prescribed linear rate. Finally, we establish\nthe overall iteration complexity in terms of projected stochastic gradient\nsteps for computing an $\\epsilon-$Nash equilibrium and in all settings, the\niteration complexity is ${\\cal O}(1/\\epsilon^{2(1+c) + \\delta})$ where $c = 0$\nin the context of (i) and represents the positive cost of randomization (in\n(ii)) and asynchronicity and delay (in (iii)). 
The schemes are further extended\nto linear and quadratic recourse-based stochastic Nash games.\n", "title": "On Synchronous, Asynchronous, and Randomized Best-Response schemes for computing equilibria in Stochastic Nash games" }
null
null
null
null
true
null
15188
null
Default
null
null
null
{ "abstract": " In algebraic terms, the insertion of $n$-powers in words may be modelled at\nthe language level by considering the pseudovariety of ordered monoids defined\nby the inequality $1\\le x^n$. We compare this pseudovariety with several other\nnatural pseudovarieties of ordered monoids and of monoids associated with the\nBurnside pseudovariety of groups defined by the identity $x^n=1$. In\nparticular, we are interested in determining the pseudovariety of monoids that\nit generates, which can be viewed as the problem of determining the Boolean\nclosure of the class of regular languages closed under $n$-power insertions. We\nexhibit a simple upper bound and show that it satisfies all pseudoidentities\nwhich are provable from $1\\le x^n$ in which both sides are regular elements\nwith respect to the upper bound.\n", "title": "On the insertion of n-powers" }
null
null
null
null
true
null
15189
null
Default
null
null
null
{ "abstract": " The origin and nature of extreme energy cosmic rays (EECRs), which have\nenergies above the 50 EeV, the Greisen-Zatsepin-Kuzmin (GZK) energy limit, is\none of the most interesting and complicated problems in modern cosmic-ray\nphysics. Existing ground-based detectors have helped to obtain remarkable\nresults in studying cosmic rays before and after the GZK limit, but have also\nproduced some contradictions in our understanding of cosmic ray mass\ncomposition. Moreover, each of these detectors covers only a part of the\ncelestial sphere, which poses problems for studying the arrival directions of\nEECRs and identifying their sources. As a new generation of EECR space\ndetectors, TUS (Tracking Ultraviolet Set-up), KLYPVE and JEM-EUSO, are intended\nto study the most energetic cosmic-ray particles, providing larger, uniform\nexposures of the entire celestial sphere. The TUS detector, launched on board\nthe Lomonosov satellite on April 28, 2016, from Vostochny Cosmodrome in Russia,\nis the first of these. It employs a single-mirror optical system and a\nphotomultiplier tube matrix as a photo-detector and will test the fluorescent\nmethod of measuring EECRs from space. Utilizing the Earth's atmosphere as a\nhuge calorimeter, it is expected to detect EECRs with energies above 100 EeV.\nIt will also be able to register slower atmospheric transient events:\natmospheric fluorescence in electrical discharges of various types including\nprecipitating electrons escaping the magnetosphere and from the radiation of\nmeteors passing through the atmosphere. We describe the design of the TUS\ndetector and present results of different ground-based tests and simulations.\n", "title": "The TUS detector of extreme energy cosmic rays on board the Lomonosov satellite" }
null
null
null
null
true
null
15190
null
Default
null
null
null
{ "abstract": " We propose a precise ellipsometric method for the investigation of coherent\nlight with a small ellipticity. The main feature of this method is the use of\ncompensators with phase delays providing the maximum accuracy of measurements\nfor the selected range of ellipticities and taking into account the\ninterference of multiple reflections of coherent light. The relative error of\nthe ellipticity measurement in the range of mesurement does not exceed 0.02.\n", "title": "New ellipsometric approach for determining small light ellipticities" }
null
null
[ "Physics" ]
null
true
null
15191
null
Validated
null
null
null
{ "abstract": " Single-user multiple-input / multiple-output (SU-MIMO) communication systems\nhave been successfully used over the years and have provided a significant\nincrease on a wireless link's capacity by enabling the transmission of multiple\ndata streams. Assuming channel knowledge at the transmitter, the maximization\nof the mutual information of a MIMO link is achieved by finding the optimal\npower allocation under a given sum-power constraint, which is in turn obtained\nby the water-filling (WF) algorithm. However, in spectrum sharing setups, such\nas Licensed Shared Access (LSA), where a primary link (PL) and a secondary link\n(SL) coexist, the power transmitted by the SL transmitter may induce harmful\ninterference to the PL receiver. While such co-existing links have been\nconsidered extensively in various spectrum sharing setups, the mutual\ninformation of the SL under a constraint on the interference it may cause to\nthe PL receiver has, quite astonishingly, not been evaluated so far. In this\npaper, we solve this problem, find its unique optimal solution and provide the\npower allocation policy and corresponding precoding solution that achieves the\noptimal capacity under the imposed constraint. The performance of the optimal\nsolution and the penalty due to the interference constraint are evaluated over\nsome indicative Rayleigh fading channel conditions and interference thresholds.\nWe believe that the obtained results are of general nature and that they may\napply, beyond spectrum sharing, to a variety of applications that admit a\nsimilar setup.\n", "title": "Maximizing the Mutual Information of Multi-Antenna Links Under an Interfered Receiver Power Constraint" }
null
null
null
null
true
null
15192
null
Default
null
null
null
{ "abstract": " Bistability and multistationarity are properties of reaction networks linked\nto switch-like responses and connected to cell memory and cell decision making.\nDetermining whether and when a network exhibits bistability is a hard and open\nmathematical problem. One successful strategy consists of analyzing small\nnetworks and deducing that some of the properties are preserved upon passage to\nthe full network. Motivated by this we study chemical reaction networks with\nfew chemical complexes. Under mass-action kinetics the steady states of these\nnetworks are described by fewnomial systems, that is polynomial systems having\nfew distinct monomials. Such systems of polynomials are often studied in real\nalgebraic geometry by the use of Gale dual systems. Using this Gale duality we\ngive precise conditions in terms of the reaction rate constants for the number\nand stability of the steady states of families of reaction networks with one\nnon-flow reaction.\n", "title": "Multistationarity and Bistability for Fewnomial Chemical Reaction Networks" }
null
null
null
null
true
null
15193
null
Default
null
null
null
{ "abstract": " Background: In silico drug-target interaction (DTI) prediction plays an\nintegral role in drug repositioning: the discovery of new uses for existing\ndrugs. One popular method of drug repositioning is network-based DTI\nprediction, which uses complex network theory to predict DTIs from a\ndrug-target network. Currently, most network-based DTI prediction is based on\nmachine learning methods such as Restricted Boltzmann Machines (RBM) or Support\nVector Machines (SVM). These methods require additional information about the\ncharacteristics of drugs, targets and DTIs, such as chemical structure, genome\nsequence, binding types, causes of interactions, etc., and do not perform\nsatisfactorily when such information is unavailable. We propose a new,\nalternative method for DTI prediction that makes use of only network topology\ninformation attempting to solve this problem.\nResults: We compare our method for DTI prediction against the well-known RBM\napproach. We show that when applied to the MATADOR database, our approach based\non node neighborhoods yield higher precision for high-ranking predictions than\nRBM when no information regarding DTI types is available.\nConclusion: This demonstrates that approaches purely based on network\ntopology provide a more suitable approach to DTI prediction in the many\nreal-life situations where little or no prior knowledge is available about the\ncharacteristics of drugs, targets, or their interactions.\n", "title": "Erratum: Link prediction in drug-target interactions network using similarity indices" }
null
null
null
null
true
null
15194
null
Default
null
null
null
{ "abstract": " Penalty-based variable selection methods are powerful in selecting relevant\ncovariates and estimating coefficients simultaneously. However, variable\nselection could fail to be consistent when covariates are highly correlated.\nThe partial correlation approach has been adopted to solve the problem with\ncorrelated covariates. Nevertheless, the restrictive range of partial\ncorrelation is not effective for capturing signal strength for relevant\ncovariates. In this paper, we propose a new Semi-standard PArtial Covariance\n(SPAC) which is able to reduce correlation effects from other predictors while\nincorporating the magnitude of coefficients. The proposed SPAC variable\nselection facilitates choosing covariates which have direct association with\nthe response variable, via utilizing dependency among covariates. We show that\nthe proposed method with the Lasso penalty (SPAC-Lasso) enjoys strong sign\nconsistency in both finite-dimensional and high-dimensional settings under\nregularity conditions. Simulation studies and the `HapMap' gene data\napplication show that the proposed method outperforms the traditional Lasso,\nadaptive Lasso, SCAD, and Peter-Clark-simple (PC-simple) methods for highly\ncorrelated predictors.\n", "title": "Variable Selection for Highly Correlated Predictors" }
null
null
null
null
true
null
15195
null
Default
null
null
null
{ "abstract": " In the last three decades, we have seen a significant increase in trading\ngoods and services through online auctions. However, this business created an\nattractive environment for malicious moneymakers who can commit different types\nof fraud activities, such as Shill Bidding (SB). The latter is predominant\nacross many auctions but this type of fraud is difficult to detect due to its\nsimilarity to normal bidding behaviour. The unavailability of SB datasets makes\nthe development of SB detection and classification models burdensome.\nFurthermore, to implement efficient SB detection models, we should produce SB\ndata from actual auctions of commercial sites. In this study, we first scraped\na large number of eBay auctions of a popular product. After preprocessing the\nraw auction data, we build a high-quality SB dataset based on the most reliable\nSB strategies. The aim of our research is to share the preprocessed auction\ndataset as well as the SB training (unlabelled) dataset, thereby researchers\ncan apply various machine learning techniques by using authentic data of\nauctions and fraud.\n", "title": "Scraping and Preprocessing Commercial Auction Data for Fraud Classification" }
null
null
[ "Statistics" ]
null
true
null
15196
null
Validated
null
null
null
{ "abstract": " The impact of the maximally possible batch size (for the better runtime) on\nperformance of graphic processing units (GPU) and tensor processing units (TPU)\nduring training and inference phases is investigated. The numerous runs of the\nselected deep neural network (DNN) were performed on the standard MNIST and\nFashion-MNIST datasets. The significant speedup was obtained even for extremely\nlow-scale usage of Google TPUv2 units (8 cores only) in comparison to the quite\npowerful GPU NVIDIA Tesla K80 card with the speedup up to 10x for training\nstage (without taking into account the overheads) and speedup up to 2x for\nprediction stage (with and without taking into account overheads). The precise\nspeedup values depend on the utilization level of TPUv2 units and increase with\nthe increase of the data volume under processing, but for the datasets used in\nthis work (MNIST and Fashion-MNIST with images of sizes 28x28) the speedup was\nobserved for batch sizes >512 images for training phase and >40 000 images for\nprediction phase. It should be noted that these results were obtained without\ndetriment to the prediction accuracy and loss that were equal for both GPU and\nTPU runs up to the 3rd significant digit for MNIST dataset, and up to the 2nd\nsignificant digit for Fashion-MNIST dataset.\n", "title": "Batch Size Influence on Performance of Graphic and Tensor Processing Units during Training and Inference Phases" }
null
null
[ "Computer Science" ]
null
true
null
15197
null
Validated
null
null
null
{ "abstract": " We consider composite-composite testing problems for the expectation in the\nGaussian sequence model where the null hypothesis corresponds to a convex\nsubset $\\mathcal{C}$ of $\\mathbb{R}^d$. We adopt a minimax point of view and\nour primary objective is to describe the smallest Euclidean distance between\nthe null and alternative hypotheses such that there is a test with small total\nerror probability. In particular, we focus on the dependence of this distance\non the dimension $d$ and the sample size/variance parameter $n$ giving rise to\nthe minimax separation rate. In this paper we discuss lower and upper bounds on\nthis rate for different smooth and non- smooth choices for $\\mathcal{C}$.\n", "title": "Minimax Euclidean Separation Rates for Testing Convex Hypotheses in $\\mathbb{R}^d$" }
null
null
null
null
true
null
15198
null
Default
null
null
null
{ "abstract": " A data-based policy for iterative control task is presented. The proposed\nstrategy is model-free and can be applied whenever safe input and state\ntrajectories of a system performing an iterative task are available. These\ntrajectories, together with a user-defined cost function, are exploited to\nconstruct a piecewise affine approximation to the value function. Approximated\nvalue functions are then used to evaluate the control policy by solving a\nlinear program. We show that for linear system subject to convex cost and\nconstraints, the proposed strategy guarantees closed-loop constraint\nsatisfaction and performance bounds on the closed-loop trajectory. We evaluate\nthe proposed strategy in simulations and experiments, the latter carried out on\nthe Berkeley Autonomous Race Car (BARC) platform. We show that the proposed\nstrategy is able to reduce the computation time by one order of magnitude while\nachieving the same performance as our model-based control algorithm.\n", "title": "Simple Policy Evaluation for Data-Rich Iterative Tasks" }
null
null
null
null
true
null
15199
null
Default
null
null
null
{ "abstract": " In this paper we investigate the metric properties of quadrics and cones of\nthe $n$-dimensional Euclidean space. As applications of our formulas we give a\nmore detailed description of the construction of Chasles and the wire model of\nStaude, respectively.\n", "title": "Proper quadrics in the Euclidean $n$-space" }
null
null
null
null
true
null
15200
null
Default
null
null