text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool) | explanation (null) | id (string, 1-5 chars) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
null | {
"abstract": " The topological morphology--order of zeros at the positions of electrons with\nrespect to a specific electron--of Laughlin state at filling fractions $1/m$\n($m$ odd) is homogeneous as every electron feels zeros of order $m$ at the\npositions of other electrons. Although fairly accurate ground state wave\nfunctions for most of the other quantum Hall states in the lowest Landau level\nare quite well-known, it had been an open problem in expressing the ground\nstate wave functions in terms of flux-attachment to particles, {\\em a la}, this\nmorphology of Laughlin state. With a very general consideration of\nflux-particle relations only, in spherical geometry, we here report a novel\nmethod for determining morphologies of these states. Based on these, we\nconstruct almost exact ground state wave-functions for the Coulomb interaction.\nAlthough the form of interaction may change the ground state wave-function, the\nsame morphology constructs the latter irrespective of the nature of the\ninteraction between electrons.\n",
"title": "Jastrow form of the Ground State Wave Functions for Fractional Quantum Hall States"
} | null | null | null | null | true | null | 201 | null | Default | null | null |
null | {
"abstract": " The purpose of this paper is to formulate and study a common refinement of a\nversion of Stark's conjecture and its $p$-adic analogue, in terms of Fontaine's\n$p$-adic period ring and $p$-adic Hodge theory. We construct period-ring-valued\nfunctions under a generalization of Yoshida's conjecture on the transcendental\nparts of CM-periods. Then we conjecture a reciprocity law on their special\nvalues concerning the absolute Frobenius action. We show that our conjecture\nimplies a part of Stark's conjecture when the base field is an arbitrary real\nfield and the splitting place is its real place. It also implies a refinement\nof the Gross-Stark conjecture under a certain assumption. When the base field\nis the rational number field, our conjecture follows from Coleman's formula on\nFermat curves. We also prove some partial results in other cases.\n",
"title": "On a common refinement of Stark units and Gross-Stark units"
} | null | null | null | null | true | null | 202 | null | Default | null | null |
null | {
"abstract": " This paper considers the problem of autonomous multi-agent cooperative target\nsearch in an unknown environment using a decentralized framework under a\nno-communication scenario. The targets are considered as static targets and the\nagents are considered to be homogeneous. The no-communication scenario\ntranslates as the agents do not exchange either the information about the\nenvironment or their actions among themselves. We propose an integrated\ndecision and control theoretic solution for a search problem which generates\nfeasible agent trajectories. In particular, a perception based algorithm is\nproposed which allows an agent to estimate the probable strategies of other\nagents' and to choose a decision based on such estimation. The algorithm shows\nrobustness with respect to the estimation accuracy to a certain degree. The\nperformance of the algorithm is compared with random strategies and numerical\nsimulation shows considerable advantages.\n",
"title": "An Integrated Decision and Control Theoretic Solution to Multi-Agent Co-Operative Search Problems"
} | null | null | null | null | true | null | 203 | null | Default | null | null |
null | {
"abstract": " This study explores the validity of chain effects of clean water, which are\nknown as the \"Mills-Reincke phenomenon,\" in early twentieth-century Japan.\nRecent studies have reported that water purifications systems are responsible\nfor huge contributions to human capital. Although a few studies have\ninvestigated the short-term effects of water-supply systems in pre-war Japan,\nlittle is known about the benefits associated with these systems. By analyzing\ncity-level cause-specific mortality data from the years 1922-1940, we found\nthat eliminating typhoid fever infections decreased the risk of deaths due to\nnon-waterborne diseases. Our estimates show that for one additional typhoid\ndeath, there were approximately one to three deaths due to other causes, such\nas tuberculosis and pneumonia. This suggests that the observed Mills-Reincke\nphenomenon could have resulted from the prevention typhoid fever in a\npreviously-developing Asian country.\n",
"title": "Chain effects of clean water: The Mills-Reincke phenomenon in early twentieth-century Japan"
} | null | null | null | null | true | null | 204 | null | Default | null | null |
null | {
"abstract": " Developing neural network image classification models often requires\nsignificant architecture engineering. In this paper, we study a method to learn\nthe model architectures directly on the dataset of interest. As this approach\nis expensive when the dataset is large, we propose to search for an\narchitectural building block on a small dataset and then transfer the block to\na larger dataset. The key contribution of this work is the design of a new\nsearch space (the \"NASNet search space\") which enables transferability. In our\nexperiments, we search for the best convolutional layer (or \"cell\") on the\nCIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking\ntogether more copies of this cell, each with their own parameters to design a\nconvolutional architecture, named \"NASNet architecture\". We also introduce a\nnew regularization technique called ScheduledDropPath that significantly\nimproves generalization in the NASNet models. On CIFAR-10 itself, NASNet\nachieves 2.4% error rate, which is state-of-the-art. On ImageNet, NASNet\nachieves, among the published works, state-of-the-art accuracy of 82.7% top-1\nand 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than\nthe best human-invented architectures while having 9 billion fewer FLOPS - a\nreduction of 28% in computational demand from the previous state-of-the-art\nmodel. When evaluated at different levels of computational cost, accuracies of\nNASNets exceed those of the state-of-the-art human-designed models. For\ninstance, a small version of NASNet also achieves 74% top-1 accuracy, which is\n3.1% better than equivalently-sized, state-of-the-art models for mobile\nplatforms. Finally, the learned features by NASNet used with the Faster-RCNN\nframework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO\ndataset.\n",
"title": "Learning Transferable Architectures for Scalable Image Recognition"
} | null | null | [
"Computer Science"
]
| null | true | null | 205 | null | Validated | null | null |
null | {
"abstract": " We propose a new multi-frame method for efficiently computing scene flow\n(dense depth and optical flow) and camera ego-motion for a dynamic scene\nobserved from a moving stereo camera rig. Our technique also segments out\nmoving objects from the rigid scene. In our method, we first estimate the\ndisparity map and the 6-DOF camera motion using stereo matching and visual\nodometry. We then identify regions inconsistent with the estimated camera\nmotion and compute per-pixel optical flow only at these regions. This flow\nproposal is fused with the camera motion-based flow proposal using fusion moves\nto obtain the final optical flow and motion segmentation. This unified\nframework benefits all four tasks - stereo, optical flow, visual odometry and\nmotion segmentation leading to overall higher accuracy and efficiency. Our\nmethod is currently ranked third on the KITTI 2015 scene flow benchmark.\nFurthermore, our CPU implementation runs in 2-3 seconds per frame which is 1-3\norders of magnitude faster than the top six methods. We also report a thorough\nevaluation on challenging Sintel sequences with fast camera and object motion,\nwhere our method consistently outperforms OSF [Menze and Geiger, 2015], which\nis currently ranked second on the KITTI benchmark.\n",
"title": "Fast Multi-frame Stereo Scene Flow with Motion Segmentation"
} | null | null | null | null | true | null | 206 | null | Default | null | null |
null | {
"abstract": " Let $\\K$ be an algebraically closed field of positive characteristic $p$. We\nmainly classify pointed Hopf algebras over $\\K$ of dimension $p^2q$, $pq^2$ and\n$pqr$ where $p,q,r$ are distinct prime numbers. We obtain a complete\nclassification of such Hopf algebras except two subcases when they are not\ngenerated by the first terms of coradical filtration. In particular, we obtain\nmany new examples of non-commutative and non-cocommutative finite-dimensional\nHopf algebras.\n",
"title": "Pointed $p^2q$-dimensional Hopf algebras in positive characteristic"
} | null | null | null | null | true | null | 207 | null | Default | null | null |
null | {
"abstract": " We present the mixed Galerkin discretization of distributed parameter\nport-Hamiltonian systems. On the prototypical example of hyperbolic systems of\ntwo conservation laws in arbitrary spatial dimension, we derive the main\ncontributions: (i) A weak formulation of the underlying geometric\n(Stokes-Dirac) structure with a segmented boundary according to the causality\nof the boundary ports. (ii) The geometric approximation of the Stokes-Dirac\nstructure by a finite-dimensional Dirac structure is realized using a mixed\nGalerkin approach and power-preserving linear maps, which define minimal\ndiscrete power variables. (iii) With a consistent approximation of the\nHamiltonian, we obtain finite-dimensional port-Hamiltonian state space models.\nBy the degrees of freedom in the power-preserving maps, the resulting family of\nstructure-preserving schemes allows for trade-offs between centered\napproximations and upwinding. We illustrate the method on the example of\nWhitney finite elements on a 2D simplicial triangulation and compare the\neigenvalue approximation in 1D with a related approach.\n",
"title": "Weak Form of Stokes-Dirac Structures and Geometric Discretization of Port-Hamiltonian Systems"
} | null | null | [
"Computer Science"
]
| null | true | null | 208 | null | Validated | null | null |
null | {
"abstract": " The regularity of earthquakes, their destructive power, and the nuisance of\nground vibration in urban environments, all motivate designs of defence\nstructures to lessen the impact of seismic and ground vibration waves on\nbuildings. Low frequency waves, in the range $1$ to $10$ Hz for earthquakes and\nup to a few tens of Hz for vibrations generated by human activities, cause a\nlarge amount of damage, or inconvenience, depending on the geological\nconditions they can travel considerable distances and may match the resonant\nfundamental frequency of buildings. The ultimate aim of any seismic\nmetamaterial, or any other seismic shield, is to protect over this entire range\nof frequencies, the long wavelengths involved, and low frequency, have meant\nthis has been unachievable to date.\nElastic flexural waves, applicable in the mechanical vibrations of thin\nelastic plates, can be designed to have a broad zero-frequency stop-band using\na periodic array of very small clamped circles. Inspired by this experimental\nand theoretical observation, all be it in a situation far removed from seismic\nwaves, we demonstrate that it is possible to achieve elastic surface (Rayleigh)\nand body (pressure P and shear S) wave reflectors at very large wavelengths in\nstructured soils modelled as a fully elastic layer periodically clamped to\nbedrock.\nWe identify zero frequency stop-bands that only exist in the limit of columns\nof concrete clamped at their base to the bedrock. In a realistic configuration\nof a sedimentary basin 15 meters deep we observe a zero frequency stop-band\ncovering a broad frequency range of $0$ to $30$ Hz.\n",
"title": "Clamped seismic metamaterials: Ultra-low broad frequency stop-bands"
} | null | null | null | null | true | null | 209 | null | Default | null | null |
null | {
"abstract": " In this paper, we prove some difference analogue of second main theorems of\nmeromorphic mapping from Cm into an algebraic variety V intersecting a finite\nset of fixed hypersurfaces in subgeneral position. As an application, we prove\na result on algebraically degenerate of holomorphic curves intersecting\nhypersurfaces and difference analogue of Picard's theorem on holomorphic\ncurves. Furthermore, we obtain a second main theorem of meromorphic mappings\nintersecting hypersurfaces in N-subgeneral position for Veronese embedding in\nPn(C) and a uniqueness theorem sharing hypersurfaces.\n",
"title": "Difference analogue of second main theorems for meromorphic mapping into algebraic variety"
} | null | null | [
"Mathematics"
]
| null | true | null | 210 | null | Validated | null | null |
null | {
"abstract": " Large-scale datasets have played a significant role in progress of neural\nnetwork and deep learning areas. YouTube-8M is such a benchmark dataset for\ngeneral multi-label video classification. It was created from over 7 million\nYouTube videos (450,000 hours of video) and includes video labels from a\nvocabulary of 4716 classes (3.4 labels/video on average). It also comes with\npre-extracted audio & visual features from every second of video (3.2 billion\nfeature vectors in total). Google cloud recently released the datasets and\norganized 'Google Cloud & YouTube-8M Video Understanding Challenge' on Kaggle.\nCompetitors are challenged to develop classification algorithms that assign\nvideo-level labels using the new and improved Youtube-8M V2 dataset. Inspired\nby the competition, we started exploration of audio understanding and\nclassification using deep learning algorithms and ensemble methods. We built\nseveral baseline predictions according to the benchmark paper and public github\ntensorflow code. Furthermore, we improved global prediction accuracy (GAP) from\nbase level 77% to 80.7% through approaches of ensemble.\n",
"title": "An Effective Way to Improve YouTube-8M Classification Accuracy in Google Cloud Platform"
} | null | null | null | null | true | null | 211 | null | Default | null | null |
null | {
"abstract": " Observational data collected during experiments, such as the planned Fire and\nSmoke Model Evaluation Experiment (FASMEE), are critical for progressing and\ntransitioning coupled fire-atmosphere models like WRF-SFIRE and WRF-SFIRE-CHEM\ninto operational use. Historical meteorological data, representing typical\nweather conditions for the anticipated burn locations and times, have been\nprocessed to initialize and run a set of simulations representing the planned\nexperimental burns. Based on an analysis of these numerical simulations, this\npaper provides recommendations on the experimental setup that include the\nignition procedures, size and duration of the burns, and optimal sensor\nplacement. New techniques are developed to initialize coupled fire-atmosphere\nsimulations with weather conditions typical of the planned burn locations and\ntime of the year. Analysis of variation and sensitivity analysis of simulation\ndesign to model parameters by repeated Latin Hypercube Sampling are used to\nassess the locations of the sensors. The simulations provide the locations of\nthe measurements that maximize the expected variation of the sensor outputs\nwith the model parameters.\n",
"title": "Experimental Design of a Prescribed Burn Instrumentation"
} | null | null | null | null | true | null | 212 | null | Default | null | null |
null | {
"abstract": " For a knot $K$ in a homology $3$-sphere $\\Sigma$, let $M$ be the result of\n$2/q$-surgery on $K$, and let $X$ be the universal abelian covering of $M$. Our\nfirst theorem is that if the first homology of $X$ is finite cyclic and $M$ is\na Seifert fibered space with $N\\ge 3$ singular fibers, then $N\\ge 4$ if and\nonly if the first homology of the universal abelian covering of $X$ is\ninfinite. Our second theorem is that under an appropriate assumption on the\nAlexander polynomial of $K$, if $M$ is a Seifert fibered space, then $q=\\pm 1$\n(i.e.\\ integral surgery).\n",
"title": "Seifert surgery on knots via Reidemeister torsion and Casson-Walker-Lescop invariant III"
} | null | null | null | null | true | null | 213 | null | Default | null | null |
null | {
"abstract": " Sparse feature selection is necessary when we fit statistical models, we have\naccess to a large group of features, don't know which are relevant, but assume\nthat most are not. Alternatively, when the number of features is larger than\nthe available data the model becomes over parametrized and the sparse feature\nselection task involves selecting the most informative variables for the model.\nWhen the model is a simple location model and the number of relevant features\ndoes not grow with the total number of features, sparse feature selection\ncorresponds to sparse mean estimation. We deal with a simplified mean\nestimation problem consisting of an additive model with gaussian noise and mean\nthat is in a restricted, finite hypothesis space. This restriction simplifies\nthe mean estimation problem into a selection problem of combinatorial nature.\nAlthough the hypothesis space is finite, its size is exponential in the\ndimension of the mean. In limited data settings and when the size of the\nhypothesis space depends on the amount of data or on the dimension of the data,\nchoosing an approximation set of hypotheses is a desirable approach. Choosing a\nset of hypotheses instead of a single one implies replacing the bias-variance\ntrade off with a resolution-stability trade off. Generalization capacity\nprovides a resolution selection criterion based on allowing the learning\nalgorithm to communicate the largest amount of information in the data to the\nlearner without error. In this work the theory of approximation set coding and\ngeneralization capacity is explored in order to understand this approach. We\nthen apply the generalization capacity criterion to the simplified sparse mean\nestimation problem and detail an importance sampling algorithm which at once\nsolves the difficulty posed by large hypothesis spaces and the slow convergence\nof uniform sampling algorithms.\n",
"title": "Sparse mean localization by information theory"
} | null | null | null | null | true | null | 214 | null | Default | null | null |
null | {
"abstract": " In this letter, we consider the joint power and admission control (JPAC)\nproblem by assuming that only the channel distribution information (CDI) is\navailable. Under this assumption, we formulate a new chance (probabilistic)\nconstrained JPAC problem, where the signal to interference plus noise ratio\n(SINR) outage probability of the supported links is enforced to be not greater\nthan a prespecified tolerance. To efficiently deal with the chance SINR\nconstraint, we employ the sample approximation method to convert them into\nfinitely many linear constraints. Then, we propose a convex approximation based\ndeflation algorithm for solving the sample approximation JPAC problem. Compared\nto the existing works, this letter proposes a novel two-timescale JPAC\napproach, where admission control is performed by the proposed deflation\nalgorithm based on the CDI in a large timescale and transmission power is\nadapted instantly with fast fadings in a small timescale. The effectiveness of\nthe proposed algorithm is illustrated by simulations.\n",
"title": "Joint Power and Admission Control based on Channel Distribution Information: A Novel Two-Timescale Approach"
} | null | null | null | null | true | null | 215 | null | Default | null | null |
null | {
"abstract": " A ROSAT survey of the Alpha Per open cluster in 1993 detected its brightest\nstar, mid-F supergiant Alpha Persei: the X-ray luminosity and spectral hardness\nwere similar to coronally active late-type dwarf members. Later, in 2010, a\nHubble Cosmic Origins Spectrograph SNAPshot of Alpha Persei found\nfar-ultraviolet coronal proxy SiIV unexpectedly weak. This, and a suspicious\noffset of the ROSAT source, suggested that a late-type companion might be\nresponsible for the X-rays. Recently, a multi-faceted program tested that\npremise. Groundbased optical coronography, and near-UV imaging with HST Wide\nField Camera 3, searched for any close-in faint candidate coronal objects, but\nwithout success. Then, a Chandra pointing found the X-ray source single and\ncoincident with the bright star. Significantly, the SiIV emissions of Alpha\nPersei, in a deeper FUV spectrum collected by HST COS as part of the joint\nprogram, aligned well with chromospheric atomic oxygen (which must be intrinsic\nto the luminous star), within the context of cooler late-F and early-G\nsupergiants, including Cepheid variables. This pointed to the X-rays as the\nfundamental anomaly. The over-luminous X-rays still support the case for a\nhyperactive dwarf secondary, albeit now spatially unresolved. However, an\nalternative is that Alpha Persei represents a novel class of coronal source.\nResolving the first possibility now has become more difficult, because the easy\nsolution -- a well separated companion -- has been eliminated. Testing the\nother possibility will require a broader high-energy census of the early-F\nsupergiants.\n",
"title": "A Closer Look at the Alpha Persei Coronal Conundrum"
} | null | null | null | null | true | null | 216 | null | Default | null | null |
null | {
"abstract": " Realistic music generation is a challenging task. When building generative\nmodels of music that are learnt from data, typically high-level representations\nsuch as scores or MIDI are used that abstract away the idiosyncrasies of a\nparticular performance. But these nuances are very important for our perception\nof musicality and realism, so in this work we embark on modelling music in the\nraw audio domain. It has been shown that autoregressive models excel at\ngenerating raw audio waveforms of speech, but when applied to music, we find\nthem biased towards capturing local signal structure at the expense of\nmodelling long-range correlations. This is problematic because music exhibits\nstructure at many different timescales. In this work, we explore autoregressive\ndiscrete autoencoders (ADAs) as a means to enable autoregressive models to\ncapture long-range correlations in waveforms. We find that they allow us to\nunconditionally generate piano music directly in the raw audio domain, which\nshows stylistic consistency across tens of seconds.\n",
"title": "The challenge of realistic music generation: modelling raw audio at scale"
} | null | null | [
"Statistics"
]
| null | true | null | 217 | null | Validated | null | null |
null | {
"abstract": " Young asteroid families are unique sources of information about fragmentation\nphysics and the structure of their parent bodies, since their physical\nproperties have not changed much since their birth. Families have different\nproperties such as age, size, taxonomy, collision severity and others, and\nunderstanding the effect of those properties on our observations of the\nsize-frequency distribution (SFD) of family fragments can give us important\ninsights into the hypervelocity collision processes at scales we cannot achieve\nin our laboratories. Here we take as an example the very young Datura family,\nwith a small 8-km parent body, and compare its size distribution to other\nfamilies, with both large and small parent bodies, and created by both\ncatastrophic and cratering formation events. We conclude that most likely\nexplanation for the shallower size distribution compared to larger families is\na more pronounced observational bias because of its small size. Its size\ndistribution is perfectly normal when its parent body size is taken into\naccount. We also discuss some other possibilities. In addition, we study\nanother common feature: an offset or \"bump\" in the distribution occurring for a\nfew of the larger elements. We hypothesize that it can be explained by a newly\ndescribed regime of cratering, \"spall cratering\", which controls the majority\nof impact craters on the surface of small asteroids like Datura.\n",
"title": "Interpretations of family size distributions: The Datura example"
} | null | null | [
"Physics"
]
| null | true | null | 218 | null | Validated | null | null |
null | {
"abstract": " We provide a graph formula which describes an arbitrary monomial in {\\omega}\nclasses (also referred to as stable {\\psi} classes) in terms of a simple family\nof dual graphs (pinwheel graphs) with edges decorated by rational functions in\n{\\psi} classes. We deduce some numerical consequences and in particular a\ncombinatorial formula expressing top intersections of \\k{appa} classes on Mg in\nterms of top intersections of {\\psi} classes.\n",
"title": "Intersections of $ω$ classes in $\\overline{\\mathcal{M}}_{g,n}$"
} | null | null | null | null | true | null | 219 | null | Default | null | null |
null | {
"abstract": " Tomography has made a radical impact on diverse fields ranging from the study\nof 3D atomic arrangements in matter to the study of human health in medicine.\nDespite its very diverse applications, the core of tomography remains the same,\nthat is, a mathematical method must be implemented to reconstruct the 3D\nstructure of an object from a number of 2D projections. In many scientific\napplications, however, the number of projections that can be measured is\nlimited due to geometric constraints, tolerable radiation dose and/or\nacquisition speed. Thus it becomes an important problem to obtain the\nbest-possible reconstruction from a limited number of projections. Here, we\npresent the mathematical implementation of a tomographic algorithm, termed\nGENeralized Fourier Iterative REconstruction (GENFIRE). By iterating between\nreal and reciprocal space, GENFIRE searches for a global solution that is\nconcurrently consistent with the measured data and general physical\nconstraints. The algorithm requires minimal human intervention and also\nincorporates angular refinement to reduce the tilt angle error. We demonstrate\nthat GENFIRE can produce superior results relative to several other popular\ntomographic reconstruction techniques by numerical simulations, and by\nexperimentally by reconstructing the 3D structure of a porous material and a\nfrozen-hydrated marine cyanobacterium. Equipped with a graphical user\ninterface, GENFIRE is freely available from our website and is expected to find\nbroad applications across different disciplines.\n",
"title": "GENFIRE: A generalized Fourier iterative reconstruction algorithm for high-resolution 3D imaging"
} | null | null | null | null | true | null | 220 | null | Default | null | null |
null | {
"abstract": " Generative Adversarial Networks (GANs) excel at creating realistic images\nwith complex models for which maximum likelihood is infeasible. However, the\nconvergence of GAN training has still not been proved. We propose a two\ntime-scale update rule (TTUR) for training GANs with stochastic gradient\ndescent on arbitrary GAN loss functions. TTUR has an individual learning rate\nfor both the discriminator and the generator. Using the theory of stochastic\napproximation, we prove that the TTUR converges under mild assumptions to a\nstationary local Nash equilibrium. The convergence carries over to the popular\nAdam optimization, for which we prove that it follows the dynamics of a heavy\nball with friction and thus prefers flat minima in the objective landscape. For\nthe evaluation of the performance of GANs at image generation, we introduce the\n\"Fréchet Inception Distance\" (FID) which captures the similarity of generated\nimages to real ones better than the Inception Score. In experiments, TTUR\nimproves learning for DCGANs and Improved Wasserstein GANs (WGAN-GP)\noutperforming conventional GAN training on CelebA, CIFAR-10, SVHN, LSUN\nBedrooms, and the One Billion Word Benchmark.\n",
"title": "GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 221 | null | Validated | null | null |
null | {
"abstract": " Based on optical high-resolution spectra obtained with CFHT/ESPaDOnS, we\npresent new measurements of activity and magnetic field proxies of 442 low-mass\nK5-M7 dwarfs. The objects were analysed as potential targets to search for\nplanetary-mass companions with the new spectropolarimeter and high-precision\nvelocimeter, SPIRou. We have analysed their high-resolution spectra in an\nhomogeneous way: circular polarisation, chromospheric features, and Zeeman\nbroadening of the FeH infrared line. The complex relationship between these\nactivity indicators is analysed: while no strong connection is found between\nthe large-scale and small-scale magnetic fields, the latter relates with the\nnon-thermal flux originating in the chromosphere.\nWe then examine the relationship between various activity diagnostics and the\noptical radial-velocity jitter available in the literature, especially for\nplanet host stars. We use this to derive for all stars an activity merit\nfunction (higher for quieter stars) with the goal of identifying the most\nfavorable stars where the radial-velocity jitter is low enough for planet\nsearches. We find that the main contributors to the RV jitter are the\nlarge-scale magnetic field and the chromospheric non-thermal emission.\nIn addition, three stars (GJ 1289, GJ 793, and GJ 251) have been followed\nalong their rotation using the spectropolarimetric mode, and we derive their\nmagnetic topology. These very slow rotators are good representatives of future\nSPIRou targets. They are compared to other stars where the magnetic topology is\nalso known. The poloidal component of the magnetic field is predominent in all\nthree stars.\n",
"title": "SPIRou Input Catalog: Activity, Rotation and Magnetic Field of Cool Dwarfs"
} | null | null | null | null | true | null | 222 | null | Default | null | null |
null | {
"abstract": " Inferring directional connectivity from point process data of multiple\nelements is desired in various scientific fields such as neuroscience,\ngeography, economics, etc. Here, we propose an inference procedure for this\ngoal based on the kinetic Ising model. The procedure is composed of two steps:\n(1) determination of the time-bin size for transforming the point-process data\nto discrete time binary data and (2) screening of relevant couplings from the\nestimated networks. For these, we develop simple methods based on information\ntheory and computational statistics. Applications to data from artificial and\n\\textit{in vitro} neuronal networks show that the proposed procedure performs\nfairly well when identifying relevant couplings, including the discrimination\nof their signs, with low computational cost. These results highlight the\npotential utility of the kinetic Ising model to analyze real interacting\nsystems with event occurrences.\n",
"title": "Objective Procedure for Reconstructing Couplings in Complex Systems"
} | null | null | null | null | true | null | 223 | null | Default | null | null |
null | {
"abstract": " Support vector machines (SVMs) are an important tool in modern data analysis.\nTraditionally, support vector machines have been fitted via quadratic\nprogramming, either using purpose-built or off-the-shelf algorithms. We present\nan alternative approach to SVM fitting via the majorization--minimization (MM)\nparadigm. Algorithms that are derived via MM algorithm constructions can be\nshown to monotonically decrease their objectives at each iteration, as well as\nbe globally convergent to stationary points. We demonstrate the construction of\niteratively-reweighted least-squares (IRLS) algorithms, via the MM paradigm,\nfor SVM risk minimization problems involving the hinge, least-square,\nsquared-hinge, and logistic losses, and 1-norm, 2-norm, and elastic net\npenalizations. Successful implementations of our algorithms are presented via\nsome numerical examples.\n",
"title": "Iteratively-Reweighted Least-Squares Fitting of Support Vector Machines: A Majorization--Minimization Algorithm Approach"
} | null | null | null | null | true | null | 224 | null | Default | null | null |
null | {
"abstract": " Estimating vaccination uptake is an integral part of ensuring public health.\nIt was recently shown that vaccination uptake can be estimated automatically\nfrom web data, instead of slowly collected clinical records or population\nsurveys. All prior work in this area assumes that features of vaccination\nuptake collected from the web are temporally regular. We present the first ever\nmethod to remove this assumption from vaccination uptake estimation: our method\ndynamically adapts to temporal fluctuations in time series web data used to\nestimate vaccination uptake. We show our method to outperform the state of the\nart compared to competitive baselines that use not only web data but also\ncurated clinical data. This performance improvement is more pronounced for\nvaccines whose uptake has been irregular due to negative media attention (HPV-1\nand HPV-2), problems in vaccine supply (DiTeKiPol), and targeted at children of\n12 years old (whose vaccination is more irregular compared to younger\nchildren).\n",
"title": "Time-Series Adaptive Estimation of Vaccination Uptake Using Web Search Queries"
} | null | null | null | null | true | null | 225 | null | Default | null | null |
null | {
"abstract": " We show that every invertible strong mixing transformation on a Lebesgue\nspace has strictly over-recurrent sets. Also, we give an explicit procedure for\nconstructing strong mixing transformations with no under-recurrent sets. This\nanswers both parts of a question of V. Bergelson.\nWe define $\\epsilon$-over-recurrence and show that given $\\epsilon > 0$, any\nergodic measure preserving invertible transformation (including discrete\nspectrum) has $\\epsilon$-over-recurrent sets of arbitrarily small measure.\nDiscrete spectrum transformations and rotations do not have over-recurrent\nsets, but we construct a weak mixing rigid transformation with strictly\nover-recurrent sets.\n",
"title": "Over Recurrence for Mixing Transformations"
} | null | null | null | null | true | null | 226 | null | Default | null | null |
null | {
"abstract": " Development of a mesoscale neural circuitry map of the common marmoset is an\nessential task due to the ideal characteristics of the marmoset as a model\norganism for neuroscience research. To facilitate this development there is a\nneed for new computational tools to cross-register multi-modal data sets\ncontaining MRI volumes as well as multiple histological series, and to register\nthe combined data set to a common reference atlas. We present a fully automatic\npipeline for same-subject-MRI guided reconstruction of image volumes from a\nseries of histological sections of different modalities, followed by\ndiffeomorphic mapping to a reference atlas. We show registration results for\nNissl, myelin, CTB, and fluorescent tracer images using a same-subject ex-vivo\nMRI as our reference and show that our method achieves accurate registration\nand eliminates artifactual warping that may be result from the absence of a\nreference MRI data set. Examination of the determinant of the local metric\ntensor of the diffeomorphic mapping between each subject's ex-vivo MRI and\nresultant Nissl reconstruction allows an unprecedented local quantification of\ngeometrical distortions resulting from the histological processing, showing a\nslight shrinkage, a median linear scale change of ~-1% in going from the\nex-vivo MRI to the tape-transfer generated histological image data.\n",
"title": "Joint Atlas-Mapping of Multiple Histological Series combined with Multimodal MRI of Whole Marmoset Brains"
} | null | null | null | null | true | null | 227 | null | Default | null | null |
null | {
"abstract": " The system that we study in this paper contains a set of users that observe a\ndiscrete memoryless multiple source and communicate via noise-free channels\nwith the aim of attaining omniscience, the state that all users recover the\nentire multiple source. We adopt the concept of successive omniscience (SO),\ni.e., letting the local omniscience in some user subset be attained before the\nglobal omniscience in the entire system, and consider the problem of how to\nefficiently attain omniscience in a successive manner. Based on the existing\nresults on SO, we propose a CompSetSO algorithm for determining a complimentary\nset, a user subset in which the local omniscience can be attained first without\nincreasing the sum-rate, the total number of communications, for the global\nomniscience. We also derive a sufficient condition for a user subset to be\ncomplimentary so that running the CompSetSO algorithm only requires a lower\nbound, instead of the exact value, of the minimum sum-rate for attaining global\nomniscience. The CompSetSO algorithm returns a complimentary user subset in\npolynomial time. We show by example how to recursively apply the CompSetSO\nalgorithm so that the global omniscience can be attained by multi-stages of SO.\n",
"title": "A Practical Approach for Successive Omniscience"
} | null | null | null | null | true | null | 228 | null | Default | null | null |
null | {
"abstract": " In this paper we present a novel methodology for identifying scholars with a\nTwitter account. By combining bibliometric data from Web of Science and Twitter\nusers identified by Altmetric.com we have obtained the largest set of\nindividual scholars matched with Twitter users made so far. Our methodology\nconsists of a combination of matching algorithms, considering different\nlinguistic elements of both author names and Twitter names; followed by a\nrule-based scoring system that weights the common occurrence of several\nelements related with the names, individual elements and activities of both\nTwitter users and scholars matched. Our results indicate that about 2% of the\noverall population of scholars in the Web of Science is active on Twitter. By\ndomain we find a strong presence of researchers from the Social Sciences and\nthe Humanities. Natural Sciences is the domain with the lowest level of\nscholars on Twitter. Researchers on Twitter also tend to be younger than those\nthat are not on Twitter. As this is a bibliometric-based approach, it is\nimportant to highlight the reliance of the method on the number of publications\nproduced and tweeted by the scholars, thus the share of scholars on Twitter\nranges between 1% and 5% depending on their level of productivity. Further\nresearch is suggested in order to improve and expand the methodology.\n",
"title": "Scholars on Twitter: who and how many are they?"
} | null | null | null | null | true | null | 229 | null | Default | null | null |
null | {
"abstract": " As a measure for the centrality of a point in a set of multivariate data,\nstatistical depth functions play important roles in multivariate analysis,\nbecause one may conveniently construct descriptive as well as inferential\nprocedures relying on them. Many depth notions have been proposed in the\nliterature to fit to different applications. However, most of them are mainly\ndeveloped for the location setting. In this paper, we discuss the possibility\nof extending some of them into the regression setting. A general concept of\nregression depth function is also provided.\n",
"title": "General notions of regression depth function"
} | null | null | [
"Statistics"
]
| null | true | null | 230 | null | Validated | null | null |
null | {
"abstract": " When a two-dimensional electron gas is exposed to a perpendicular magnetic\nfield and an in-plane electric field, its conductance becomes quantized in the\ntransverse in-plane direction: this is known as the quantum Hall (QH) effect.\nThis effect is a result of the nontrivial topology of the system's electronic\nband structure, where an integer topological invariant known as the first Chern\nnumber leads to the quantization of the Hall conductance. Interestingly, it was\nshown that the QH effect can be generalized mathematically to four spatial\ndimensions (4D), but this effect has never been realized for the obvious reason\nthat experimental systems are bound to three spatial dimensions. In this work,\nwe harness the high tunability and control offered by photonic waveguide arrays\nto experimentally realize a dynamically-generated 4D QH system using a 2D array\nof coupled optical waveguides. The inter-waveguide separation is constructed\nsuch that the propagation of light along the device samples over\nhigher-dimensional momenta in the directions orthogonal to the two physical\ndimensions, thus realizing a 2D topological pump. As a result, the device's\nband structure is associated with 4D topological invariants known as second\nChern numbers which support a quantized bulk Hall response with a 4D symmetry.\nIn a finite-sized system, the 4D topological bulk response is carried by\nlocalized edges modes that cross the sample as a function of of the modulated\nauxiliary momenta. We directly observe this crossing through photon pumping\nfrom edge-to-edge and corner-to-corner of our system. These are equivalent to\nthe pumping of charge across a 4D system from one 3D hypersurface to the\nopposite one and from one 2D hyperedge to another, and serve as first\nexperimental realization of higher-dimensional topological physics.\n",
"title": "Photonic topological pumping through the edges of a dynamical four-dimensional quantum Hall system"
} | null | null | null | null | true | null | 231 | null | Default | null | null |
null | {
"abstract": " In many applications involving large dataset or online updating, stochastic\ngradient descent (SGD) provides a scalable way to compute parameter estimates\nand has gained increasing popularity due to its numerical convenience and\nmemory efficiency. While the asymptotic properties of SGD-based estimators have\nbeen established decades ago, statistical inference such as interval estimation\nremains much unexplored. The traditional resampling method such as the\nbootstrap is not computationally feasible since it requires to repeatedly draw\nindependent samples from the entire dataset. The plug-in method is not\napplicable when there are no explicit formulas for the covariance matrix of the\nestimator. In this paper, we propose a scalable inferential procedure for\nstochastic gradient descent, which, upon the arrival of each observation,\nupdates the SGD estimate as well as a large number of randomly perturbed SGD\nestimates. The proposed method is easy to implement in practice. We establish\nits theoretical properties for a general class of models that includes\ngeneralized linear models and quantile regression models as special cases. The\nfinite-sample performance and numerical utility is evaluated by simulation\nstudies and two real data applications.\n",
"title": "On Scalable Inference with Stochastic Gradient Descent"
} | null | null | null | null | true | null | 232 | null | Default | null | null |
null | {
"abstract": " In the work of Peng et al. in 2012, a new measure was proposed for fault\ndiagnosis of systems: namely, g-good-neighbor conditional diagnosability, which\nrequires that any fault-free vertex has at least g fault-free neighbors in the\nsystem. In this paper, we establish the g-good-neighbor conditional\ndiagnosability of locally twisted cubes under the PMC model and the MM^* model.\n",
"title": "The g-Good-Neighbor Conditional Diagnosability of Locally Twisted Cubes"
} | null | null | null | null | true | null | 233 | null | Default | null | null |
null | {
"abstract": " Categories of polymorphic lenses in computer science, and of open games in\ncompositional game theory, have a curious structure that is reminiscent of\ncompact closed categories, but differs in some crucial ways. Specifically they\nhave a family of morphisms that behave like the counits of a compact closed\ncategory, but have no corresponding units; and they have a `partial' duality\nthat behaves like transposition in a compact closed category when it is\ndefined. We axiomatise this structure, which we refer to as a `teleological\ncategory'. We precisely define a diagrammatic language suitable for these\ncategories, and prove a coherence theorem for them. This underpins the use of\ndiagrammatic reasoning in compositional game theory, which has previously been\nused only informally.\n",
"title": "Coherence for lenses and open games"
} | null | null | null | null | true | null | 234 | null | Default | null | null |
null | {
"abstract": " We present an efficient algorithm to compute Euler characteristic curves of\ngray scale images of arbitrary dimension. In various applications the Euler\ncharacteristic curve is used as a descriptor of an image.\nOur algorithm is the first streaming algorithm for Euler characteristic\ncurves. The usage of streaming removes the necessity to store the entire image\nin RAM. Experiments show that our implementation handles terabyte scale images\non commodity hardware. Due to lock-free parallelism, it scales well with the\nnumber of processor cores. Our software---CHUNKYEuler---is available as open\nsource on Bitbucket.\nAdditionally, we put the concept of the Euler characteristic curve in the\nwider context of computational topology. In particular, we explain the\nconnection with persistence diagrams.\n",
"title": "Streaming Algorithm for Euler Characteristic Curves of Multidimensional Images"
} | null | null | null | null | true | null | 235 | null | Default | null | null |
null | {
"abstract": " We give a new example of an automata group of intermediate growth. It is\ngenerated by an automaton with 4 states on an alphabet with 8 letters. This\nautomata group has exponential activity and its limit space is not simply\nconnected.\n",
"title": "An automata group of intermediate growth and exponential activity"
} | null | null | null | null | true | null | 236 | null | Default | null | null |
null | {
"abstract": " The crossover from Bardeen-Cooper-Schrieffer (BCS) superconductivity to\nBose-Einstein condensation (BEC) is difficult to realize in quantum materials\nbecause, unlike in ultracold atoms, one cannot tune the pairing interaction. We\nrealize the BCS-BEC crossover in a nearly compensated semimetal\nFe$_{1+y}$Se$_x$Te$_{1-x}$ by tuning the Fermi energy, $\\epsilon_F$, via\nchemical doping, which permits us to systematically change $\\Delta /\n\\epsilon_F$ from 0.16 to 0.5 were $\\Delta$ is the superconducting (SC) gap. We\nuse angle-resolved photoemission spectroscopy to measure the Fermi energy, the\nSC gap and characteristic changes in the SC state electronic dispersion as the\nsystem evolves from a BCS to a BEC regime. Our results raise important\nquestions about the crossover in multiband superconductors which go beyond\nthose addressed in the context of cold atoms.\n",
"title": "Tuning across the BCS-BEC crossover in the multiband superconductor Fe$_{1+y}$Se$_x$Te$_{1-x}$ : An angle-resolved photoemission study"
} | null | null | null | null | true | null | 237 | null | Default | null | null |
null | {
"abstract": " Model compression is essential for serving large deep neural nets on devices\nwith limited resources or applications that require real-time responses. As a\ncase study, a state-of-the-art neural language model usually consists of one or\nmore recurrent layers sandwiched between an embedding layer used for\nrepresenting input tokens and a softmax layer for generating output tokens. For\nproblems with a very large vocabulary size, the embedding and the softmax\nmatrices can account for more than half of the model size. For instance, the\nbigLSTM model achieves state-of- the-art performance on the One-Billion-Word\n(OBW) dataset with around 800k vocabulary, and its word embedding and softmax\nmatrices use more than 6GBytes space, and are responsible for over 90% of the\nmodel parameters. In this paper, we propose GroupReduce, a novel compression\nmethod for neural language models, based on vocabulary-partition (block) based\nlow-rank matrix approximation and the inherent frequency distribution of tokens\n(the power-law distribution of words). The experimental results show our method\ncan significantly outperform traditional compression methods such as low-rank\napproximation and pruning. On the OBW dataset, our method achieved 6.6 times\ncompression rate for the embedding and softmax matrices, and when combined with\nquantization, our method can achieve 26 times compression rate, which\ntranslates to a factor of 12.8 times compression for the entire model with very\nlittle degradation in perplexity.\n",
"title": "GroupReduce: Block-Wise Low-Rank Approximation for Neural Language Model Shrinking"
} | null | null | null | null | true | null | 238 | null | Default | null | null |
null | {
"abstract": " 200 nm thick SiO2 layers grown on Si substrates and Ge ions of 150 keV energy\nwere implanted into SiO2 matrix with Different fluences. The implanted samples\nwere annealed at 950 C for 30 minutes in Ar ambience. Topographical studies of\nimplanted as well as annealed samples were captured by the atomic force\nmicroscopy (AFM). Two dimension (2D) multifractal detrended fluctuation\nanalysis (MFDFA) based on the partition function approach has been used to\nstudy the surfaces of ion implanted and annealed samples. The partition\nfunction is used to calculate generalized Hurst exponent with the segment size.\nMoreover, it is seen that the generalized Hurst exponents vary nonlinearly with\nthe moment, thereby exhibiting the multifractal nature. The multifractality of\nsurface is pronounced after annealing for the surface implanted with fluence\n7.5X1016 ions/cm^2.\n",
"title": "Morphological characterization of Ge ion implanted SiO2 matrix using multifractal technique"
} | null | null | null | null | true | null | 239 | null | Default | null | null |
null | {
"abstract": " Corrosion of Indian RAFMS (reduced activation ferritic martensitic steel)\nmaterial with liquid metal, Lead Lithium ( Pb-Li) has been studied under static\ncondition, maintaining Pb-Li at 550 C for different time durations, 2500, 5000\nand 9000 hours. Corrosion rate was calculated from weight loss measurements.\nMicrostructure analysis was carried out using SEM and chemical composition by\nSEM-EDX measurements. Micro Vickers hardness and tensile testing were also\ncarried out. Chromium was found leaching from the near surface regions and\nsurface hardness was found to decrease in all the three cases. Grain boundaries\nwere affected. Some grains got detached from the surface giving rise to pebble\nlike structures in the surface micrographs. There was no significant reduction\nin the tensile strength, after exposure to liquid metal. This paper discusses\nthe experimental details and the results obtained.\n",
"title": "Preliminary corrosion studies of IN-RAFM steel with stagnant Lead Lithium at 550 C"
} | null | null | null | null | true | null | 240 | null | Default | null | null |
null | {
"abstract": " This paper presents an overview and discussion of magnetocapillary\nself-assemblies. New results are presented, in particular concerning the\npossible development of future applications. These self-organizing structures\npossess the notable ability to move along an interface when powered by an\noscillatory, uniform magnetic field. The system is constructed as follows. Soft\nmagnetic particles are placed on a liquid interface, and submitted to a\nmagnetic induction field. An attractive force due to the curvature of the\ninterface around the particles competes with an interaction between magnetic\ndipoles. Ordered structures can spontaneously emerge from these conditions.\nFurthermore, time-dependent magnetic fields can produce a wide range of dynamic\nbehaviours, including non-time-reversible deformation sequences that produce\ntranslational motion at low Reynolds number. In other words, due to a\nspontaneous breaking of time-reversal symmetry, the assembly can turn into a\nsurface microswimmer. Trajectories have been shown to be precisely\ncontrollable. As a consequence, this system offers a way to produce microrobots\nable to perform different tasks. This is illustrated in this paper by the\ncapture, transport and release of a floating cargo, and the controlled mixing\nof fluids at low Reynolds number.\n",
"title": "Magnetocapillary self-assemblies: locomotion and micromanipulation along a liquid interface"
} | null | null | null | null | true | null | 241 | null | Default | null | null |
null | {
"abstract": " For the problem of nonparametric detection of signal in Gaussian white noise\nwe point out strong asymptotically minimax tests. The sets of alternatives are\na ball in Besov space $B^r_{2\\infty}$ with \"small\" balls in $L_2$ removed.\n",
"title": "On asymptotically minimax nonparametric detection of signal in Gaussian white noise"
} | null | null | null | null | true | null | 242 | null | Default | null | null |
null | {
"abstract": " Metabolic flux balance analyses are a standard tool in analysing metabolic\nreaction rates compatible with measurements, steady-state and the metabolic\nreaction network stoichiometry. Flux analysis methods commonly place\nunrealistic assumptions on fluxes due to the convenience of formulating the\nproblem as a linear programming model, and most methods ignore the notable\nuncertainty in flux estimates. We introduce a novel paradigm of Bayesian\nmetabolic flux analysis that models the reactions of the whole genome-scale\ncellular system in probabilistic terms, and can infer the full flux vector\ndistribution of genome-scale metabolic systems based on exchange and\nintracellular (e.g. 13C) flux measurements, steady-state assumptions, and\ntarget function assumptions. The Bayesian model couples all fluxes jointly\ntogether in a simple truncated multivariate posterior distribution, which\nreveals informative flux couplings. Our model is a plug-in replacement to\nconventional metabolic balance methods, such as flux balance analysis (FBA).\nOur experiments indicate that we can characterise the genome-scale flux\ncovariances, reveal flux couplings, and determine more intracellular unobserved\nfluxes in C. acetobutylicum from 13C data than flux variability analysis. The\nCOBRA compatible software is available at github.com/markusheinonen/bamfa\n",
"title": "Bayesian Metabolic Flux Analysis reveals intracellular flux couplings"
} | null | null | null | null | true | null | 243 | null | Default | null | null |
null | {
"abstract": " We introduce a robust estimator of the location parameter for the\nchange-point in the mean based on the Wilcoxon statistic and establish its\nconsistency for $L_1$ near epoch dependent processes. It is shown that the\nconsistency rate depends on the magnitude of change. A simulation study is\nperformed to evaluate finite sample properties of the Wilcoxon-type estimator\nin standard cases, as well as under heavy-tailed distributions and disturbances\nby outliers, and to compare it with a CUSUM-type estimator. It shows that the\nWilcoxon-type estimator is equivalent to the CUSUM-type estimator in standard\ncases, but outperforms the CUSUM-type estimator in presence of heavy tails or\noutliers in the data.\n",
"title": "Robust Estimation of Change-Point Location"
} | null | null | null | null | true | null | 244 | null | Default | null | null |
null | {
"abstract": " In glass forming liquids close to the glass transition point, even a very\nslight increase in the macroscopic density results in a dramatic slowing down\nof the macroscopic relaxation. Concomitantly, the local density itself\nfluctuates in space. Therefore, one can imagine that even very small local\ndensity variations control the local glassy nature. Based on this perspective,\na model for describing growing length scale accompanying the vitrification is\nintroduced, in which we assume that in a subsystem whose density is above a\ncertain threshold value, $\\rho_{\\rm c}$, owing to steric constraints, particle\nrearrangements are highly suppressed for a sufficiently long time period\n($\\sim$ structural relaxation time). We regard such a subsystem as a glassy\ncluster. Then, based on the statistics of the subsystem-density, we predict\nthat with compression (increasing average density $\\rho$) at a fixed\ntemperature $T$ in supercooled states, the characteristic length of the\nclusters, $\\xi$, diverges as $\\xi\\sim(\\rho_{\\rm c}-\\rho)^{-2/d}$, where $d$ is\nthe spatial dimensionality. This $\\xi$ measures the average persistence length\nof the steric constraints in blocking the rearrangement motions and is\ndetermined by the subsystem density. Additionally, with decreasing $T$ at a\nfixed $\\rho$, the length scale diverges in the same manner as $\\xi\\sim(T-T_{\\rm\nc})^{-2/d}$, for which $\\rho$ is identical to $\\rho_{\\rm c}$ at $T=T_{\\rm c}$.\nThe exponent describing the diverging length scale is the same as the one\npredicted by some theoretical models and indeed has been observed in some\nsimulations and experiments. However, the basic mechanism for this divergence\nis different; that is, we do not invoke thermodynamic anomalies associated with\nthe thermodynamic phase transition as the origin of the growing length scale.\nWe further present arguements for the cooperative properties based on the\nclusters.\n",
"title": "Growing length scale accompanying the vitrification: A perspective based on non-singular density fluctuations"
} | null | null | null | null | true | null | 245 | null | Default | null | null |
null | {
"abstract": " We propose a new Pareto Local Search Algorithm for the many-objective\ncombinatorial optimization. Pareto Local Search proved to be a very effective\ntool in the case of the bi-objective combinatorial optimization and it was used\nin a number of the state-of-the-art algorithms for problems of this kind. On\nthe other hand, the standard Pareto Local Search algorithm becomes very\ninefficient for problems with more than two objectives. We build an effective\nMany-Objective Pareto Local Search algorithm using three new mechanisms: the\nefficient update of large Pareto archives with ND-Tree data structure, a new\nmechanism for the selection of the promising solutions for the neighborhood\nexploration, and a partial exploration of the neighborhoods. We apply the\nproposed algorithm to the instances of two different problems, i.e. the\ntraveling salesperson problem and the traveling salesperson problem with\nprofits with up to 5 objectives showing high effectiveness of the proposed\nalgorithm.\n",
"title": "Many-Objective Pareto Local Search"
} | null | null | null | null | true | null | 246 | null | Default | null | null |
null | {
"abstract": " We identify the components of bio-inspired artificial camouflage systems\nincluding actuation, sensing, and distributed computation. After summarizing\nrecent results in understanding the physiology and system-level performance of\na variety of biological systems, we describe computational algorithms that can\ngenerate similar patterns and have the potential for distributed\nimplementation. We find that the existing body of work predominately treats\ncomponent technology in an isolated manner that precludes a material-like\nimplementation that is scale-free and robust. We conclude with open research\nchallenges towards the realization of integrated camouflage solutions.\n",
"title": "From Natural to Artificial Camouflage: Components and Systems"
} | null | null | [
"Computer Science",
"Quantitative Biology"
]
| null | true | null | 247 | null | Validated | null | null |
null | {
"abstract": " In the present work we study Bayesian nonparametric inference for the\ncontinuous-time M/G/1 queueing system. In the focus of the study is the\nunobservable service time distribution. We assume that the only available data\nof the system are the marked departure process of customers with the marks\nbeing the queue lengths just after departure instants. These marks constitute\nan embedded Markov chain whose distribution may be parametrized by stochastic\nmatrices of a special delta form. We develop the theory in order to obtain\nintegral mixtures of Markov measures with respect to suitable prior\ndistributions. We have found a sufficient statistic with a distribution of a\nso-called S-structure sheding some new light on the inner statistical structure\nof the M/G/1 queue. Moreover, it allows to update suitable prior distributions\nto the posterior. Our inference methods are validated by large sample results\nas posterior consistency and posterior normality.\n",
"title": "Bayesian nonparametric inference for the M/G/1 queueing systems based on the marked departure process"
} | null | null | null | null | true | null | 248 | null | Default | null | null |
null | {
"abstract": " We will show that $(1-q)(1-q^2)\\dots (1-q^m)$ is a polynomial in $q$ with\ncoefficients from $\\{-1,0,1\\}$ iff $m=1,\\ 2,\\ 3,$ or $5$ and explore some\ninteresting consequences of this result. We find explicit formulas for the\n$q$-series coefficients of $(1-q^2)(1-q^3)(1-q^4)(1-q^5)\\dots$ and\n$(1-q^3)(1-q^4)(1-q^5)(1-q^6)\\dots$. In doing so, we extend certain\nobservations made by Sudler in 1964. We also discuss the classification of the\nproducts $(1-q)(1-q^2)\\dots (1-q^m)$ and some related series with respect to\ntheir absolute largest coefficients.\n",
"title": "On some polynomials and series of Bloch-Polya Type"
} | null | null | null | null | true | null | 249 | null | Default | null | null |
null | {
"abstract": " In this paper, we develop a position estimation system for Unmanned Aerial\nVehicles formed by hardware and software. It is based on low-cost devices: GPS,\ncommercial autopilot sensors and dense optical flow algorithm implemented in an\nonboard microcomputer. Comparative tests were conducted using our approach and\nthe conventional one, where only fusion of GPS and inertial sensors are used.\nExperiments were conducted using a quadrotor in two flying modes: hovering and\ntrajectory tracking in outdoor environments. Results demonstrate the\neffectiveness of the proposed approach in comparison with the conventional\napproaches presented in the vast majority of commercial drones.\n",
"title": "Improvement in the UAV position estimation with low-cost GPS, INS and vision-based system: Application to a quadrotor UAV"
} | null | null | null | null | true | null | 250 | null | Default | null | null |
null | {
"abstract": " We study the decomposition of a multivariate Hankel matrix H\\_$\\sigma$ as a\nsum of Hankel matrices of small rank in correlation with the decomposition of\nits symbol $\\sigma$ as a sum of polynomial-exponential series. We present a new\nalgorithm to compute the low rank decomposition of the Hankel operator and the\ndecomposition of its symbol exploiting the properties of the associated\nArtinian Gorenstein quotient algebra A\\_$\\sigma$. A basis of A\\_$\\sigma$ is\ncomputed from the Singular Value Decomposition of a sub-matrix of the Hankel\nmatrix H\\_$\\sigma$. The frequencies and the weights are deduced from the\ngeneralized eigenvectors of pencils of shifted sub-matrices of H $\\sigma$.\nExplicit formula for the weights in terms of the eigenvectors avoid us to solve\na Vandermonde system. This new method is a multivariate generalization of the\nso-called Pencil method for solving Prony-type decomposition problems. We\nanalyse its numerical behaviour in the presence of noisy input moments, and\ndescribe a rescaling technique which improves the numerical quality of the\nreconstruction for frequencies of high amplitudes. We also present a new Newton\niteration, which converges locally to the closest multivariate Hankel matrix of\nlow rank and show its impact for correcting errors on input moments.\n",
"title": "Structured low rank decomposition of multivariate Hankel matrices"
} | null | null | [
"Mathematics"
]
| null | true | null | 251 | null | Validated | null | null |
null | {
"abstract": " Linear time-periodic (LTP) dynamical systems frequently appear in the\nmodeling of phenomena related to fluid dynamics, electronic circuits, and\nstructural mechanics via linearization centered around known periodic orbits of\nnonlinear models. Such LTP systems can reach orders that make repeated\nsimulation or other necessary analysis prohibitive, motivating the need for\nmodel reduction.\nWe develop here an algorithmic framework for constructing reduced models that\nretains the linear time-periodic structure of the original LTP system. Our\napproach generalizes optimal approaches that have been established previously\nfor linear time-invariant (LTI) model reduction problems. We employ an\nextension of the usual H2 Hardy space defined for the LTI setting to\ntime-periodic systems and within this broader framework develop an a posteriori\nerror bound expressible in terms of related LTI systems. Optimization of this\nbound motivates our algorithm. We illustrate the success of our method on two\nnumerical examples.\n",
"title": "Linear time-periodic dynamical systems: An H2 analysis and a model reduction framework"
} | null | null | null | null | true | null | 252 | null | Default | null | null |
null | {
"abstract": " Broad efforts are underway to capture metadata about research software and\nretain it across services; notable in this regard is the CodeMeta project. What\nmetadata are important to have about (research) software? What metadata are\nuseful for searching for codes? What would you like to learn about astronomy\nsoftware? This BoF sought to gather information on metadata most desired by\nresearchers and users of astro software and others interested in registering,\nindexing, capturing, and doing research on this software. Information from this\nBoF could conceivably result in changes to the Astrophysics Source Code Library\n(ASCL) or other resources for the benefit of the community or provide input\ninto other projects concerned with software metadata.\n",
"title": "Software metadata: How much is enough?"
} | null | null | null | null | true | null | 253 | null | Default | null | null |
null | {
"abstract": " Recently, digital music libraries have been developed and can be plainly\naccessed. Latest research showed that current organization and retrieval of\nmusic tracks based on album information are inefficient. Moreover, they\ndemonstrated that people use emotion tags for music tracks in order to search\nand retrieve them. In this paper, we discuss separability of a set of emotional\nlabels, proposed in the categorical emotion expression, using Fisher's\nseparation theorem. We determine a set of adjectives to tag music parts: happy,\nsad, relaxing, exciting, epic and thriller. Temporal, frequency and energy\nfeatures have been extracted from the music parts. It could be seen that the\nmaximum separability within the extracted features occurs between relaxing and\nepic music parts. Finally, we have trained a classifier using Support Vector\nMachines to automatically recognize and generate emotional labels for a music\npart. Accuracy for recognizing each label has been calculated; where the\nresults show that epic music can be recognized more accurately (77.4%),\ncomparing to the other types of music.\n",
"title": "A Categorical Approach for Recognizing Emotional Effects of Music"
} | null | null | null | null | true | null | 254 | null | Default | null | null |
null | {
"abstract": " One key requirement for effective supply chain management is the quality of\nits inventory management. Various inventory management methods are typically\nemployed for different types of products based on their demand patterns,\nproduct attributes, and supply network. In this paper, our goal is to develop\nrobust demand prediction methods for weather sensitive products at retail\nstores. We employ historical datasets from Walmart, whose customers and markets\nare often exposed to extreme weather events which can have a huge impact on\nsales regarding the affected stores and products. We want to accurately predict\nthe sales of 111 potentially weather-sensitive products around the time of\nmajor weather events at 45 of Walmart retails locations in the U.S.\nIntuitively, we may expect an uptick in the sales of umbrellas before a big\nthunderstorm, but it is difficult for replenishment managers to predict the\nlevel of inventory needed to avoid being out-of-stock or overstock during and\nafter that storm. While they rely on a variety of vendor tools to predict sales\naround extreme weather events, they mostly employ a time-consuming process that\nlacks a systematic measure of effectiveness. We employ all the methods critical\nto any analytics project and start with data exploration. Critical features are\nextracted from the raw historical dataset for demand forecasting accuracy and\nrobustness. In particular, we employ Artificial Neural Network for forecasting\ndemand for each product sold around the time of major weather events. Finally,\nwe evaluate our model to evaluate their accuracy and robustness.\n",
"title": "Utilizing artificial neural networks to predict demand for weather-sensitive products at retail stores"
} | null | null | null | null | true | null | 255 | null | Default | null | null |
null | {
"abstract": " We propose a deformable generator model to disentangle the appearance and\ngeometric information from images into two independent latent vectors. The\nappearance generator produces the appearance information, including color,\nillumination, identity or category, of an image. The geometric generator\nproduces displacement of the coordinates of each pixel and performs geometric\nwarping, such as stretching and rotation, on the appearance generator to obtain\nthe final synthesized image. The proposed model can learn both representations\nfrom image data in an unsupervised manner. The learned geometric generator can\nbe conveniently transferred to the other image datasets to facilitate\ndownstream AI tasks.\n",
"title": "Deformable Generator Network: Unsupervised Disentanglement of Appearance and Geometry"
} | null | null | null | null | true | null | 256 | null | Default | null | null |
null | {
"abstract": " The Gaussian kernel is a very popular kernel function used in many\nmachine-learning algorithms, especially in support vector machines (SVM). For\nnonlinear training instances in machine learning, it often outperforms\npolynomial kernels in model accuracy. We use Gaussian kernel profoundly in\nformulating nonlinear classical SVM. In the recent research, P. Rebentrost\net.al. discuss a very elegant quantum version of least square support vector\nmachine using the quantum version of polynomial kernel, which is exponentially\nfaster than the classical counterparts. In this paper, we have demonstrated a\nquantum version of the Gaussian kernel and analyzed its complexity in the\ncontext of quantum SVM. Our analysis shows that the computational complexity of\nthe quantum Gaussian kernel is O(\\epsilon^(-1)logN) with N-dimensional\ninstances and \\epsilon with a Taylor remainder error term |R_m (\\epsilon^(-1)\nlogN)|.\n",
"title": "Gaussian Kernel in Quantum Paradigm"
} | null | null | [
"Computer Science"
]
| null | true | null | 257 | null | Validated | null | null |
null | {
"abstract": " Security, privacy, and fairness have become critical in the era of data\nscience and machine learning. More and more we see that achieving universally\nsecure, private, and fair systems is practically impossible. We have seen for\nexample how generative adversarial networks can be used to learn about the\nexpected private training data; how the exploitation of additional data can\nreveal private information in the original one; and how what looks like\nunrelated features can teach us about each other. Confronted with this\nchallenge, in this paper we open a new line of research, where the security,\nprivacy, and fairness is learned and used in a closed environment. The goal is\nto ensure that a given entity (e.g., the company or the government), trusted to\ninfer certain information with our data, is blocked from inferring protected\ninformation from it. For example, a hospital might be allowed to produce\ndiagnosis on the patient (the positive task), without being able to infer the\ngender of the subject (negative task). Similarly, a company can guarantee that\ninternally it is not using the provided data for any undesired task, an\nimportant goal that is not contradicting the virtually impossible challenge of\nblocking everybody from the undesired task. We design a system that learns to\nsucceed on the positive task while simultaneously fail at the negative one, and\nillustrate this with challenging cases where the positive task is actually\nharder than the negative one being blocked. Fairness, to the information in the\nnegative task, is often automatically obtained as a result of this proposed\napproach. The particular framework and examples open the door to security,\nprivacy, and fairness in very important closed scenarios, ranging from private\ndata accumulation companies like social networks to law-enforcement and\nhospitals.\n",
"title": "Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems"
} | null | null | null | null | true | null | 258 | null | Default | null | null |
null | {
"abstract": " The difficulty of modeling energy consumption in communication systems leads\nto challenges in energy harvesting (EH) systems, in which nodes scavenge energy\nfrom their environment. An EH receiver must harvest enough energy for\ndemodulating and decoding. The energy required depends upon factors, like code\nrate and signal-to-noise ratio, which can be adjusted dynamically. We consider\na receiver which harvests energy from ambient sources and the transmitter,\nmeaning the received signal is used for both EH and information decoding.\nAssuming a generalized function for energy consumption, we maximize the total\nnumber of information bits decoded, under both average and peak power\nconstraints at the transmitter, by carefully optimizing the power used for EH,\npower used for information transmission, fraction of time for EH, and code\nrate. For transmission over a single block, we find there exist problem\nparameters for which either maximizing power for information transmission or\nmaximizing power for EH is optimal. In the general case, the optimal solution\nis a tradeoff of the two. For transmission over multiple blocks, we give an\nupper bound on performance and give sufficient and necessary conditions to\nachieve this bound. Finally, we give some numerical results to illustrate our\nresults and analysis.\n",
"title": "Performance of Energy Harvesting Receivers with Power Optimization"
} | null | null | [
"Computer Science"
]
| null | true | null | 259 | null | Validated | null | null |
null | {
"abstract": " This paper studies a recently proposed continuous-time distributed\nself-appraisal model with time-varying interactions among a network of $n$\nindividuals which are characterized by a sequence of time-varying relative\ninteraction matrices. The model describes the evolution of the\nsocial-confidence levels of the individuals via a reflected appraisal mechanism\nin real time. We first show by example that when the relative interaction\nmatrices are stochastic (not doubly stochastic), the social-confidence levels\nof the individuals may not converge to a steady state. We then show that when\nthe relative interaction matrices are doubly stochastic, the $n$ individuals'\nself-confidence levels will all converge to $1/n$, which indicates a democratic\nstate, exponentially fast under appropriate assumptions, and provide an\nexplicit expression of the convergence rate.\n",
"title": "On Convergence Rate of a Continuous-Time Distributed Self-Appraisal Model with Time-Varying Relative Interaction Matrices"
} | null | null | null | null | true | null | 260 | null | Default | null | null |
null | {
"abstract": " When the brain receives input from multiple sensory systems, it is faced with\nthe question of whether it is appropriate to process the inputs in combination,\nas if they originated from the same event, or separately, as if they originated\nfrom distinct events. Furthermore, it must also have a mechanism through which\nit can keep sensory inputs calibrated to maintain the accuracy of its internal\nrepresentations. We have developed a neural network architecture capable of i)\napproximating optimal multisensory spatial integration, based on Bayesian\ncausal inference, and ii) recalibrating the spatial encoding of sensory\nsystems. The architecture is based on features of the dorsal processing\nhierarchy, including the spatial tuning properties of unisensory neurons and\nthe convergence of different sensory inputs onto multisensory neurons.\nFurthermore, we propose that these unisensory and multisensory neurons play\ndual roles in i) encoding spatial location as separate or integrated estimates\nand ii) accumulating evidence for the independence or relatedness of\nmultisensory stimuli. We further propose that top-down feedback connections\nspanning the dorsal pathway play key a role in recalibrating spatial encoding\nat the level of early unisensory cortices. Our proposed architecture provides\npossible explanations for a number of human electrophysiological and\nneuroimaging results and generates testable predictions linking neurophysiology\nwith behaviour.\n",
"title": "Closing the loop on multisensory interactions: A neural architecture for multisensory causal inference and recalibration"
} | null | null | null | null | true | null | 261 | null | Default | null | null |
null | {
"abstract": " A common problem in large-scale data analysis is to approximate a matrix\nusing a combination of specifically sampled rows and columns, known as CUR\ndecomposition. Unfortunately, in many real-world environments, the ability to\nsample specific individual rows or columns of the matrix is limited by either\nsystem constraints or cost. In this paper, we consider matrix approximation by\nsampling predefined \\emph{blocks} of columns (or rows) from the matrix. We\npresent an algorithm for sampling useful column blocks and provide novel\nguarantees for the quality of the approximation. This algorithm has application\nin problems as diverse as biometric data analysis to distributed computing. We\ndemonstrate the effectiveness of the proposed algorithms for computing the\nBlock CUR decomposition of large matrices in a distributed setting with\nmultiple nodes in a compute cluster, where such blocks correspond to columns\n(or rows) of the matrix stored on the same node, which can be retrieved with\nmuch less overhead than retrieving individual columns stored across different\nnodes. In the biometric setting, the rows correspond to different users and\ncolumns correspond to users' biometric reaction to external stimuli, {\\em\ne.g.,}~watching video content, at a particular time instant. There is\nsignificant cost in acquiring each user's reaction to lengthy content so we\nsample a few important scenes to approximate the biometric response. An\nindividual time sample in this use case cannot be queried in isolation due to\nthe lack of context that caused that biometric reaction. Instead, collections\nof time segments ({\\em i.e.,} blocks) must be presented to the user. The\npractical application of these algorithms is shown via experimental results\nusing real-world user biometric data from a content testing environment.\n",
"title": "Block CUR: Decomposing Matrices using Groups of Columns"
} | null | null | null | null | true | null | 262 | null | Default | null | null |
null | {
"abstract": " The unusually high surface tension of room temperature liquid metal is\nmolding it as unique material for diverse newly emerging areas. However, unlike\nits practices on earth, such metal fluid would display very different behaviors\nwhen working in space where gravity disappears and surface property dominates\nthe major physics. So far, few direct evidences are available to understand\nsuch effect which would impede further exploration of liquid metal use for\nspace. Here to preliminarily probe into this intriguing issue, a low cost\nexperimental strategy to simulate microgravity environment on earth was\nproposed through adopting bridges with high enough free falling distance as the\ntest platform. Then using digital cameras amounted along x, y, z directions on\noutside wall of the transparent container with liquid metal and allied solution\ninside, synchronous observations on the transient flow and transformational\nactivities of liquid metal were performed. Meanwhile, an unmanned aerial\nvehicle was adopted to record the whole free falling dynamics of the test\ncapsule from the far end which can help justify subsequent experimental\nprocedures. A series of typical fundamental phenomena were thus observed as:\n(a) A relatively large liquid metal object would spontaneously transform from\nits original planar pool state into a sphere and float in the container if\ninitiating the free falling; (b) The liquid metal changes its three-dimensional\nshape due to dynamic microgravity strength due to free falling and rebound of\nthe test capsule; and (c) A quick spatial transformation of liquid metal\nimmersed in the solution can easily be induced via external electrical fields.\nThe mechanisms of the surface tension driven liquid metal actuation in space\nwere interpreted. All these findings indicated that microgravity effect should\nbe fully treated in developing future generation liquid metal space\ntechnologies.\n",
"title": "Synchronous Observation on the Spontaneous Transformation of Liquid Metal under Free Falling Microgravity Situation"
} | null | null | null | null | true | null | 263 | null | Default | null | null |
null | {
"abstract": " Hamiltonian Monte Carlo (HMC) is a powerful Markov chain Monte Carlo (MCMC)\nmethod for performing approximate inference in complex probabilistic models of\ncontinuous variables. In common with many MCMC methods, however, the standard\nHMC approach performs poorly in distributions with multiple isolated modes. We\npresent a method for augmenting the Hamiltonian system with an extra continuous\ntemperature control variable which allows the dynamic to bridge between\nsampling a complex target distribution and a simpler unimodal base\ndistribution. This augmentation both helps improve mixing in multimodal targets\nand allows the normalisation constant of the target distribution to be\nestimated. The method is simple to implement within existing HMC code,\nrequiring only a standard leapfrog integrator. We demonstrate experimentally\nthat the method is competitive with annealed importance sampling and simulating\ntempering methods at sampling from challenging multimodal distributions and\nestimating their normalising constants.\n",
"title": "Continuously tempered Hamiltonian Monte Carlo"
} | null | null | null | null | true | null | 264 | null | Default | null | null |
null | {
"abstract": " We present a new method for the automated synthesis of digital controllers\nwith formal safety guarantees for systems with nonlinear dynamics, noisy output\nmeasurements, and stochastic disturbances. Our method derives digital\ncontrollers such that the corresponding closed-loop system, modeled as a\nsampled-data stochastic control system, satisfies a safety specification with\nprobability above a given threshold. The proposed synthesis method alternates\nbetween two steps: generation of a candidate controller pc, and verification of\nthe candidate. pc is found by maximizing a Monte Carlo estimate of the safety\nprobability, and by using a non-validated ODE solver for simulating the system.\nSuch a candidate is therefore sub-optimal but can be generated very rapidly. To\nrule out unstable candidate controllers, we prove and utilize Lyapunov's\nindirect method for instability of sampled-data nonlinear systems. In the\nsubsequent verification step, we use a validated solver based on SMT\n(Satisfiability Modulo Theories) to compute a numerically and statistically\nvalid confidence interval for the safety probability of pc. If the probability\nso obtained is not above the threshold, we expand the search space for\ncandidates by increasing the controller degree. We evaluate our technique on\nthree case studies: an artificial pancreas model, a powertrain control model,\nand a quadruple-tank process.\n",
"title": "Automated Synthesis of Safe Digital Controllers for Sampled-Data Stochastic Nonlinear Systems"
} | null | null | [
"Computer Science"
]
| null | true | null | 265 | null | Validated | null | null |
null | {
"abstract": " In the present paper we consider numerical methods to solve the discrete\nSchrödinger equation with a time dependent Hamiltonian (motivated by problems\nencountered in the study of spin systems). We will consider both short-range\ninteractions, which lead to evolution equations involving sparse matrices, and\nlong-range interactions, which lead to dense matrices. Both of these settings\nshow very different computational characteristics. We use Magnus integrators\nfor time integration and employ a framework based on Leja interpolation to\ncompute the resulting action of the matrix exponential. We consider both\ntraditional Magnus integrators (which are extensively used for these types of\nproblems in the literature) as well as the recently developed commutator-free\nMagnus integrators and implement them on modern CPU and GPU (graphics\nprocessing unit) based systems.\nWe find that GPUs can yield a significant speed-up (up to a factor of $10$ in\nthe dense case) for these types of problems. In the sparse case GPUs are only\nadvantageous for large problem sizes and the achieved speed-ups are more\nmodest. In most cases the commutator-free variant is superior but especially on\nthe GPU this advantage is rather small. In fact, none of the advantage of\ncommutator-free methods on GPUs (and on multi-core CPUs) is due to the\nelimination of commutators. This has important consequences for the design of\nmore efficient numerical methods.\n",
"title": "Magnus integrators on multicore CPUs and GPUs"
} | null | null | null | null | true | null | 266 | null | Default | null | null |
null | {
"abstract": " This paper re-investigates the estimation of multiple factor models relaxing\nthe convention that the number of factors is small and using a new approach for\nidentifying factors. We first obtain the collection of all possible factors and\nthen provide a simultaneous test, security by security, of which factors are\nsignificant. Since the collection of risk factors is large and highly\ncorrelated, high-dimension methods (including the LASSO and prototype\nclustering) have to be used. The multi-factor model is shown to have a\nsignificantly better fit than the Fama-French 5-factor model. Robustness tests\nare also provided.\n",
"title": "High Dimensional Estimation and Multi-Factor Models"
} | null | null | null | null | true | null | 267 | null | Default | null | null |
null | {
"abstract": " We experimentally confirmed the threshold behavior and scattering length\nscaling law of the three-body loss coefficients in an ultracold spin-polarized\ngas of $^6$Li atoms near a $p$-wave Feshbach resonance. We measured the\nthree-body loss coefficients as functions of temperature and scattering volume,\nand found that the threshold law and the scattering length scaling law hold in\nlimited temperature and magnetic field regions. We also found that the\nbreakdown of the scaling laws is due to the emergence of the effective-range\nterm. This work is an important first step toward full understanding of the\nloss of identical fermions with $p$-wave interactions.\n",
"title": "Scaling Law for Three-body Collisions in Identical Fermions with $p$-wave Interactions"
} | null | null | null | null | true | null | 268 | null | Default | null | null |
null | {
"abstract": " The paper proposes an expanded version of the Local Variance Gamma model of\nCarr and Nadtochiy by adding drift to the governing underlying process. Still\nin this new model it is possible to derive an ordinary differential equation\nfor the option price which plays a role of Dupire's equation for the standard\nlocal volatility model. It is shown how calibration of multiple smiles (the\nwhole local volatility surface) can be done in such a case. Further, assuming\nthe local variance to be a piecewise linear function of strike and piecewise\nconstant function of time this ODE is solved in closed form in terms of\nConfluent hypergeometric functions. Calibration of the model to market smiles\ndoes not require solving any optimization problem and, in contrast, can be done\nterm-by-term by solving a system of non-linear algebraic equations for each\nmaturity, which is fast.\n",
"title": "An Expanded Local Variance Gamma model"
} | null | null | null | null | true | null | 269 | null | Default | null | null |
null | {
"abstract": " In processing human produced text using natural language processing (NLP)\ntechniques, two fundamental subtasks that arise are (i) segmentation of the\nplain text into meaningful subunits (e.g., entities), and (ii) dependency\nparsing, to establish relations between subunits. In this paper, we develop a\nrelatively simple and effective neural joint model that performs both\nsegmentation and dependency parsing together, instead of one after the other as\nin most state-of-the-art works. We will focus in particular on the real estate\nad setting, aiming to convert an ad to a structured description, which we name\nproperty tree, comprising the tasks of (1) identifying important entities of a\nproperty (e.g., rooms) from classifieds and (2) structuring them into a tree\nformat. In this work, we propose a new joint model that is able to tackle the\ntwo tasks simultaneously and construct the property tree by (i) avoiding the\nerror propagation that would arise from the subtasks one after the other in a\npipelined fashion, and (ii) exploiting the interactions between the subtasks.\nFor this purpose, we perform an extensive comparative study of the pipeline\nmethods and the new proposed joint model, reporting an improvement of over\nthree percentage points in the overall edge F1 score of the property tree.\nAlso, we propose attention methods, to encourage our model to focus on salient\ntokens during the construction of the property tree. Thus we experimentally\ndemonstrate the usefulness of attentive neural architectures for the proposed\njoint model, showcasing a further improvement of two percentage points in edge\nF1 score for our application.\n",
"title": "An attentive neural architecture for joint segmentation and parsing and its application to real estate ads"
} | null | null | null | null | true | null | 270 | null | Default | null | null |
null | {
"abstract": " The asymptotic variance of the maximum likelihood estimate is proved to\ndecrease when the maximization is restricted to a subspace that contains the\ntrue parameter value. Maximum likelihood estimation allows a systematic fitting\nof covariance models to the sample, which is important in data assimilation.\nThe hierarchical maximum likelihood approach is applied to the spectral\ndiagonal covariance model with different parameterizations of eigenvalue decay,\nand to the sparse inverse covariance model with specified parameter values on\ndifferent sets of nonzero entries. It is shown computationally that using\nsmaller sets of parameters can decrease the sampling noise in high dimension\nsubstantially.\n",
"title": "Multilevel maximum likelihood estimation with application to covariance matrices"
} | null | null | null | null | true | null | 271 | null | Default | null | null |
null | {
"abstract": " Fully automating machine learning pipelines is one of the key challenges of\ncurrent artificial intelligence research, since practical machine learning\noften requires costly and time-consuming human-powered processes such as model\ndesign, algorithm development, and hyperparameter tuning. In this paper, we\nverify that automated architecture search synergizes with the effect of\ngradient-based meta learning. We adopt the progressive neural architecture\nsearch \\cite{liu:pnas_google:DBLP:journals/corr/abs-1712-00559} to find optimal\narchitectures for meta-learners. The gradient based meta-learner whose\narchitecture was automatically found achieved state-of-the-art results on the\n5-shot 5-way Mini-ImageNet classification problem with $74.65\\%$ accuracy,\nwhich is $11.54\\%$ improvement over the result obtained by the first\ngradient-based meta-learner called MAML\n\\cite{finn:maml:DBLP:conf/icml/FinnAL17}. To our best knowledge, this work is\nthe first successful neural architecture search implementation in the context\nof meta learning.\n",
"title": "Auto-Meta: Automated Gradient Based Meta Learner Search"
} | null | null | null | null | true | null | 272 | null | Default | null | null |
null | {
"abstract": " We provide a comprehensive study of the convergence of forward-backward\nalgorithm under suitable geometric conditions leading to fast rates. We present\nseveral new results and collect in a unified view a variety of results\nscattered in the literature, often providing simplified proofs. Novel\ncontributions include the analysis of infinite dimensional convex minimization\nproblems, allowing the case where minimizers might not exist. Further, we\nanalyze the relation between different geometric conditions, and discuss novel\nconnections with a priori conditions in linear inverse problems, including\nsource conditions, restricted isometry properties and partial smoothness.\n",
"title": "Convergence of the Forward-Backward Algorithm: Beyond the Worst Case with the Help of Geometry"
} | null | null | [
"Mathematics",
"Statistics"
]
| null | true | null | 273 | null | Validated | null | null |
null | {
"abstract": " Magnetic Particle Imaging (MPI) is a novel imaging modality with important\napplications such as angiography, stem cell tracking, and cancer imaging.\nRecently, there have been efforts to increase the functionality of MPI via\nmulti-color imaging methods that can distinguish the responses of different\nnanoparticles, or nanoparticles in different environmental conditions. The\nproposed techniques typically rely on extensive calibrations that capture the\ndifferences in the harmonic responses of the nanoparticles. In this work, we\npropose a method to directly estimate the relaxation time constant of the\nnanoparticles from the MPI signal, which is then used to generate a multi-color\nrelaxation map. The technique is based on the underlying mirror symmetry of the\nadiabatic MPI signal when the same region is scanned back and forth. We\nvalidate the proposed method via extensive simulations, and via experiments on\nour in-house Magnetic Particle Spectrometer (MPS) setup at 550 Hz and our\nin-house MPI scanner at 9.7 kHz. Our results show that nanoparticles can be\nsuccessfully distinguished with the proposed technique, without any calibration\nor prior knowledge about the nanoparticles.\n",
"title": "Calibration-Free Relaxation-Based Multi-Color Magnetic Particle Imaging"
} | null | null | [
"Physics"
]
| null | true | null | 274 | null | Validated | null | null |
null | {
"abstract": " Draft of textbook chapter on neural machine translation. a comprehensive\ntreatment of the topic, ranging from introduction to neural networks,\ncomputation graphs, description of the currently dominant attentional\nsequence-to-sequence model, recent refinements, alternative architectures and\nchallenges. Written as chapter for the textbook Statistical Machine\nTranslation. Used in the JHU Fall 2017 class on machine translation.\n",
"title": "Neural Machine Translation"
} | null | null | null | null | true | null | 275 | null | Default | null | null |
null | {
"abstract": " Let $D$ be a bounded domain $D$ in $\\mathbb R^n $ with infinitely smooth\nboundary and $n$ is odd. We prove that if the volume cut off from the domain by\na hyperplane is an algebraic function of the hyperplane, free of real singular\npoints, then the domain is an ellipsoid. This partially answers a question of\nV.I. Arnold: whether odd-dimensional ellipsoids are the only algebraically\nintegrable domains?\n",
"title": "On algebraically integrable domains in Euclidean spaces"
} | null | null | null | null | true | null | 276 | null | Default | null | null |
null | {
"abstract": " The novel unseen classes can be formulated as the extreme values of known\nclasses. This inspired the recent works on open-set recognition\n\\cite{Scheirer_2013_TPAMI,Scheirer_2014_TPAMIb,EVM}, which however can have no\nway of naming the novel unseen classes. To solve this problem, we propose the\nExtreme Value Learning (EVL) formulation to learn the mapping from visual\nfeature to semantic space. To model the margin and coverage distributions of\neach class, the Vocabulary-informed Learning (ViL) is adopted by using vast\nopen vocabulary in the semantic space. Essentially, by incorporating the EVL\nand ViL, we for the first time propose a novel semantic embedding paradigm --\nVocabulary-informed Extreme Value Learning (ViEVL), which embeds the visual\nfeatures into semantic space in a probabilistic way. The learned embedding can\nbe directly used to solve supervised learning, zero-shot and open set\nrecognition simultaneously. Experiments on two benchmark datasets demonstrate\nthe effectiveness of proposed frameworks.\n",
"title": "Vocabulary-informed Extreme Value Learning"
} | null | null | null | null | true | null | 277 | null | Default | null | null |
null | {
"abstract": " In this paper, we study the performance of two cross-layer optimized dynamic\nrouting techniques for radio interference mitigation across multiple coexisting\nwireless body area networks (BANs), based on real-life measurements. At the\nnetwork layer, the best route is selected according to channel state\ninformation from the physical layer, associated with low duty cycle TDMA at the\nMAC layer. The routing techniques (i.e., shortest path routing (SPR), and novel\ncooperative multi-path routing (CMR) incorporating 3-branch selection\ncombining) perform real-time and reliable data transfer across BANs operating\nnear the 2.4 GHz ISM band. An open-access experimental data set of 'everyday'\nmixed-activities is used for analyzing the proposed cross-layer optimization.\nWe show that CMR gains up to 14 dB improvement with 8.3% TDMA duty cycle, and\neven 10 dB improvement with 0.2% TDMA duty cycle over SPR, at 10% outage\nprobability at a realistic signal-to-interference-plus-noise ratio (SINR).\nAcceptable packet delivery ratios (PDR) and spectral efficiencies are obtained\nfrom SPR and CMR with reasonably sensitive receivers across a range of TDMA low\nduty cycles, with up to 9 dB improvement of CMR over SPR at 90% PDR. The\ndistribution fits for received SINR through routing are also derived and\nvalidated with theoretical analysis.\n",
"title": "Cross-layer optimized routing with low duty cycle TDMA across multiple wireless body area networks"
} | null | null | [
"Computer Science"
]
| null | true | null | 278 | null | Validated | null | null |
null | {
"abstract": " In recent years, the proliferation of online resumes and the need to evaluate\nlarge populations of candidates for on-site and virtual teams have led to a\ngrowing interest in automated team-formation. Given a large pool of candidates,\nthe general problem requires the selection of a team of experts to complete a\ngiven task. Surprisingly, while ongoing research has studied numerous\nvariations with different constraints, it has overlooked a factor with a\nwell-documented impact on team cohesion and performance: team faultlines.\nAddressing this gap is challenging, as the available measures for faultlines in\nexisting teams cannot be efficiently applied to faultline optimization. In this\nwork, we meet this challenge with a new measure that can be efficiently used\nfor both faultline measurement and minimization. We then use the measure to\nsolve the problem of automatically partitioning a large population into\nlow-faultline teams. By introducing faultlines to the team-formation\nliterature, our work creates exciting opportunities for algorithmic work on\nfaultline optimization, as well as on work that combines and studies the\nconnection of faultlines with other influential team characteristics.\n",
"title": "A Team-Formation Algorithm for Faultline Minimization"
} | null | null | null | null | true | null | 279 | null | Default | null | null |
null | {
"abstract": " Deep convolutional neural networks (CNNs) have recently achieved great\nsuccess in many visual recognition tasks. However, existing deep neural network\nmodels are computationally expensive and memory intensive, hindering their\ndeployment in devices with low memory resources or in applications with strict\nlatency requirements. Therefore, a natural thought is to perform model\ncompression and acceleration in deep networks without significantly decreasing\nthe model performance. During the past few years, tremendous progress has been\nmade in this area. In this paper, we survey the recent advanced techniques for\ncompacting and accelerating CNNs model developed. These techniques are roughly\ncategorized into four schemes: parameter pruning and sharing, low-rank\nfactorization, transferred/compact convolutional filters, and knowledge\ndistillation. Methods of parameter pruning and sharing will be described at the\nbeginning, after that the other techniques will be introduced. For each scheme,\nwe provide insightful analysis regarding the performance, related applications,\nadvantages, and drawbacks etc. Then we will go through a few very recent\nadditional successful methods, for example, dynamic capacity networks and\nstochastic depths networks. After that, we survey the evaluation matrix, the\nmain datasets used for evaluating the model performance and recent benchmarking\nefforts. Finally, we conclude this paper, discuss remaining challenges and\npossible directions on this topic.\n",
"title": "A Survey of Model Compression and Acceleration for Deep Neural Networks"
} | null | null | null | null | true | null | 280 | null | Default | null | null |
null | {
"abstract": " The task of calibration is to retrospectively adjust the outputs from a\nmachine learning model to provide better probability estimates on the target\nvariable. While calibration has been investigated thoroughly in classification,\nit has not yet been well-established for regression tasks. This paper considers\nthe problem of calibrating a probabilistic regression model to improve the\nestimated probability densities over the real-valued targets. We propose to\ncalibrate a regression model through the cumulative probability density, which\ncan be derived from calibrating a multi-class classifier. We provide three\nnon-parametric approaches to solve the problem, two of which provide empirical\nestimates and the third providing smooth density estimates. The proposed\napproaches are experimentally evaluated to show their ability to improve the\nperformance of regression models on the predictive likelihood.\n",
"title": "Non-Parametric Calibration of Probabilistic Regression"
} | null | null | null | null | true | null | 281 | null | Default | null | null |
null | {
"abstract": " Cyclotron resonant scattering features (CRSFs) are formed by scattering of\nX-ray photons off quantized plasma electrons in the strong magnetic field (of\nthe order 10^12 G) close to the surface of an accreting X-ray pulsar. The line\nprofiles of CRSFs cannot be described by an analytic expression. Numerical\nmethods such as Monte Carlo (MC) simulations of the scattering processes are\nrequired in order to predict precise line shapes for a given physical setup,\nwhich can be compared to observations to gain information about the underlying\nphysics in these systems.\nA versatile simulation code is needed for the generation of synthetic\ncyclotron lines. Sophisticated geometries should be investigatable by making\ntheir simulation possible for the first time.\nThe simulation utilizes the mean free path tables described in the first\npaper of this series for the fast interpolation of propagation lengths. The\ncode is parallelized to make the very time consuming simulations possible on\nconvenient time scales. Furthermore, it can generate responses to\nmono-energetic photon injections, producing Green's functions, which can be\nused later to generate spectra for arbitrary continua.\nWe develop a new simulation code to generate synthetic cyclotron lines for\ncomplex scenarios, allowing for unprecedented physical interpretation of the\nobserved data. An associated XSPEC model implementation is used to fit\nsynthetic line profiles to NuSTAR data of Cep X-4. The code has been developed\nwith the main goal of overcoming previous geometrical constraints in MC\nsimulations of CRSFs. By applying this code also to more simple, classic\ngeometries used in previous works, we furthermore address issues of code\nverification and cross-comparison of various models. The XSPEC model and the\nGreen's function tables are available online at\nthis http URL .\n",
"title": "Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process"
} | null | null | null | null | true | null | 282 | null | Default | null | null |
null | {
"abstract": " This paper is devoted to the study of the construction of new quantum MDS\ncodes. Based on constacyclic codes over Fq2 , we derive four new families of\nquantum MDS codes, one of which is an explicit generalization of the\nconstruction given in Theorem 7 in [22]. We also extend the result of Theorem\n3:3 given in [17].\n",
"title": "New quantum mds constacylıc codes"
} | null | null | null | null | true | null | 283 | null | Default | null | null |
null | {
"abstract": " We present a unified categorical treatment of completeness theorems for\nseveral classical and intuitionistic infinitary logics with a proposed\naxiomatization. This provides new completeness theorems and subsumes previous\nones by Gödel, Kripke, Beth, Karp, Joyal, Makkai and Fourman/Grayson. As an\napplication we prove, using large cardinals assumptions, the disjunction and\nexistence properties for infinitary intuitionistic first-order logics.\n",
"title": "Infinitary first-order categorical logic"
} | null | null | null | null | true | null | 284 | null | Default | null | null |
null | {
"abstract": " Recent advances in stochastic gradient techniques have made it possible to\nestimate posterior distributions from large datasets via Markov Chain Monte\nCarlo (MCMC). However, when the target posterior is multimodal, mixing\nperformance is often poor. This results in inadequate exploration of the\nposterior distribution. A framework is proposed to improve the sampling\nefficiency of stochastic gradient MCMC, based on Hamiltonian Monte Carlo. A\ngeneralized kinetic function is leveraged, delivering superior stationary\nmixing, especially for multimodal distributions. Techniques are also discussed\nto overcome the practical issues introduced by this generalization. It is shown\nthat the proposed approach is better at exploring complex multimodal posterior\ndistributions, as demonstrated on multiple applications and in comparison with\nother stochastic gradient MCMC methods.\n",
"title": "Stochastic Gradient Monomial Gamma Sampler"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 285 | null | Validated | null | null |
null | {
"abstract": " We study the problems related to the estimation of the Gini index in presence\nof a fat-tailed data generating process, i.e. one in the stable distribution\nclass with finite mean but infinite variance (i.e. with tail index\n$\\alpha\\in(1,2)$). We show that, in such a case, the Gini coefficient cannot be\nreliably estimated using conventional nonparametric methods, because of a\ndownward bias that emerges under fat tails. This has important implications for\nthe ongoing discussion about economic inequality.\nWe start by discussing how the nonparametric estimator of the Gini index\nundergoes a phase transition in the symmetry structure of its asymptotic\ndistribution, as the data distribution shifts from the domain of attraction of\na light-tailed distribution to that of a fat-tailed one, especially in the case\nof infinite variance. We also show how the nonparametric Gini bias increases\nwith lower values of $\\alpha$. We then prove that maximum likelihood estimation\noutperforms nonparametric methods, requiring a much smaller sample size to\nreach efficiency.\nFinally, for fat-tailed data, we provide a simple correction mechanism to the\nsmall sample bias of the nonparametric estimator based on the distance between\nthe mode and the mean of its asymptotic distribution.\n",
"title": "Gini estimation under infinite variance"
} | null | null | [
"Statistics"
]
| null | true | null | 286 | null | Validated | null | null |
null | {
"abstract": " Training a neural network using backpropagation algorithm requires passing\nerror gradients sequentially through the network. The backward locking prevents\nus from updating network layers in parallel and fully leveraging the computing\nresources. Recently, there are several works trying to decouple and parallelize\nthe backpropagation algorithm. However, all of them suffer from severe accuracy\nloss or memory explosion when the neural network is deep. To address these\nchallenging issues, we propose a novel parallel-objective formulation for the\nobjective function of the neural network. After that, we introduce features\nreplay algorithm and prove that it is guaranteed to converge to critical points\nfor the non-convex problem under certain conditions. Finally, we apply our\nmethod to training deep convolutional neural networks, and the experimental\nresults show that the proposed method achieves {faster} convergence, {lower}\nmemory consumption, and {better} generalization error than compared methods.\n",
"title": "Training Neural Networks Using Features Replay"
} | null | null | null | null | true | null | 287 | null | Default | null | null |
null | {
"abstract": " We study a diagrammatic categorification (the \"anti-spherical category\") of\nthe anti-spherical module for any Coxeter group. We deduce that Deodhar's\n(sign) parabolic Kazhdan-Lusztig polynomials have non-negative coefficients,\nand that a monotonicity conjecture of Brenti's holds. The main technical\nobservation is a localisation procedure for the anti-spherical category, from\nwhich we construct a \"light leaves\" basis of morphisms. Our techniques may be\nused to calculate many new elements of the $p$-canonical basis in the\nanti-spherical module. The results use generators and relations for Soergel\nbimodules (\"Soergel calculus\") in a crucial way.\n",
"title": "The anti-spherical category"
} | null | null | null | null | true | null | 288 | null | Default | null | null |
null | {
"abstract": " Recently, we have predicted that the modulation instability of optical vortex\nsolitons propagating in nonlinear colloidal suspensions with exponential\nsaturable nonlinearity leads to formation of necklace beams (NBs)\n[S.~Z.~Silahli, W.~Walasik and N.~M.~Litchinitser, Opt.~Lett., \\textbf{40},\n5714 (2015)]. Here, we investigate the dynamics of NB formation and\npropagation, and show that the distance at which the NB is formed depends on\nthe input power of the vortex beam. Moreover, we show that the NB trajectories\nare not necessarily tangent to the initial vortex ring, and that their\nvelocities have components stemming both from the beam diffraction and from the\nbeam orbital angular momentum. We also demonstrate the generation of twisted\nsolitons and analyze the influence of losses on their propagation. Finally, we\ninvestigate the conservation of the orbital angular momentum in necklace and\ntwisted beams. Our studies, performed in ideal lossless media and in realistic\ncolloidal suspensions with losses, provide a detailed description of NB\ndynamics and may be useful in studies of light propagation in highly scattering\ncolloids and biological samples.\n",
"title": "Trajectories and orbital angular momentum of necklace beams in nonlinear colloidal suspensions"
} | null | null | null | null | true | null | 289 | null | Default | null | null |
null | {
"abstract": " A three-dimensional spin current solver based on a generalised spin\ndrift-diffusion description, including the spin Hall effect, is integrated with\na magnetisation dynamics solver. The resulting model is shown to simultaneously\nreproduce the spin-orbit torques generated using the spin Hall effect, spin\npumping torques generated by magnetisation dynamics in multilayers, as well as\nthe spin transfer torques acting on magnetisation regions with spatial\ngradients, whilst field-like and spin-like torques are reproduced in a spin\nvalve geometry. Two approaches to modelling interfaces are analysed, one based\non the spin mixing conductance and the other based on continuity of spin\ncurrents where the spin dephasing length governs the absorption of transverse\nspin components. In both cases analytical formulas are derived for the\nspin-orbit torques in a heavy metal / ferromagnet bilayer geometry, showing in\ngeneral both field-like and damping-like torques are generated. The limitations\nof the analytical approach are discussed, showing that even in a simple bilayer\ngeometry, due to the non-uniformity of the spin currents, a full\nthree-dimensional treatment is required. Finally the model is applied to the\nquantitative analysis of the spin Hall angle in Pt by reproducing published\nexperimental data on the ferromagnetic resonance linewidth in the bilayer\ngeometry.\n",
"title": "Unified Treatment of Spin Torques using a Coupled Magnetisation Dynamics and Three-Dimensional Spin Current Solver"
} | null | null | [
"Physics"
]
| null | true | null | 290 | null | Validated | null | null |
null | {
"abstract": " This paper aims to explore models based on the extreme gradient boosting\n(XGBoost) approach for business risk classification. Feature selection (FS)\nalgorithms and hyper-parameter optimizations are simultaneously considered\nduring model training. The five most commonly used FS methods including weight\nby Gini, weight by Chi-square, hierarchical variable clustering, weight by\ncorrelation, and weight by information are applied to alleviate the effect of\nredundant features. Two hyper-parameter optimization approaches, random search\n(RS) and Bayesian tree-structured Parzen Estimator (TPE), are applied in\nXGBoost. The effect of different FS and hyper-parameter optimization methods on\nthe model performance are investigated by the Wilcoxon Signed Rank Test. The\nperformance of XGBoost is compared to the traditionally utilized logistic\nregression (LR) model in terms of classification accuracy, area under the curve\n(AUC), recall, and F1 score obtained from the 10-fold cross validation. Results\nshow that hierarchical clustering is the optimal FS method for LR while weight\nby Chi-square achieves the best performance in XG-Boost. Both TPE and RS\noptimization in XGBoost outperform LR significantly. TPE optimization shows a\nsuperiority over RS since it results in a significantly higher accuracy and a\nmarginally higher AUC, recall and F1 score. Furthermore, XGBoost with TPE\ntuning shows a lower variability than the RS method. Finally, the ranking of\nfeature importance based on XGBoost enhances the model interpretation.\nTherefore, XGBoost with Bayesian TPE hyper-parameter optimization serves as an\noperative while powerful approach for business risk modeling.\n",
"title": "A XGBoost risk model via feature selection and Bayesian hyper-parameter optimization"
} | null | null | [
"Computer Science",
"Statistics"
]
| null | true | null | 291 | null | Validated | null | null |
null | {
"abstract": " Immiscible fluids flowing at high capillary numbers in porous media may be\ncharacterized by an effective viscosity. We demonstrate that the effective\nviscosity is well described by the Lichtenecker-Rother equation. The exponent\n$\\alpha$ in this equation takes either the value 1 or 0.6 in two- and 0.5 in\nthree-dimensional systems depending on the pore geometry. Our arguments are\nbased on analytical and numerical methods.\n",
"title": "Rheology of High-Capillary Number Flow in Porous Media"
} | null | null | null | null | true | null | 292 | null | Default | null | null |
null | {
"abstract": " Quantum charge pumping phenomenon connects band topology through the dynamics\nof a one-dimensional quantum system. In terms of a microscopic model, the\nSu-Schrieffer-Heeger/Rice-Mele quantum pump continues to serve as a fruitful\nstarting point for many considerations of topological physics. Here we present\na generalized Creutz scheme as a distinct two-band quantum pump model. By\nnoting that it undergoes two kinds of topological band transitions accompanying\nwith a Zak-phase-difference of $\\pi$ and $2\\pi$, respectively, various charge\npumping schemes are studied by applying an elaborate Peierl's phase\nsubstitution. Translating into real space, the transportation of quantized\ncharges is a result of cooperative quantum interference effect. In particular,\nan all-flux quantum pump emerges which operates with time-varying fluxes only\nand transports two charge units. This puts cold atoms with artificial gauge\nfields as an unique system where this kind of phenomena can be realized.\n",
"title": "Quantum Charge Pumps with Topological Phases in Creutz Ladder"
} | null | null | [
"Physics"
]
| null | true | null | 293 | null | Validated | null | null |
null | {
"abstract": " Heart disease is the leading cause of death, and experts estimate that\napproximately half of all heart attacks and strokes occur in people who have\nnot been flagged as \"at risk.\" Thus, there is an urgent need to improve the\naccuracy of heart disease diagnosis. To this end, we investigate the potential\nof using data analysis, and in particular the design and use of deep neural\nnetworks (DNNs) for detecting heart disease based on routine clinical data. Our\nmain contribution is the design, evaluation, and optimization of DNN\narchitectures of increasing depth for heart disease diagnosis. This work led to\nthe discovery of a novel five layer DNN architecture - named Heart Evaluation\nfor Algorithmic Risk-reduction and Optimization Five (HEARO-5) -- that yields\nbest prediction accuracy. HEARO-5's design employs regularization optimization\nand automatically deals with missing data and/or data outliers. To evaluate and\ntune the architectures we use k-way cross-validation as well as Matthews\ncorrelation coefficient (MCC) to measure the quality of our classifications.\nThe study is performed on the publicly available Cleveland dataset of medical\ninformation, and we are making our developments open source, to further\nfacilitate openness and research on the use of DNNs in medicine. The HEARO-5\narchitecture, yielding 99% accuracy and 0.98 MCC, significantly outperforms\ncurrently published research in the area.\n",
"title": "On Deep Neural Networks for Detecting Heart Disease"
} | null | null | null | null | true | null | 294 | null | Default | null | null |
null | {
"abstract": " The theory of integral quadratic constraints (IQCs) allows verification of\nstability and gain-bound properties of systems containing nonlinear or\nuncertain elements. Gain bounds often imply exponential stability, but it can\nbe challenging to compute useful numerical bounds on the exponential decay\nrate. This work presents a generalization of the classical IQC results of\nMegretski and Rantzer that leads to a tractable computational procedure for\nfinding exponential rate certificates that are far less conservative than ones\ncomputed from $L_2$ gain bounds alone. An expanded library of IQCs for\ncertifying exponential stability is also provided and the effectiveness of the\ntechnique is demonstrated via numerical examples.\n",
"title": "Exponential Stability Analysis via Integral Quadratic Constraints"
} | null | null | null | null | true | null | 295 | null | Default | null | null |
null | {
"abstract": " Foreshock transients upstream of Earth's bow shock have been recently\nobserved to accelerate electrons to many times their thermal energy. How such\nacceleration occurs is unknown, however. Using THEMIS case studies, we examine\na subset of acceleration events (31 of 247 events) in foreshock transients with\ncores that exhibit gradual electron energy increases accompanied by low\nbackground magnetic field strength and large-amplitude magnetic fluctuations.\nUsing the evolution of electron distributions and the energy increase rates at\nmultiple spacecraft, we suggest that Fermi acceleration between a converging\nforeshock transient's compressional boundary and the bow shock is responsible\nfor the observed electron acceleration. We then show that a one-dimensional\ntest particle simulation of an ideal Fermi acceleration model in fluctuating\nfields prescribed by the observations can reproduce the observed evolution of\nelectron distributions, energy increase rate, and pitch-angle isotropy,\nproviding further support for our hypothesis. Thus, Fermi acceleration is\nlikely the principal electron acceleration mechanism in at least this subset of\nforeshock transient cores.\n",
"title": "Fermi acceleration of electrons inside foreshock transient cores"
} | null | null | null | null | true | null | 296 | null | Default | null | null |
null | {
"abstract": " The quantum speed limit (QSL), or the energy-time uncertainty relation,\ndescribes the fundamental maximum rate for quantum time evolution and has been\nregarded as being unique in quantum mechanics. In this study, we obtain a\nclassical speed limit corresponding to the QSL using the Hilbert space for the\nclassical Liouville equation. Thus, classical mechanics has a fundamental speed\nlimit, and QSL is not a purely quantum phenomenon but a universal dynamical\nproperty of the Hilbert space. Furthermore, we obtain similar speed limits for\nthe imaginary-time Schroedinger equations such as the master equation.\n",
"title": "Quantum Speed Limit is Not Quantum"
} | null | null | null | null | true | null | 297 | null | Default | null | null |
null | {
"abstract": " This paper mainly discusses the diffusion on complex networks with\ntime-varying couplings. We propose a model to describe the adaptive diffusion\nprocess of local topological and dynamical information, and find that the\nBarabasi-Albert scale-free network (BA network) is beneficial to the diffusion\nand leads nodes to arrive at a larger state value than other networks do. The\nability of diffusion for a node is related to its own degree. Specifically,\nnodes with smaller degrees are more likely to change their states and reach\nlarger values, while those with larger degrees tend to stick to their original\nstates. We introduce state entropy to analyze the thermodynamic mechanism of\nthe diffusion process, and interestingly find that this kind of diffusion\nprocess is a minimization process of state entropy. We use the inequality\nconstrained optimization method to reveal the restriction function of the\nminimization and find that it has the same form as the Gibbs free energy. The\nthermodynamical concept allows us to understand dynamical processes on complex\nnetworks from a brand-new perspective. The result provides a convenient means\nof optimizing relevant dynamical processes on practical circuits as well as\nrelated complex systems.\n",
"title": "Adaptive Diffusion Processes of Time-Varying Local Information on Networks"
} | null | null | null | null | true | null | 298 | null | Default | null | null |
null | {
"abstract": " This paper proposes a non-parallel many-to-many voice conversion (VC) method\nusing a variant of the conditional variational autoencoder (VAE) called an\nauxiliary classifier VAE (ACVAE). The proposed method has three key features.\nFirst, it adopts fully convolutional architectures to construct the encoder and\ndecoder networks so that the networks can learn conversion rules that capture\ntime dependencies in the acoustic feature sequences of source and target\nspeech. Second, it uses an information-theoretic regularization for the model\ntraining to ensure that the information in the attribute class label will not\nbe lost in the conversion process. With regular CVAEs, the encoder and decoder\nare free to ignore the attribute class label input. This can be problematic\nsince in such a situation, the attribute class label will have little effect on\ncontrolling the voice characteristics of input speech at test time. Such\nsituations can be avoided by introducing an auxiliary classifier and training\nthe encoder and decoder so that the attribute classes of the decoder outputs\nare correctly predicted by the classifier. Third, it avoids producing\nbuzzy-sounding speech at test time by simply transplanting the spectral details\nof the input speech into its converted version. Subjective evaluation\nexperiments revealed that this simple method worked reasonably well in a\nnon-parallel many-to-many speaker identity conversion task.\n",
"title": "ACVAE-VC: Non-parallel many-to-many voice conversion with auxiliary classifier variational autoencoder"
} | null | null | null | null | true | null | 299 | null | Default | null | null |
null | {
"abstract": " Informed by LES data and resolvent analysis of the mean flow, we examine the\nstructure of turbulence in jets in the subsonic, transonic, and supersonic\nregimes. Spectral (frequency-space) proper orthogonal decomposition is used to\nextract energy spectra and decompose the flow into energy-ranked coherent\nstructures. The educed structures are generally well predicted by the resolvent\nanalysis. Over a range of low frequencies and the first few azimuthal mode\nnumbers, these jets exhibit a low-rank response characterized by\nKelvin-Helmholtz (KH) type wavepackets associated with the annular shear layer\nup to the end of the potential core and that are excited by forcing in the\nvery-near-nozzle shear layer. These modes too the have been experimentally\nobserved before and predicted by quasi-parallel stability theory and other\napproximations--they comprise a considerable portion of the total turbulent\nenergy. At still lower frequencies, particularly for the axisymmetric mode, and\nagain at high frequencies for all azimuthal wavenumbers, the response is not\nlow rank, but consists of a family of similarly amplified modes. These modes,\nwhich are primarily active downstream of the potential core, are associated\nwith the Orr mechanism. They occur also as sub-dominant modes in the range of\nfrequencies dominated by the KH response. Our global analysis helps tie\ntogether previous observations based on local spatial stability theory, and\nexplains why quasi-parallel predictions were successful at some frequencies and\nazimuthal wavenumbers, but failed at others.\n",
"title": "Spectral analysis of jet turbulence"
} | null | null | null | null | true | null | 300 | null | Default | null | null |