| text | inputs | prediction | prediction_agent | annotation | annotation_agent | multi_label | explanation | id | metadata | status | event_timestamp | metrics |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| null | dict | null | null | list | null | bool (1 class) | null | string (1-5 chars) | null | string (2 classes: Validated, Default) | null | null |
null |
{
"abstract": " Using observations made with MOSFIRE on Keck I as part of the ZFIRE survey,\nwe present the stellar mass Tully-Fisher relation at 2.0 < z < 2.5. The sample\nwas drawn from a stellar mass limited, Ks-band selected catalog from ZFOURGE\nover the CANDELS area in the COSMOS field. We model the shear of the Halpha\nemission line to derive rotational velocities at 2.2X the scale radius of an\nexponential disk (V2.2). We correct for the blurring effect of a\ntwo-dimensional PSF and the fact that the MOSFIRE PSF is better approximated by\na Moffat than a Gaussian, which is more typically assumed for natural seeing.\nWe find for the Tully-Fisher relation at 2.0 < z < 2.5 that logV2.2 =(2.18 +/-\n0.051)+(0.193 +/- 0.108)(logM/Msun - 10) and infer an evolution of the\nzeropoint of Delta M/Msun = -0.25 +/- 0.16 dex or Delta M/Msun = -0.39 +/- 0.21\ndex compared to z = 0 when adopting a fixed slope of 0.29 or 1/4.5,\nrespectively. We also derive the alternative kinematic estimator S0.5, with a\nbest-fit relation logS0.5 =(2.06 +/- 0.032)+(0.211 +/- 0.086)(logM/Msun - 10),\nand infer an evolution of Delta M/Msun= -0.45 +/- 0.13 dex compared to z < 1.2\nif we adopt a fixed slope. We investigate and review various systematics,\nranging from PSF effects, projection effects, systematics related to stellar\nmass derivation, selection biases and slope. We find that discrepancies between\nthe various literature values are reduced when taking these into account. Our\nobservations correspond well with the gradual evolution predicted by\nsemi-analytic models.\n",
"title": "ZFIRE: The Evolution of the Stellar Mass Tully-Fisher Relation to Redshift 2.0 < Z < 2.5 with MOSFIRE"
}
| null | null |
[
"Physics"
] | null | true | null |
9201
| null |
Validated
| null | null |
null |
{
"abstract": " An organism's ability to move freely is a fundamental behaviour in the animal\nkingdom. To understand animal locomotion requires a characterisation of the\nmaterial properties, as well as the biomechanics and physiology. We present a\nbiomechanical model of C. elegans locomotion together with a novel finite\nelement method. We formulate our model as a nonlinear initial-boundary value\nproblem which allows the study of the dynamics of arbitrary body shapes,\nundulation gaits and the link between the animal's material properties and its\nperformance across a range of environments. Our model replicates behaviours\nacross a wide range of environments. It makes strong predictions on the viable\nrange of the worm's Young's modulus and suggests that animals can control speed\nvia the known mechanism of gait modulation that is observed across different\nmedia.\n",
"title": "A new computational method for a model of C. elegans biomechanics: Insights into elasticity and locomotion performance"
}
| null | null | null | null | true | null |
9202
| null |
Default
| null | null |
null |
{
"abstract": " The inverse relationship between the length of a word and the frequency of\nits use, first identified by G.K. Zipf in 1935, is a classic empirical law that\nholds across a wide range of human languages. We demonstrate that length is one\naspect of a much more general property of words: how distinctive they are with\nrespect to other words in a language. Distinctiveness plays a critical role in\nrecognizing words in fluent speech, in that it reflects the strength of\npotential competitors when selecting the best candidate for an ambiguous\nsignal. Phonological information content, a measure of a word's string\nprobability under a statistical model of a language's sound or character\nsequences, concisely captures distinctiveness. Examining large-scale corpora\nfrom 13 languages, we find that distinctiveness significantly outperforms word\nlength as a predictor of frequency. This finding provides evidence that\nlisteners' processing constraints shape fine-grained aspects of word forms\nacross languages.\n",
"title": "Word forms - not just their lengths- are optimized for efficient communication"
}
| null | null | null | null | true | null |
9203
| null |
Default
| null | null |
null |
{
"abstract": " In recent years, a number of artificial intelligent services have been\ndeveloped such as defect detection system or diagnosis system for customer\nservices. Unfortunately, the core in these services is a black-box in which\nhuman cannot understand the underlying decision making logic, even though the\ninspection of the logic is crucial before launching a commercial service. Our\ngoal in this paper is to propose an analytic method of a model explanation that\nis applicable to general classification models. To this end, we introduce the\nconcept of a contribution matrix and an explanation embedding in a constraint\nspace by using a matrix factorization. We extract a rule-like model explanation\nfrom the contribution matrix with the help of the nonnegative matrix\nfactorization. To validate our method, the experiment results provide with open\ndatasets as well as an industry dataset of a LTE network diagnosis and the\nresults show our method extracts reasonable explanations.\n",
"title": "Human Understandable Explanation Extraction for Black-box Classification Models Based on Matrix Factorization"
}
| null | null | null | null | true | null |
9204
| null |
Default
| null | null |
null |
{
"abstract": " The recent empirical success of cross-domain mapping algorithms, between two\ndomains that share common characteristics, is not well-supported by theoretical\njustifications. This lacuna is especially troubling, given the clear ambiguity\nin such mappings. We work with the adversarial training method called the\nWasserstein GAN. We derive a novel generalization bound, which limits the risk\nbetween the learned mapping $h$ and the target mapping $y$, by a sum of two\nterms: (i) the risk between $h$ and the most distant alternative mapping that\nwas learned by the same cross-domain mapping algorithm, and (ii) the minimal\nWasserstein GAN divergence between the target domain and the domain obtained by\napplying a hypothesis $h^*$ on the samples of the source domain, where $h^*$ is\na hypothesis selected by the same algorithm. The bound is directly related to\nOccam's razor and it encourages the selection of the minimal architecture that\nsupports a small Wasserstein GAN divergence. From the bound, we derive\nalgorithms for hyperparameter selection and early stopping in cross-domain\nmapping GANs. We also demonstrate a novel capability of estimating confidence\nin the mapping of every specific sample. Lastly, we show how non-minimal\narchitectures can be effectively trained by an inverted knowledge distillation\nin which a minimal architecture is used to train a larger one, leading to\nhigher quality outputs.\n",
"title": "Generalization Bounds for Unsupervised Cross-Domain Mapping with WGANs"
}
| null | null | null | null | true | null |
9205
| null |
Default
| null | null |
null |
{
"abstract": " We calculate the scrambling rate $\\lambda_L$ and the butterfly velocity $v_B$\nassociated with the growth of quantum chaos for a solvable large-$N$\nelectron-phonon system. We study a temperature regime in which the electrical\nresistivity of this system exceeds the Mott-Ioffe-Regel limit and increases\nlinearly with temperature - a sign that there are no long-lived charged\nquasiparticles - although the phonons remain well-defined quasiparticles. The\nlong-lived phonons determine $\\lambda_L$, rendering it parametrically smaller\nthan the theoretical upper-bound $\\lambda_L \\ll \\lambda_{max}=2\\pi T/\\hbar$.\nSignificantly, the chaos properties seem to be intrinsic - $\\lambda_L$ and\n$v_B$ are the same for electronic and phononic operators. We consider two\nmodels - one in which the phonons are dispersive, and one in which they are\ndispersionless. In either case, we find that $\\lambda_L$ is proportional to the\ninverse phonon lifetime, and $v_B$ is proportional to the effective phonon\nvelocity. The thermal and chaos diffusion constants, $D_E$ and $D_L\\equiv\nv_B^2/\\lambda_L$, are always comparable, $D_E \\sim D_L$. In the dispersive\nphonon case, the charge diffusion constant $D_C$ satisfies $D_L\\gg D_C$, while\nin the dispersionless case $D_L \\ll D_C$.\n",
"title": "Quantum chaos in an electron-phonon bad metal"
}
| null | null | null | null | true | null |
9206
| null |
Default
| null | null |
null |
{
"abstract": " Recently, (Blanchet, Kang, and Murhy 2016) showed that several machine\nlearning algorithms, such as square-root Lasso, Support Vector Machines, and\nregularized logistic regression, among many others, can be represented exactly\nas distributionally robust optimization (DRO) problems. The distributional\nuncertainty is defined as a neighborhood centered at the empirical\ndistribution. We propose a methodology which learns such neighborhood in a\nnatural data-driven way. We show rigorously that our framework encompasses\nadaptive regularization as a particular case. Moreover, we demonstrate\nempirically that our proposed methodology is able to improve upon a wide range\nof popular machine learning estimators.\n",
"title": "Data-driven Optimal Transport Cost Selection for Distributionally Robust Optimizatio"
}
| null | null | null | null | true | null |
9207
| null |
Default
| null | null |
null |
{
"abstract": " We propose a new formal criterion for secure compilation, providing strong\nsecurity guarantees for components written in unsafe, low-level languages with\nC-style undefined behavior. Our criterion goes beyond recent proposals, which\nprotect the trace properties of a single component against an adversarial\ncontext, to model dynamic compromise in a system of mutually distrustful\ncomponents. Each component is protected from all the others until it receives\nan input that triggers an undefined behavior, causing it to become compromised\nand attack the remaining uncompromised components. To illustrate this model, we\ndemonstrate a secure compilation chain for an unsafe language with buffers,\nprocedures, and components, compiled to a simple RISC abstract machine with\nbuilt-in compartmentalization. The protection guarantees offered by this\nabstract machine can be achieved at the machine-code level using either\nsoftware fault isolation or tag-based reference monitoring. We are working on\nmachine-checked proofs showing that this compiler satisfies our secure\ncompilation criterion.\n",
"title": "Formally Secure Compilation of Unsafe Low-Level Components (Extended Abstract)"
}
| null | null | null | null | true | null |
9208
| null |
Default
| null | null |
null |
{
"abstract": " Machine learning models are increasingly used in the industry to make\ndecisions such as credit insurance approval. Some people may be tempted to\nmanipulate specific variables, such as the age or the salary, in order to get\nbetter chances of approval. In this ongoing work, we propose to discuss, with a\nfirst proposition, the issue of detecting a potential local adversarial example\non classical tabular data by providing to a human expert the locally critical\nfeatures for the classifier's decision, in order to control the provided\ninformation and avoid a fraud.\n",
"title": "Detecting Potential Local Adversarial Examples for Human-Interpretable Defense"
}
| null | null |
[
"Statistics"
] | null | true | null |
9209
| null |
Validated
| null | null |
null |
{
"abstract": " The misalignment of the solar rotation axis and the magnetic axis of the Sun\nproduces a periodic reversal of the Parker spiral magnetic field and the\nsectored solar wind. The compression of the sectors is expected to lead to\nreconnection in the heliosheath (HS). We present particle-in-cell simulations\nof the sectored HS that reflect the plasma environment along the Voyager 1 and\n2 trajectories, specifically including unequal positive and negative azimuthal\nmagnetic flux as seen in the Voyager data \\citep{Burlaga03}. Reconnection\nproceeds on individual current sheets until islands on adjacent current layers\nmerge. At late time bands of the dominant flux survive, separated by bands of\ndeep magnetic field depletion. The ambient plasma pressure supports the strong\nmagnetic pressure variation so that pressure is anti-correlated with magnetic\nfield strength. There is little variation in the magnetic field direction\nacross the boundaries of the magnetic depressions. At irregular intervals\nwithin the magnetic depressions are long-lived pairs of magnetic islands where\nthe magnetic field direction reverses so that spacecraft data would reveal\nsharp magnetic field depressions with only occasional crossings with jumps in\nmagnetic field direction. This is typical of the magnetic field data from the\nVoyager spacecraft \\citep{Burlaga11,Burlaga16}. Voyager 2 data reveals that\nfluctuations in the density and magnetic field strength are anti-correlated in\nthe sector zone as expected from reconnection but not in unipolar regions. The\nconsequence of the annihilation of subdominant flux is a sharp reduction in the\n\"number of sectors\" and a loss in magnetic flux as documented from the Voyager\n1 magnetic field and flow data \\citep{Richardson13}.\n",
"title": "The formation of magnetic depletions and flux annihilation due to reconnection in the heliosheath"
}
| null | null |
[
"Physics"
] | null | true | null |
9210
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we consider filtering and smoothing of partially observed\nchaotic dynamical systems that are discretely observed, with an additive\nGaussian noise in the observation. These models are found in a wide variety of\nreal applications and include the Lorenz 96' model. In the context of a fixed\nobservation interval $T$, observation time step $h$ and Gaussian observation\nvariance $\\sigma_Z^2$, we show under assumptions that the filter and smoother\nare well approximated by a Gaussian with high probability when $h$ and\n$\\sigma^2_Z h$ are sufficiently small. Based on this result we show that the\nMaximum-a-posteriori (MAP) estimators are asymptotically optimal in mean square\nerror as $\\sigma^2_Z h$ tends to $0$. Given these results, we provide a batch\nalgorithm for the smoother and filter, based on Newton's method, to obtain the\nMAP. In particular, we show that if the initial point is close enough to the\nMAP, then Newton's method converges to it at a fast rate. We also provide a\nmethod for computing such an initial point. These results contribute to the\ntheoretical understanding of widely used 4D-Var data assimilation method. Our\napproach is illustrated numerically on the Lorenz 96' model with state vector\nup to 1 million dimensions, with code running in the order of minutes. To our\nknowledge the results in this paper are the first of their type for this class\nof models.\n",
"title": "Optimization Based Methods for Partially Observed Chaotic Systems"
}
| null | null | null | null | true | null |
9211
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents LongHCPulse: software which enables heat capacity to be\ncollected on a Quantum Design PPMS using a long-pulse method. This method,\nwherein heat capacity is computed from the time derivative of sample\ntemperature over long (30 min) measurement times, is necessary for probing\nfirst order transitions and shortens the measurement time by a factor of five.\nLongHCPulse also includes plotting utilities based on the Matplotlib library. I\nillustrate the use of LongHCPulse with the example of data taken on ${\\rm\nYb_{2}Ti_{2}O_{7}}$, and compare the results to the standard semi-adiabatic\nmethod.\n",
"title": "LongHCPulse: Long Pulse Heat Capacity on a Quantum Design PPMS"
}
| null | null | null | null | true | null |
9212
| null |
Default
| null | null |
null |
{
"abstract": " The Z-vector method in the relativistic coupled-cluster framework is employed\nto calculate the parallel and perpendicular components of the magnetic\nhyperfine structure constant of a few small alkaline earth hydrides (BeH, MgH,\nand CaH) and fluorides (MgF and CaF). We have compared our Z-vector results\nwith the values calculated by the extended coupled-cluster (ECC) method\nreported in Phys. Rev. A 91 022512 (2015). All these results are compared with\nthe available experimental values. The Z-vector results are found to be in\nbetter agreement with the experimental values than those of the ECC values.\n",
"title": "Calculation of hyperfine structure constants of small molecules using Z-vector method in the relativistic coupled-cluster framework"
}
| null | null |
[
"Physics"
] | null | true | null |
9213
| null |
Validated
| null | null |
null |
{
"abstract": " Modeling and interpreting spike train data is a task of central importance in\ncomputational neuroscience, with significant translational implications. Two\npopular classes of data-driven models for this task are autoregressive Point\nProcess Generalized Linear models (PPGLM) and latent State-Space models (SSM)\nwith point-process observations. In this letter, we derive a mathematical\nconnection between these two classes of models. By introducing an auxiliary\nhistory process, we represent exactly a PPGLM in terms of a latent, infinite\ndimensional dynamical system, which can then be mapped onto an SSM by basis\nfunction projections and moment closure. This representation provides a new\nperspective on widely used methods for modeling spike data, and also suggests\nnovel algorithmic approaches to fitting such models. We illustrate our results\non a phasic bursting neuron model, showing that our proposed approach provides\nan accurate and efficient way to capture neural dynamics.\n",
"title": "Autoregressive Point-Processes as Latent State-Space Models: a Moment-Closure Approach to Fluctuations and Autocorrelations"
}
| null | null | null | null | true | null |
9214
| null |
Default
| null | null |
null |
{
"abstract": " The dimerized Kane-Mele model with/without the strong interaction is studied\nusing analytical methods. The boundary of the topological phase transition of\nthe model without strong interaction is obtained. Our results show that the\noccurrence of the transition only depends on dimerized parameter . From the\none-particle spectrum, we obtain the completed phase diagram including the\nquantum spin Hall (QSH) state and the topologically trivial insulator. Then,\nusing different mean-field methods, we investigate the Mott transition and the\nmagnetic transition of the strongly correlated dimerized Kane-Mele model. In\nthe region between the two transitions, the topological Mott insulator (TMI)\nwith characters of Mott insulators and topological phases may be the most\ninteresting phase. In this work, effects of the hopping anisotropy and Hubbard\ninteraction U on boundaries of the two transitions are observed in detail. The\ncompleted phase diagram of the dimerized Kane-Mele-Hubbard model is also\nobtained in this work. Quantum fluctuations have extremely important influences\non a quantum system. However, investigations are under the framework of the\nmean field treatment in this work and the effects of fluctuations in this model\nwill be discussed in the future.\n",
"title": "Phase transitions of the dimerized Kane-Mele model with/without the strong interaction"
}
| null | null | null | null | true | null |
9215
| null |
Default
| null | null |
null |
{
"abstract": " Understanding why a model makes a certain prediction can be as crucial as the\nprediction's accuracy in many applications. However, the highest accuracy for\nlarge modern datasets is often achieved by complex models that even experts\nstruggle to interpret, such as ensemble or deep learning models, creating a\ntension between accuracy and interpretability. In response, various methods\nhave recently been proposed to help users interpret the predictions of complex\nmodels, but it is often unclear how these methods are related and when one\nmethod is preferable over another. To address this problem, we present a\nunified framework for interpreting predictions, SHAP (SHapley Additive\nexPlanations). SHAP assigns each feature an importance value for a particular\nprediction. Its novel components include: (1) the identification of a new class\nof additive feature importance measures, and (2) theoretical results showing\nthere is a unique solution in this class with a set of desirable properties.\nThe new class unifies six existing methods, notable because several recent\nmethods in the class lack the proposed desirable properties. Based on insights\nfrom this unification, we present new methods that show improved computational\nperformance and/or better consistency with human intuition than previous\napproaches.\n",
"title": "A Unified Approach to Interpreting Model Predictions"
}
| null | null | null | null | true | null |
9216
| null |
Default
| null | null |
null |
{
"abstract": " We consider large-scale Markov decision processes (MDPs) with a risk measure\nof variability in cost, under the risk-aware MDPs paradigm. Previous studies\nshowed that risk-aware MDPs, based on a minimax approach to handling risk, can\nbe solved using dynamic programming for small to medium sized problems.\nHowever, due to the \"curse of dimensionality\", MDPs that model real-life\nproblems are typically prohibitively large for such approaches. In this paper,\nwe employ an approximate dynamic programming approach, and develop a family of\nsimulation-based algorithms to approximately solve large-scale risk-aware MDPs.\nIn parallel, we develop a unified convergence analysis technique to derive\nsample complexity bounds for this new family of algorithms.\n",
"title": "Approximate Value Iteration for Risk-aware Markov Decision Processes"
}
| null | null | null | null | true | null |
9217
| null |
Default
| null | null |
null |
{
"abstract": " We overview dataflow matrix machines as a Turing complete generalization of\nrecurrent neural networks and as a programming platform. We describe vector\nspace of finite prefix trees with numerical leaves which allows us to combine\nexpressive power of dataflow matrix machines with simplicity of traditional\nrecurrent neural networks.\n",
"title": "Dataflow Matrix Machines as a Model of Computations with Linear Streams"
}
| null | null | null | null | true | null |
9218
| null |
Default
| null | null |
null |
{
"abstract": " Neural Machine Translation (NMT) models usually use large target vocabulary\nsizes to capture most of the words in the target language. The vocabulary size\nis a big factor when decoding new sentences as the final softmax layer\nnormalizes over all possible target words. To address this problem, it is\nwidely common to restrict the target vocabulary with candidate lists based on\nthe source sentence. Usually, the candidate lists are a combination of external\nword-to-word aligner, phrase table entries or most frequent words. In this\nwork, we propose a simple and yet novel approach to learn candidate lists\ndirectly from the attention layer during NMT training. The candidate lists are\nhighly optimized for the current NMT model and do not need any external\ncomputation of the candidate pool. We show significant decoding speedup\ncompared with using the entire vocabulary, without losing any translation\nquality for two language pairs.\n",
"title": "Attention-based Vocabulary Selection for NMT Decoding"
}
| null | null | null | null | true | null |
9219
| null |
Default
| null | null |
null |
{
"abstract": " Purpose: To develop a rapid imaging framework for balanced steady-state free\nprecession (bSSFP) that jointly reconstructs undersampled data (by a factor of\nR) across multiple coils (D) and multiple acquisitions (N). To devise a\nmulti-acquisition coil compression technique for improved computational\nefficiency.\nMethods: The bSSFP image for a given coil and acquisition is modeled to be\nmodulated by a coil sensitivity and a bSSFP profile. The proposed\nreconstruction by calibration over tensors (ReCat) recovers missing data by\ntensor interpolation over the coil and acquisition dimensions. Coil compression\nis achieved using a new method based on multilinear singular value\ndecomposition (MLCC). ReCat is compared with iterative self-consistent parallel\nimaging (SPIRiT) and profile encoding (PE-SSFP) reconstructions.\nResults: Compared to parallel imaging or profile-encoding methods, ReCat\nattains sensitive depiction of high-spatial-frequency information even at\nhigher R. In the brain, ReCat improves peak SNR (PSNR) by 1.1$\\pm$1.0 dB over\nSPIRiT and by 0.9$\\pm$0.3 dB over PE-SSFP (mean$\\pm$std across subjects;\naverage for N=2-8, R=8-16). Furthermore, reconstructions based on MLCC achieve\n0.8$\\pm$0.6 dB higher PSNR compared to those based on geometric coil\ncompression (GCC) (average for N=2-8, R=4-16).\nConclusion: ReCat is a promising acceleration framework for\nbanding-artifact-free bSSFP imaging with high image quality; and MLCC offers\nimproved computational efficiency for tensor-based reconstructions.\n",
"title": "Reconstruction by Calibration over Tensors for Multi-Coil Multi-Acquisition Balanced SSFP Imaging"
}
| null | null | null | null | true | null |
9220
| null |
Default
| null | null |
null |
{
"abstract": " As a generalization of the use of graphs to describe pairwise interactions,\nsimplicial complexes can be used to model higher-order interactions between\nthree or more objects in complex systems. There has been a recent surge in\nactivity for the development of data analysis methods applicable to simplicial\ncomplexes, including techniques based on computational topology, higher-order\nrandom processes, generalized Cheeger inequalities, isoperimetric inequalities,\nand spectral methods. In particular, spectral learning methods (e.g. label\npropagation and clustering) that directly operate on simplicial complexes\nrepresent a new direction for analyzing such complex datasets.\nTo apply spectral learning methods to massive datasets modeled as simplicial\ncomplexes, we develop a method for sparsifying simplicial complexes that\npreserves the spectrum of the associated Laplacian matrices. We show that the\ntheory of Spielman and Srivastava for the sparsification of graphs extends to\nsimplicial complexes via the up Laplacian. In particular, we introduce a\ngeneralized effective resistance for simplices, provide an algorithm for\nsparsifying simplicial complexes at a fixed dimension, and give a specific\nversion of the generalized Cheeger inequality for weighted simplicial\ncomplexes. Finally, we introduce higher-order generalizations of spectral\nclustering and label propagation for simplicial complexes and demonstrate via\nexperiments the utility of the proposed spectral sparsification method for\nthese applications.\n",
"title": "Spectral Sparsification of Simplicial Complexes for Clustering and Label Propagation"
}
| null | null | null | null | true | null |
9221
| null |
Default
| null | null |
null |
{
"abstract": " In this paper we consider the class of K3 surfaces defined as hypersurfaces\nin weighted projective space, and admitting a non-symplectic automorphism of\nnon-prime order, excluding the orders 4, 8, and 12. We show that on these\nsurfaces the Berglund-Hübsch-Krawitz mirror construction and mirror symmetry\nfor lattice polarized K3 surfaces constructed by Dolgachev agree; that is, both\nversions of mirror symmetry define the same mirror K3 surface.\n",
"title": "BHK mirror symmetry for K3 surfaces with non-symplectic automorphism"
}
| null | null |
[
"Mathematics"
] | null | true | null |
9222
| null |
Validated
| null | null |
null |
{
"abstract": " Although deep learning models have proven effective at solving problems in\nnatural language processing, the mechanism by which they come to their\nconclusions is often unclear. As a result, these models are generally treated\nas black boxes, yielding no insight of the underlying learned patterns. In this\npaper we consider Long Short Term Memory networks (LSTMs) and demonstrate a new\napproach for tracking the importance of a given input to the LSTM for a given\noutput. By identifying consistently important patterns of words, we are able to\ndistill state of the art LSTMs on sentiment analysis and question answering\ninto a set of representative phrases. This representation is then\nquantitatively validated by using the extracted phrases to construct a simple,\nrule-based classifier which approximates the output of the LSTM.\n",
"title": "Automatic Rule Extraction from Long Short Term Memory Networks"
}
| null | null | null | null | true | null |
9223
| null |
Default
| null | null |
null |
{
"abstract": " We consider the set Bp of parametric block correlation matrices with p blocks\nof various (and possibly different) sizes, whose diagonal blocks are compound\nsymmetry (CS) correlation matrices and off-diagonal blocks are constant\nmatrices. Such matrices appear in probabilistic models on categorical data,\nwhen the levels are partitioned in p groups, assuming a constant correlation\nwithin a group and a constant correlation for each pair of groups. We obtain\ntwo necessary and sufficient conditions for positive definiteness of elements\nof Bp. Firstly we consider the block average map $\\phi$, consisting in\nreplacing a block by its mean value. We prove that for any A $\\in$ Bp , A is\npositive definite if and only if $\\phi$(A) is positive definite. Hence it is\nequivalent to check the validity of the covariance matrix of group means, which\nonly depends on the number of groups and not on their sizes. This theorem can\nbe extended to a wider set of block matrices. Secondly, we consider the subset\nof Bp for which the between group correlation is the same for all pairs of\ngroups. Positive definiteness then comes down to find the positive definite\ninterval of a matrix pencil on Sp. We obtain a simple characterization by\nlocalizing the roots of the determinant with within group correlation values.\n",
"title": "On the validity of parametric block correlation matrices with constant within and between group correlations"
}
| null | null | null | null | true | null |
9224
| null |
Default
| null | null |
null |
{
"abstract": " We develop a two-dimensional Lattice Boltzmann model for liquid-vapour\nsystems with variable temperature. Our model is based on a single particle\ndistribution function expanded with respect to the full-range Hermite\npolynomials. In order to ensure the recovery of the hydrodynamic equations for\nthermal flows, we use a fourth order expansion together with a set of momentum\nvectors with 25 elements whose Cartesian projections are the roots of the\nHermite polynomial of order Q = 5. Since these vectors are off-lattice, a\nfifth-order projection scheme is used to evolve the corresponding set of\ndistribution functions. A fourth order scheme employing a 49 point stencil is\nused to compute the gradient operators in the force term that ensures the\nliquid-vapour phase separation and diffuse reflection boundary conditions are\nused on the walls. We demonstrate at least fourth order convergence with\nrespect to the lattice spacing in the contexts of shear and longitudinal wave\npropagation through the van der Waals fluid. For the planar interface, fourth\norder convergence can be seen at small enough lattice spacings, while the\neffect of the spurious velocity on the temperature profile is found to be\nsmaller than 1.0%, even when T w ' 0.7 T c . We further validate our scheme by\nconsidering the Laplace pressure test. Galilean invariance is shown to be\npreserved up to second order with respect to the background velocity. We\nfurther investigate the liquid-vapour phase separation between two parallel\nwalls kept at a constant temperature T w smaller than the critical temperature\nT c and discuss the main features of this process.\n",
"title": "Two-dimensional off-lattice Boltzmann model for van der Waals fluids with variable temperature"
}
| null | null | null | null | true | null |
9225
| null |
Default
| null | null |
null |
{
"abstract": " It was shown that any $\\mathbb{Z}$-colorable link has a diagram which admits\na non-trivial $\\mathbb{Z}$-coloring with at most four colors. In this paper, we\nconsider minimal numbers of colors for non-trivial $\\mathbb{Z}$-colorings on\nminimal diagrams of $\\mathbb{Z}$-colorable links. We show, for any positive\ninteger $N$, there exists a minimal diagram of a $\\mathbb{Z}$-colorable link\nsuch that any $\\mathbb{Z}$-coloring on the diagram has at least $N$ colors. On\nthe other hand, it is shown that certain $\\mathbb{Z}$-colorable torus links\nhave minimal diagrams admitting $\\mathbb{Z}$-colorings with only four colors.\n",
"title": "Minimal coloring number on minimal diagrams for $\\mathbb{Z}$-colorable links"
}
| null | null |
[
"Mathematics"
] | null | true | null |
9226
| null |
Validated
| null | null |
null |
{
"abstract": " Automatic machine learning performs predictive modeling with high performing\nmachine learning tools without human interference. This is achieved by making\nmachine learning applications parameter-free, i.e. only a dataset is provided\nwhile the complete model selection and model building process is handled\ninternally through (often meta) optimization. Projects like Auto-WEKA and\nauto-sklearn aim to solve the Combined Algorithm Selection and Hyperparameter\noptimization (CASH) problem resulting in huge configuration spaces. However,\nfor most real-world applications, the optimization over only a few different\nkey learning algorithms can not only be sufficient, but also potentially\nbeneficial. The latter becomes apparent when one considers that models have to\nbe validated, explained, deployed and maintained. Here, less complex model are\noften preferred, for validation or efficiency reasons, or even a strict\nrequirement. Automatic gradient boosting simplifies this idea one step further,\nusing only gradient boosting as a single learning algorithm in combination with\nmodel-based hyperparameter tuning, threshold optimization and encoding of\ncategorical features. We introduce this general framework as well as a concrete\nimplementation called autoxgboost. It is compared to current AutoML projects on\n16 datasets and despite its simplicity is able to achieve comparable results on\nabout half of the datasets as well as performing best on two.\n",
"title": "Automatic Gradient Boosting"
}
| null | null | null | null | true | null |
9227
| null |
Default
| null | null |
null |
{
"abstract": " Recommender systems play an important role in many scenarios where users are\noverwhelmed with too many choices to make. In this context, Collaborative\nFiltering (CF) arises by providing a simple and widely used approach for\npersonalized recommendation. Memory-based CF algorithms mostly rely on\nsimilarities between pairs of users or items, which are posteriorly employed in\nclassifiers like k-Nearest Neighbor (kNN) to generalize for unknown ratings. A\nmajor issue regarding this approach is to build the similarity matrix.\nDepending on the dimensionality of the rating matrix, the similarity\ncomputations may become computationally intractable. To overcome this issue, we\npropose to represent users by their distances to preselected users, namely\nlandmarks. This procedure allows to drastically reduce the computational cost\nassociated with the similarity matrix. We evaluated our proposal on two\ndistinct distinguishing databases, and the results showed our method has\nconsistently and considerably outperformed eight CF algorithms (including both\nmemory-based and model-based) in terms of computational performance.\n",
"title": "Speeding up Memory-based Collaborative Filtering with Landmarks"
}
| null | null | null | null | true | null |
9228
| null |
Default
| null | null |
null |
{
"abstract": " In the last decades, the notion that cities are in a state of equilibrium\nwith a centralised organisation has given place to the viewpoint of cities in\ndisequilibrium and organised from bottom to up. In this perspective, cities are\nevolving systems that exhibit emergent phenomena built from local decisions.\nWhile urban evolution promotes the emergence of positive social phenomena such\nas the formation of innovation hubs and the increase in cultural diversity, it\nalso yields negative phenomena such as increases in criminal activity. Yet, we\nare still far from understanding the driving mechanisms of these phenomena. In\nparticular, approaches to analyse urban phenomena are limited in scope by\nneglecting both temporal non-stationarity and spatial heterogeneity. In the\ncase of criminal activity, we know for more than one century that crime peaks\nduring specific times of the year, but the literature still fails to\ncharacterise the mobility of crime. Here we develop an approach to describe the\nspatial, temporal, and periodic variations in urban quantities. With crime data\nfrom 12 cities, we characterise how the periodicity of crime varies spatially\nacross the city over time. We confirm one-year criminal cycles and show that\nthis periodicity occurs unevenly across the city. These `waves of crime' keep\ntravelling across the city: while cities have a stable number of regions with a\ncircannual period, the regions exhibit non-stationary series. Our findings\nsupport the concept of cities in a constant change, influencing urban\nphenomena---in agreement with the notion of cities not in equilibrium.\n",
"title": "Spatio-temporal variations in the urban rhythm: the travelling waves of crime"
}
| null | null |
[
"Computer Science"
] | null | true | null |
9229
| null |
Validated
| null | null |
null |
{
"abstract": " The simplest model of the magnetized infinitely thin electron beam is\nconsidered. The basic equations that describe the periodic solutions for a\nself-consistent system of a couple of Maxwell equations and equations for the\nmedium are obtained.\n",
"title": "Smith-Purcell Radiation"
}
| null | null | null | null | true | null |
9230
| null |
Default
| null | null |
null |
{
"abstract": " The system of dynamic equations for Bose-Einstein condensate at zero\ntemperature with account of pair correlations is obtained. The spectrum of\nsmall oscillations of the condensate in a spatially homogeneous state is\nexplored. It is shown that this spectrum has two branches: the sound wave\nbranch and branch with an energy gap.\n",
"title": "Dynamics of Bose-Einstein condensate with account of pair correlations"
}
| null | null | null | null | true | null |
9231
| null |
Default
| null | null |
null |
{
"abstract": " A polyellipse is a curve in the Euclidean plane all of whose points have the\nsame sum of distances from finitely many given points (focuses). The classical\nversion of Erdős-Vincze's theorem states that regular triangles can not be\npresented as the Hausdorff limit of polyellipses even if the number of the\nfocuses can be arbitrary large. In other words the topological closure of the\nset of polyellipses with respect to the Hausdorff distance does not contain any\nregular triangle and we have a negative answer to the problem posed by E.\nVázsonyi (Weissfeld) about the approximation of closed convex plane curves by\npolyellipses. It is the additive version of the approximation of simple closed\nplane curves by polynomial lemniscates all of whose points have the same\nproduct of distances from finitely many given points (focuses). Here we are\ngoing to generalize the classical version of Erdős-Vincze's theorem for\nregular polygons in the plane. We will conclude that the error of the\napproximation tends to zero as the number of the vertices of the regular\npolygon tends to the infinity. The decreasing tendency of the approximation\nerror gives the idea to construct curves in the topological closure of the set\nof polyellipses. If we use integration to compute the average distance of a\npoint from a given (focal) set in the plane then the curves all of whose points\nhave the same average distance from the focal set can be given as the Hausdorff\nlimit of polyellipses corresponding to partial sums.\n",
"title": "On the generalization of Erdős-Vincze's theorem about the approximation of closed convex plane curves by polyellipses"
}
| null | null | null | null | true | null |
9232
| null |
Default
| null | null |
null |
{
"abstract": " The LDA-1/2 method for self-energy correction is a powerful tool for\ncalculating accurate band structures of semiconductors, while keeping the\ncomputational load as low as standard LDA. Nevertheless, controversies remain\nregarding the arbitrariness of choice between (1/2)e and (1/4)e charge\nstripping from the atoms in group IV semiconductors, the incorrect direct band\ngap predicted for Ge, and inaccurate band structures for III-V semiconductors.\nHere we propose an improved method named shell-LDA-1/2 (shLDA-1/2 for short),\nwhich is based on a shell-like trimming function for the self-energy potential.\nWith the new approach, we obtained accurate band structures for group IV, and\nfor III-V and II-VI compound semiconductors. In particular, we reproduced the\ncomplete band structure of Ge in good agreement with experimental data.\nMoreover, we have defined clear rules for choosing when (1/2)e or (1/4)e charge\nought to be stripped in covalent semiconductors, and for identifying materials\nfor which shLDA-1/2 is expected to fail.\n",
"title": "Improved self-energy correction method for accurate and efficient band structure calculation"
}
| null | null | null | null | true | null |
9233
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the scaling of the ground state energy and optimal domain\npatterns in thin ferromagnetic films with strong uniaxial anisotropy and the\neasy axis perpendicular to the film plane. Starting from the full\nthree-dimensional micromagnetic model, we identify the critical scaling where\nthe transition from single domain to multidomain ground states such as bubble\nor maze patterns occurs. Furthermore, we analyze the asymptotic behavior of the\nenergy in two regimes separated by a transition. In the single domain regime,\nthe energy $\\Gamma$-converges towards a much simpler two-dimensional and local\nmodel. In the second regime, we derive the scaling of the minimal energy and\ndeduce a scaling law for the typical domain size.\n",
"title": "Magnetic domains in thin ferromagnetic films with strong perpendicular anisotropy"
}
| null | null | null | null | true | null |
9234
| null |
Default
| null | null |
null |
{
"abstract": " Object tracking is an essential task in computer vision that has been studied\nsince the early days of the field. Being able to follow objects that undergo\ndifferent transformations in the video sequence, including changes in scale,\nillumination, shape and occlusions, makes the problem extremely difficult. One\nof the real challenges is to keep track of the changes in objects appearance\nand not drift towards the background clutter. Different from previous\napproaches, we obtain robustness against background with a tracker model that\nis composed of many different parts. They are classifiers that respond at\ndifferent scales and locations. The tracker system functions as a society of\nparts, each having its own role and level of credibility. Reliable classifiers\ndecide the tracker's next move, while newcomers are first monitored before\ngaining the necessary level of reliability to participate in the decision\nprocess. Some parts that loose their consistency are rejected, while others\nthat show consistency for a sufficiently long time are promoted to permanent\nroles. The tracker system, as a whole, could also go through different phases,\nfrom the usual, normal functioning to states of weak agreement and even crisis.\nThe tracker system has different governing rules in each state. What truly\ndistinguishes our work from others is not necessarily the strength of\nindividual tracking parts, but the way in which they work together and build a\nstrong and robust organization. We also propose an efficient way to learn\nsimultaneously many tracking parts, with a single closed-form formulation. We\nobtain a fast and robust tracker with state of the art performance on the\nchallenging OTB50 dataset.\n",
"title": "Learning a Robust Society of Tracking Parts"
}
| null | null | null | null | true | null |
9235
| null |
Default
| null | null |
null |
{
"abstract": " We study the family of spin-S quantum spin chains with a nearest neighbor\ninteraction given by the negative of the singlet projection operator. Using a\nrandom loop representation of the partition function in the limit of zero\ntemperature and standard techniques of classical statistical mechanics, we\nprove dimerization for all sufficiently large values of S.\n",
"title": "A direct proof of dimerization in a family of SU(n)-invariant quantum spin chains"
}
| null | null | null | null | true | null |
9236
| null |
Default
| null | null |
null |
{
"abstract": " Next generation radio-interferometers, like the Square Kilometre Array, will\nacquire tremendous amounts of data with the goal of improving the size and\nsensitivity of the reconstructed images by orders of magnitude. The efficient\nprocessing of large-scale data sets is of great importance. We propose an\nacceleration strategy for a recently proposed primal-dual distributed\nalgorithm. A preconditioning approach can incorporate into the algorithmic\nstructure both the sampling density of the measured visibilities and the noise\nstatistics. Using the sampling density information greatly accelerates the\nconvergence speed, especially for highly non-uniform sampling patterns, while\nrelying on the correct noise statistics optimises the sensitivity of the\nreconstruction. In connection to CLEAN, our approach can be seen as including\nin the same algorithmic structure both natural and uniform weighting, thereby\nsimultaneously optimising both the resolution and the sensitivity. The method\nrelies on a new non-Euclidean proximity operator for the data fidelity term,\nthat generalises the projection onto the $\\ell_2$ ball where the noise lives\nfor naturally weighted data, to the projection onto a generalised ellipsoid\nincorporating sampling density information through uniform weighting.\nImportantly, this non-Euclidean modification is only an acceleration strategy\nto solve the convex imaging problem with data fidelity dictated only by noise\nstatistics. We showcase through simulations with realistic sampling patterns\nthe acceleration obtained using the preconditioning. We also investigate the\nalgorithm performance for the reconstruction of the 3C129 radio galaxy from\nreal visibilities and compare with multi-scale CLEAN, showing better\nsensitivity and resolution. Our MATLAB code is available online on GitHub.\n",
"title": "An accelerated splitting algorithm for radio-interferometric imaging: when natural and uniform weighting meet"
}
| null | null |
[
"Physics"
] | null | true | null |
9237
| null |
Validated
| null | null |
null |
{
"abstract": " We optimized the substrate temperature (Ts) and phosphorus concentration (x)\nof BaFe2(As1-xPx)2 films on practical metal-tape substrates for pulsed laser\ndeposition from the viewpoints of crystallinity, superconductor critical\ntemperature (Tc), and critical current density (Jc). It was found that the\noptimum Ts and x values are 1050 degree C and x = 0.28, respectively. The\noptimized film exhibits Tc_onset = 26.6 and Tc_zero = 22.4 K along with a high\nself-field Jc at 4 K (~1 MA/cm2) and relatively isotropic Jc under magnetic\nfields up to 9 T. Unexpectedly, we found that lower crystallinity samples,\nwhich were grown at a higher Ts of 1250 degree C than the optimized Ts = 1050\ndegree C, exhibit higher Jc along the ab plane under high magnetic fields than\nthe optimized samples. The presence of horizontal defects that act as strong\nvortex pinning centers, such as stacking faults, are a possible origin of the\nhigh Jc values in the poor crystallinity samples.\n",
"title": "BaFe2(As1-xPx)2 (x = 0.22-0.42) thin films grown on practical metal-tape substrates and their critical current densities"
}
| null | null |
[
"Physics"
] | null | true | null |
9238
| null |
Validated
| null | null |
null |
{
"abstract": " Solving Peierls-Boltzmann transport equation with interatomic force constants\n(IFCs) from first-principles calculations has been a widely used method for\npredicting lattice thermal conductivity of three-dimensional materials. With\nthe increasing research interests in two-dimensional materials, this method is\ndirectly applied to them but different works show quite different results. In\nthis work, classical potential was used to investigate the effect of the\naccuracy of IFCs on the predicted thermal conductivity. Inaccuracies were\nintroduced to the third-order IFCs by generating errors in the input forces.\nWhen the force error lies in the typical value from first-principles\ncalculations, the calculated thermal conductivity would be quite different from\nthe benchmark value. It is found that imposing translational invariance\nconditions cannot always guarantee a better thermal conductivity result. It is\nalso shown that Grüneisen parameters cannot be used as a necessary and\nsufficient criterion for the accuracy of third-order IFCs in the aspect of\npredicting thermal conductivity.\n",
"title": "How does the accuracy of interatomic force constants affect the prediction of lattice thermal conductivity"
}
| null | null | null | null | true | null |
9239
| null |
Default
| null | null |
null |
{
"abstract": " Region of interest (ROI) alignment in medical images plays a crucial role in\ndiagnostics, procedure planning, treatment, and follow-up. Frequently, a model\nis represented as triangulated mesh while the patient data is provided from CAT\nscanners as pixel or voxel data. Previously, we presented a 2D method for\ncurve-to-pixel registration. This paper contributes (i) a general\nmesh-to-raster (M2R) framework to register ROIs in multi-modal images; (ii) a\n3D surface-to-voxel application, and (iii) a comprehensive quantitative\nevaluation in 2D using ground truth provided by the simultaneous truth and\nperformance level estimation (STAPLE) method. The registration is formulated as\na minimization problem where the objective consists of a data term, which\ninvolves the signed distance function of the ROI from the reference image, and\na higher order elastic regularizer for the deformation. The evaluation is based\non quantitative light-induced fluoroscopy (QLF) and digital photography (DP) of\ndecalcified teeth. STAPLE is computed on 150 image pairs from 32 subjects, each\nshowing one corresponding tooth in both modalities. The ROI in each image is\nmanually marked by three experts (900 curves in total). In the QLF-DP setting,\nour approach significantly outperforms the mutual information-based\nregistration algorithm implemented with the Insight Segmentation and\nRegistration Toolkit (ITK) and Elastix.\n",
"title": "Mesh-to-raster based non-rigid registration of multi-modal images"
}
| null | null | null | null | true | null |
9240
| null |
Default
| null | null |
null |
{
"abstract": " Over the past few years, the use of camera-equipped robotic platforms for\ndata collection and visually monitoring applications has exponentially grown.\nCluttered construction sites with many objects (e.g., bricks, pipes, etc.) on\nthe ground are challenging environments for a mobile unmanned ground vehicle\n(UGV) to navigate. To address this issue, this study presents a mobile UGV\nequipped with a stereo camera and a robotic arm that can remove obstacles along\nthe UGV's path. To achieve this objective, the surrounding environment is\ncaptured by the stereo camera and obstacles are detected. The obstacle's\nrelative location to the UGV is sent to the robotic arm module through Robot\nOperating System (ROS). Then, the robotic arm picks up and removes the\nobstacle. The proposed method will greatly enhance the degree of automation and\nthe frequency of data collection for construction monitoring. The proposed\nsystem is validated through two case studies. The results successfully\ndemonstrate the detection and removal of obstacles, serving as one of the\nenabling factors for developing an autonomous UGV with various construction\noperating applications.\n",
"title": "Vision-based Obstacle Removal System for Autonomous Ground Vehicles Using a Robotic Arm"
}
| null | null | null | null | true | null |
9241
| null |
Default
| null | null |
null |
{
"abstract": " Trilobites are exotic giant dimers with enormous dipole moments. They consist\nof a Rydberg atom and a distant ground-state atom bound together by short-range\nelectron-neutral attraction. We show that highly polar, polyatomic trilobite\nstates unexpectedly persist and thrive in a dense ultracold gas of randomly\npositioned atoms. This is caused by perturbation-induced quantum scarring and\nthe localization of electron density on randomly occurring atom clusters. At\ncertain densities these states also mix with a s-state, overcoming selection\nrules that hinder the photoassociation of ordinary trilobites.\n",
"title": "Polyatomic trilobite Rydberg molecules in a dense random gas"
}
| null | null | null | null | true | null |
9242
| null |
Default
| null | null |
null |
{
"abstract": " Many debris discs reveal a two-component structure, with a cold outer and a\nwarm inner component. While the former are likely massive analogues of the\nKuiper belt, the origin of the latter is still a matter of debate. In this work\nwe investigate whether the warm dust may be a signature of asteroid belt\nanalogues. In the scenario tested here the current two-belt architecture stems\nfrom an originally extended protoplanetary disc, in which planets have opened a\ngap separating it into the outer and inner discs which, after the gas\ndispersal, experience a steady-state collisional decay. This idea is explored\nwith an analytic collisional evolution model for a sample of 225 debris discs\nfrom a Spitzer/IRS catalogue that are likely to possess a two-component\nstructure. We find that the vast majority of systems (220 out of 225, or 98%)\nare compatible with this scenario. For their progenitors, original\nprotoplanetary discs, we find an average surface density slope of\n$-0.93\\pm0.06$ and an average initial mass of\n$\\left(3.3^{+0.4}_{-0.3}\\right)\\times 10^{-3}$ solar masses, both of which are\nin agreement with the values inferred from submillimetre surveys. However, dust\nproduction by short-period comets and - more rarely - inward transport from the\nouter belts may be viable, and not mutually excluding, alternatives to the\nasteroid belt scenario. The remaining five discs (2% of the sample: HIP 11486,\nHIP 23497, HIP 57971, HIP 85790, HIP 89770) harbour inner components that\nappear inconsistent with dust production in an \"asteroid belt.\" Warm dust in\nthese systems must either be replenished from cometary sources or represent an\naftermath of a recent rare event, such as a major collision or planetary system\ninstability.\n",
"title": "Does warm debris dust stem from asteroid belts?"
}
| null | null | null | null | true | null |
9243
| null |
Default
| null | null |
null |
{
"abstract": " Multiple root estimation problems in statistical inference arise in many\ncontexts in the literature. In the context of maximum likelihood estimation,\nthe existence of multiple roots causes uncertainty in the computation of\nmaximum likelihood estimators using hill-climbing algorithms, and consequent\ndifficulties in the resulting statistical inference.\nIn this paper, we study the multiple roots phenomenon in maximum likelihood\nestimation for factor analysis. We prove that the corresponding likelihood\nequations have uncountably many feasible solutions even in the simplest cases.\nFor the case in which the observed data are two-dimensional and the unobserved\nfactor scores are one-dimensional, we prove that the solutions to the\nlikelihood equations form a one-dimensional real curve.\n",
"title": "The Multiple Roots Phenomenon in Maximum Likelihood Estimation for Factor Analysis"
}
| null | null | null | null | true | null |
9244
| null |
Default
| null | null |
null |
{
"abstract": " Let $\\mathbb{G}$ be a locally compact quantum group. We give a 1-1\ncorrespondence between group-like projections in $L^\\infty(\\mathbb{G})$\npreserved by the scaling group and idempotent states on the dual quantum group\n$\\widehat{\\mathbb{G}}$. As a byproduct we give a simple proof that normal\nintegrable coideals in $L^\\infty(\\mathbb{G})$ which are preserved by the\nscaling group are in 1-1 correspondence with compact quantum subgroups of\n$\\mathbb{G}$.\n",
"title": "Group-like projections for locally compact quantum groups"
}
| null | null | null | null | true | null |
9245
| null |
Default
| null | null |
null |
{
"abstract": " The temperature-dependent optical response of excitons in semiconductors is\ncontrolled by the exciton-phonon interaction. When the exciton-lattice coupling\nis weak, the excitonic line has a Lorentzian profile resulting from motional\nnarrowing, with a width increasing linearly with the lattice temperature $T$.\nIn contrast, when the exciton-lattice coupling is strong, the lineshape is\nGaussian with a width increasing sublinearly with the lattice temperature,\nproportional to $\\sqrt{T}$. While the former case is commonly reported in the\nliterature, here the latter is reported for the first time, for hexagonal boron\nnitride. Thus the theoretical predictions of Toyozawa [Progr. Theor. Phys. 20,\n53 (1958)] are supported by demonstrating that the exciton-phonon interaction\nis in the strong coupling regime in this Van der Waals crystal.\n",
"title": "Exciton-phonon interaction in the strong coupling regime in hexagonal boron nitride"
}
| null | null | null | null | true | null |
9246
| null |
Default
| null | null |
null |
{
"abstract": " The Lowest Landau Level (LLL) equation emerges as an accurate approximation\nfor a class of dynamical regimes of Bose-Einstein Condensates (BEC) in\ntwo-dimensional isotropic harmonic traps in the limit of weak interactions.\nBuilding on recent developments in the field of spatially confined extended\nHamiltonian systems, we find a fully nonlinear solution of this equation\nrepresenting periodically modulated precession of a single vortex. Motions of\nthis type have been previously seen in numerical simulations and experiments at\nmoderately weak coupling. Our work provides the first controlled analytic\nprediction for trajectories of a single vortex, suggests new targets for\nexperiments, and opens up the prospect of finding analytic multi-vortex\nsolutions.\n",
"title": "Exact lowest-Landau-level solutions for vortex precession in Bose-Einstein condensates"
}
| null | null | null | null | true | null |
9247
| null |
Default
| null | null |
null |
{
"abstract": " In this work we present a novel framework that uses deep learning to predict\nobject feature points that are out-of-view in the input image. This system was\ndeveloped with the application of model-based tracking in mind, particularly in\nthe case of autonomous inspection robots, where only partial views of the\nobject are available. Out-of-view prediction is enabled by applying scaling to\nthe feature point labels during network training. This is combined with a\nrecurrent neural network architecture designed to provide the final prediction\nlayers with rich feature information from across the spatial extent of the\ninput image. To show the versatility of these out-of-view predictions, we\ndescribe how to integrate them in both a particle filter tracker and an\noptimisation based tracker. To evaluate our work we compared our framework with\none that predicts only points inside the image. We show that as the amount of\nthe object in view decreases, being able to predict outside the image bounds\nadds robustness to the final pose estimation.\n",
"title": "Predicting Out-of-View Feature Points for Model-Based Camera Pose Estimation"
}
| null | null | null | null | true | null |
9248
| null |
Default
| null | null |
null |
{
"abstract": " Recently, Renault (2016) studied the dual bin packing problem in the\nper-request advice model of online algorithms. He showed that given\n$O(1/\\epsilon)$ advice bits for each input item allows approximating the dual\nbin packing problem online to within a factor of $1+\\epsilon$. Renault asked\nabout the advice complexity of dual bin packing in the tape-advice model of\nonline algorithms. We make progress on this question. Let $s$ be the maximum\nbit size of an input item weight. We present a conceptually simple online\nalgorithm that with total advice $O\\left(\\frac{s + \\log n}{\\epsilon^2}\\right)$\napproximates the dual bin packing to within a $1+\\epsilon$ factor. To this end,\nwe describe and analyze a simple offline PTAS for the dual bin packing problem.\nAlthough a PTAS for a more general problem was known prior to our work\n(Kellerer 1999, Chekuri and Khanna 2006), our PTAS is arguably simpler to state\nand analyze. As a result, we could easily adapt our PTAS to obtain the\nadvice-complexity result.\nWe also consider whether the dependence on $s$ is necessary in our algorithm.\nWe show that if $s$ is unrestricted then for small enough $\\epsilon > 0$\nobtaining a $1+\\epsilon$ approximation to the dual bin packing requires\n$\\Omega_\\epsilon(n)$ bits of advice. To establish this lower bound we analyze\nan online reduction that preserves the advice complexity and approximation\nratio from the binary separation problem due to Boyar et al. (2016). We define\ntwo natural advice complexity classes that capture the distinction similar to\nthe Turing machine world distinction between pseudo polynomial time algorithms\nand polynomial time algorithms. Our results on the dual bin packing problem\nimply the separation of the two classes in the advice complexity world.\n",
"title": "A Simple PTAS for the Dual Bin Packing Problem and Advice Complexity of Its Online Version"
}
| null | null | null | null | true | null |
9249
| null |
Default
| null | null |
null |
{
"abstract": " An algorithmic proof of the General Néron Desingularization theorem and its\nuniform version is given for morphisms with big smooth locus. This generalizes\nthe results for the one-dimensional case.\n",
"title": "Constructive Néron Desingularization of algebras with big smooth locus"
}
| null | null | null | null | true | null |
9250
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of list-decodable Gaussian mean estimation and the\nrelated problem of learning mixtures of separated spherical Gaussians. We\ndevelop a set of techniques that yield new efficient algorithms with\nsignificantly improved guarantees for these problems.\n{\\bf List-Decodable Mean Estimation.} Fix any $d \\in \\mathbb{Z}_+$ and $0<\n\\alpha <1/2$. We design an algorithm with runtime $O\n(\\mathrm{poly}(n/\\alpha)^{d})$ that outputs a list of $O(1/\\alpha)$ many\ncandidate vectors such that with high probability one of the candidates is\nwithin $\\ell_2$-distance $O(\\alpha^{-1/(2d)})$ from the true mean. The only\nprevious algorithm for this problem achieved error $\\tilde O(\\alpha^{-1/2})$\nunder second moment conditions. For $d = O(1/\\epsilon)$, our algorithm runs in\npolynomial time and achieves error $O(\\alpha^{\\epsilon})$. We also give a\nStatistical Query lower bound suggesting that the complexity of our algorithm\nis qualitatively close to best possible.\n{\\bf Learning Mixtures of Spherical Gaussians.} We give a learning algorithm\nfor mixtures of spherical Gaussians that succeeds under significantly weaker\nseparation assumptions compared to prior work. For the prototypical case of a\nuniform mixture of $k$ identity covariance Gaussians we obtain: For any\n$\\epsilon>0$, if the pairwise separation between the means is at least\n$\\Omega(k^{\\epsilon}+\\sqrt{\\log(1/\\delta)})$, our algorithm learns the unknown\nparameters within accuracy $\\delta$ with sample complexity and running time\n$\\mathrm{poly} (n, 1/\\delta, (k/\\epsilon)^{1/\\epsilon})$. The previously best\nknown polynomial time algorithm required separation at least $k^{1/4}\n\\mathrm{polylog}(k/\\delta)$.\nOur main technical contribution is a new technique, using degree-$d$\nmultivariate polynomials, to remove outliers from high-dimensional datasets\nwhere the majority of the points are corrupted.\n",
"title": "List-Decodable Robust Mean Estimation and Learning Mixtures of Spherical Gaussians"
}
| null | null |
[
"Computer Science",
"Mathematics",
"Statistics"
] | null | true | null |
9251
| null |
Validated
| null | null |
null |
{
"abstract": " For a (possibly infinite) fixed family of graphs F, we say that a graph G\noverlays F on a hypergraph H if V(H) is equal to V(G) and the subgraph of G\ninduced by every hyperedge of H contains some member of F as a spanning\nsubgraph. While it is easy to see that the complete graph on |V(H)| overlays F\non a hypergraph H whenever the problem admits a solution, the Minimum F-Overlay\nproblem asks for such a graph with the minimum number of edges. This problem\nallows one to generalize some natural problems which may arise in practice. For\ninstance, if the family F contains all connected graphs, then Minimum F-Overlay\ncorresponds to the Minimum Connectivity Inference problem (also known as the Subset\nInterconnection Design problem) introduced for the low-resolution\nreconstruction of macro-molecular assembly in structural biology, or for the\ndesign of networks. Our main contribution is a strong dichotomy result regarding\nthe polynomial vs. NP-hard status with respect to the considered family F.\nRoughly speaking, we show that the easy cases one can think of (e.g. when\nedgeless graphs of the right sizes are in F, or if F contains only cliques) are\nthe only families giving rise to a polynomial problem: all others are\nNP-complete. We then investigate the parameterized complexity of the problem and\ngive similar sufficient conditions on F that give rise to W[1]-hard, W[2]-hard\nor FPT problems when the parameter is the size of the solution. This yields an\nFPT/W[1]-hard dichotomy for a relaxed problem, where every hyperedge of H must\ncontain some member of F as a (not necessarily spanning) subgraph.\n",
"title": "Complexity Dichotomies for the Minimum F-Overlay Problem"
}
| null | null | null | null | true | null |
9252
| null |
Default
| null | null |
null |
{
"abstract": " Using a new and general method, we prove the existence of random attractor\nfor the three dimensional stochastic primitive equations defined on a manifold\n$\\D\\subset\\R^3$ improving the existence of weak attractor for the deterministic\nmodel. Furthermore, we show the existence of the invariant measure.\n",
"title": "Asymptotic behavior of 3-D stochastic primitive equations of large-scale moist atmosphere with additive noise"
}
| null | null | null | null | true | null |
9253
| null |
Default
| null | null |
null |
{
"abstract": " We consider the chemotaxis problem for a one-dimensional system. To analyze\nthe interaction of bacteria and attractant we use a modified Keller-Segel model\nwhich accounts for attractant absorption. To describe the system we use the\nchemotaxis sensitivity function, which characterizes nonuniformity of bacteria\ndistribution. In particular, we investigate how the chemotaxis sensitivity\nfunction depends on the concentration of attractant at the boundary of the\nsystem. It is known that in the system without absorption the chemotaxis\nsensitivity function has a bell-shaped maximum. Here we show that attractant\nabsorption and special boundary conditions for bacteria can cause the\nappearance of an additional maximum in the chemotaxis sensitivity function. The\nvalue of this maximum is determined by the intensity of absorption.\n",
"title": "Analytical Approach for Calculating Chemotaxis Sensitivity Function"
}
| null | null | null | null | true | null |
9254
| null |
Default
| null | null |
null |
{
"abstract": " Is it possible to generally construct a dynamical system to simulate a black\nsystem without recovering the equations of motion of the latter? Here we show\nthat this goal can be approached by a learning machine. Trained by a set of\ninput-output responses or a segment of time series of a black system, a\nlearning machine can serve as a copy system to mimic the dynamics of\nvarious black systems. It can not only behave as the black system at the\nparameter set at which the training data are made, but also reproduce the evolution\nhistory of the black system. As a result, the learning machine provides an\neffective way for prediction, and enables one to probe the global dynamics of a\nblack system. These findings have significance for practical systems whose\nequations of motion cannot be approached accurately. Examples of copying the\ndynamics of an artificial neural network, the Lorenz system, and a variable\nstar are given. Our idea paves a possible way towards copying a living brain.\n",
"title": "Copy the dynamics using a learning machine"
}
| null | null | null | null | true | null |
9255
| null |
Default
| null | null |
null |
{
"abstract": " The implementation of the algebraic Bethe ansatz for the XXZ Heisenberg spin\nchain, of arbitrary spin-$s$, in the case when both reflection matrices have\nthe upper-triangular form, is analyzed. The general form of the Bethe vectors is\nstudied. In the particular form, Bethe vectors admit the recurrent procedure,\nwith an appropriate modification, used previously in the case of the XXX\nHeisenberg chain. As expected, these Bethe vectors yield the strikingly simple\nexpression for the off-shell action of the transfer matrix of the chain as well\nas the spectrum of the transfer matrix and the corresponding Bethe equations.\nAs in the XXX case, the so-called quasi-classical limit gives the off-shell\naction of the generating function of the corresponding trigonometric Gaudin\nHamiltonians with boundary terms.\n",
"title": "Algebraic Bethe ansatz for the XXZ Heisenberg spin chain with triangular boundaries and the corresponding Gaudin model"
}
| null | null | null | null | true | null |
9256
| null |
Default
| null | null |
null |
{
"abstract": " A graphene-based spin-diffusive (GrSD) neural network is presented in this\nwork that takes advantage of the locally tunable spin transport of graphene and\nthe non-volatility of nanomagnets. By using electrostatically gated graphene as\nspintronic synapses, a weighted summation operation can be performed in the\nspin domain while the weights can be programmed using circuits in the charge\ndomain. Four-component spin/charge circuit simulations coupled to magnetic\ndynamics are used to show the feasibility of the neuron-synapse functionality\nand quantify the analog weighting capability of the graphene under different\nspin relaxation mechanisms. By realizing transistor-free weight implementation,\nthe graphene spin-diffusive neural network reduces the energy consumption to\n0.08-0.32 fJ per cell-synapse and achieves significantly better scalability\ncompared to its digital counterparts, particularly as the number and bit\naccuracy of the synapses increase.\n",
"title": "Using Programmable Graphene Channels as Weights in Spin-Diffusive Neuromorphic Computing"
}
| null | null | null | null | true | null |
9257
| null |
Default
| null | null |
null |
{
"abstract": " Quasi-cyclic (QC) low-density parity-check (LDPC) codes, which are known as\nQC-LDPC codes, have many applications due to their simple encoding\nimplementation by means of cyclic shift registers. In this paper, we construct\nQC-LDPC codes from group rings. A group ring is a free module (at the same time\na ring) constructed in a natural way from any given ring and any given group.\nWe present a structure based on the elements of a group ring for constructing\nQC-LDPC codes. Some of the previously addressed methods for constructing\nQC-LDPC codes based on finite fields are special cases of the proposed\nconstruction method. The constructed QC-LDPC codes perform very well over the\nadditive white Gaussian noise (AWGN) channel with iterative decoding in terms\nof bit-error probability and block-error probability. Simulation results\ndemonstrate that the proposed codes have competitive performance in comparison\nwith the similar existing LDPC codes. Finally, we propose a new encoding method\nfor the proposed group ring based QC-LDPC codes that can be implemented faster\nthan the current encoding methods. The encoding complexity of the proposed\nmethod is analyzed mathematically, and indicates a significant reduction in the\nrequired number of operations, even when compared to the available efficient\nencoding methods that have linear time and space complexities.\n",
"title": "Construction and Encoding of QC-LDPC Codes Using Group Rings"
}
| null | null | null | null | true | null |
9258
| null |
Default
| null | null |
null |
{
"abstract": " The GW method is a many-body approach capable of providing quasiparticle\nbands for realistic systems spanning physics, chemistry, and materials science.\nDespite its power, GW is not routinely applied to large complex materials due\nto its computational expense. We perform an exact recasting of the GW\npolarizability and the self-energy as Laplace integrals over imaginary time\npropagators. We then \"shred\" the propagators (via energy windowing) and\napproximate them in a controlled manner by using Gauss-Laguerre quadrature and\ndiscrete variable methods to treat the imaginary time propagators in real\nspace. The resulting cubic scaling GW method has a sufficiently small prefactor\nto outperform standard quartic scaling methods on small systems (>=10 atoms)\nand also represents a substantial improvement over several other cubic methods\ntested. This approach is useful for evaluating quantum mechanical response\nfunctions involving large sums containing energy (difference) denominators.\n",
"title": "Imaginary time, shredded propagator method for large-scale GW calculations"
}
| null | null | null | null | true | null |
9259
| null |
Default
| null | null |
null |
{
"abstract": " This paper considers the problem of predicting the number of claims that have\nalready incurred in past exposure years, but which have not yet been reported\nto the insurer. This is an important building block in the risk management\nstrategy of an insurer since the company should be able to fulfill its\nliabilities with respect to such claims. Our approach puts emphasis on modeling\nthe time between the occurrence and reporting of claims, the so-called\nreporting delay. Using data at a daily level we propose a micro-level model for\nthe heterogeneity in reporting delay caused by calendar day effects in the\nreporting process, such as the weekday pattern and holidays. A simulation study\nidentifies the strengths and weaknesses of our approach in several scenarios\ncompared to traditional methods to predict the number of incurred but not\nreported claims from aggregated data (i.e. the chain ladder method). We also\nillustrate our model on a European general liability insurance data set and\nconclude that the granular approach compared to the chain ladder method is more\nrobust with respect to volatility in the occurrence process. Our framework can\nbe extended to other predictive problems where interest goes to events that\nincurred in the past but which are subject to an observation delay (e.g. the\nnumber of infections during an epidemic).\n",
"title": "A time change strategy to model reporting delay dynamics in claims reserving"
}
| null | null | null | null | true | null |
9260
| null |
Default
| null | null |
null |
{
"abstract": " The solution of inverse problems in a variational setting finds best\nestimates of the model parameters by minimizing a cost function that penalizes\nthe mismatch between model outputs and observations. The gradients required by\nthe numerical optimization process are computed using adjoint models.\nExponential integrators are a promising family of time discretizations for\nevolutionary partial differential equations. In order to allow the use of these\ndiscretizations in the context of inverse problems, adjoints of exponential\nintegrators are required. This work derives the discrete adjoint formulae for\nW-type exponential propagation iterative methods of Runge-Kutta type (EPIRK-W).\nThese methods allow arbitrary approximations of the Jacobian while maintaining\nthe overall accuracy of the forward integration. The use of Jacobian\napproximation matrices that do not depend on the model state avoids the complex\ncalculation of Hessians in the discrete adjoint formulae, and allows efficient\nadjoint code generation via algorithmic differentiation. We use the discrete\nEPIRK-W adjoints to solve inverse problems with the Lorenz-96 model and a\ncomputational magnetics benchmark test. Numerical results validate our\ntheoretical derivations.\n",
"title": "Solving Parameter Estimation Problems with Discrete Adjoint Exponential Integrators"
}
| null | null | null | null | true | null |
9261
| null |
Default
| null | null |
null |
{
"abstract": " We study the geometry of Finsler submanifolds using the pulled-back approach.\nWe define the Finsler normal pulled-back bundle and obtain the induced\ngeometric objects, namely, induced pullback Finsler connection, normal pullback\nFinsler connection, second fundamental form and shape operator. Under a certain\ncondition, we prove that induced and intrinsic Hashiguchi connections coincide\non the pulled-back bundle of Finsler submanifold.\n",
"title": "Induced and intrinsic Hashiguchi connections on Finsler submanifolds"
}
| null | null |
[
"Mathematics"
] | null | true | null |
9262
| null |
Validated
| null | null |
null |
{
"abstract": " PET image reconstruction is challenging due to the ill-posedness of the\ninverse problem and the limited number of detected photons. Recently deep neural\nnetworks have been widely and successfully used in computer vision tasks and\nhave attracted growing interest in medical imaging. In this work, we trained a deep\nresidual convolutional neural network to improve PET image quality by using the\nexisting inter-patient information. An innovative feature of the proposed\nmethod is that we embed the neural network in the iterative reconstruction\nframework for image representation, rather than using it as a post-processing\ntool. We formulate the objective function as a constrained optimization problem\nand solve it using the alternating direction method of multipliers (ADMM)\nalgorithm. Both simulation data and hybrid real data are used to evaluate the\nproposed method. Quantification results show that our proposed iterative neural\nnetwork method can outperform the neural network denoising and conventional\npenalized maximum likelihood methods.\n",
"title": "Iterative PET Image Reconstruction Using Convolutional Neural Network Representation"
}
| null | null | null | null | true | null |
9263
| null |
Default
| null | null |
null |
{
"abstract": " Round functions used as building blocks for iterated block ciphers, both in\nthe case of Substitution-Permutation Networks and Feistel Networks, are often\nobtained as the composition of different layers which provide confusion and\ndiffusion, and key additions. The bijectivity of any encryption function,\ncrucial in order to make the decryption possible, is guaranteed by the use of\ninvertible layers or by the Feistel structure. In this work a new family of\nciphers, called wave ciphers, is introduced. In wave ciphers, round functions\nfeature wave functions, which are vectorial Boolean functions obtained as the\ncomposition of non-invertible layers, where the confusion layer enlarges the\nmessage, which returns to its original size after the diffusion layer is\napplied. This is motivated by the fact that relaxing the requirement that all\nthe layers are invertible allows one to consider more functions which are optimal\nwith regard to non-linearity. In particular, it allows one to consider injective APN\nS-boxes. In order to guarantee efficient decryption we propose to use wave\nfunctions in Feistel Networks. With regard to security, the immunity from some\ngroup-theoretical attacks is investigated. In particular, it is shown how to\nprevent the group generated by the round functions from acting imprimitively, which\nwould represent a serious flaw for the cipher.\n",
"title": "Wave-Shaped Round Functions and Primitive Groups"
}
| null | null | null | null | true | null |
9264
| null |
Default
| null | null |
null |
{
"abstract": " Time crystals, a phase showing spontaneous breaking of time-translation\nsymmetry, have been an intriguing subject for systems far away from equilibrium.\nRecent experiments found such a phase both in the presence and absence of\nlocalization, while in theories localization by disorder is usually assumed a\npriori. In this work, we point out that time crystals can generally exist in\nsystems without disorder. A series of clean quasi-one-dimensional models under\nFloquet driving are proposed to demonstrate this unexpected result in\nprinciple. Robust time crystalline orders are found in the strongly interacting\nregime along with the emergent integrals of motion in the dynamical system,\nwhich can be characterized by level statistics and the out-of-time-ordered\ncorrelators. We propose two cold atom experimental schemes to realize the clean\nFloquet time crystals, one by making use of dipolar gases and another by\nsynthetic dimensions.\n",
"title": "Clean Floquet Time Crystals: Models and Realizations in Cold Atoms"
}
| null | null | null | null | true | null |
9265
| null |
Default
| null | null |
null |
{
"abstract": " The paper is devoted to the development of control procedures with a guide\nfor conflict-controlled dynamical systems described by ordinary fractional\ndifferential equations with the Caputo derivative of an order $\\alpha \\in (0,\n1).$ For the case when the guide is in a certain sense a copy of the system, a\nmutual aiming procedure between the initial system and the guide is elaborated.\nThe proof of proximity between motions of the systems is based on the estimate\nof the fractional derivative of the superposition of a convex Lyapunov function\nand a function represented by the fractional integral of an essentially bounded\nmeasurable function. This estimate can be considered as a generalization of the\nknown estimates of such type. An example is considered which illustrates the\nworkability of the proposed control procedures.\n",
"title": "Fractional Derivatives of Convex Lyapunov Functions and Control Problems in Fractional Order Systems"
}
| null | null | null | null | true | null |
9266
| null |
Default
| null | null |
null |
{
"abstract": " To investigate the role of tachysterol in the photophysical/chemical\nregulation of vitamin D photosynthesis, we studied its electronic absorption\nproperties and excited state dynamics using time-dependent density functional\ntheory (TDDFT), coupled cluster theory (CC2), and non-adiabatic molecular\ndynamics. In excellent agreement with experiments, the simulated electronic\nspectrum shows a broad absorption band covering the spectra of the other\nvitamin D photoisomers. The broad band stems from the spectral overlap of four\ndifferent ground state rotamers. After photoexcitation, the first excited\nsinglet state (S1) decays within 882 fs. The S1 dynamics is characterized by a\nstrong twisting of the central double bond. 96% of all trajectories relax\nwithout chemical transformation to the ground state. In 2.3 % of the\ntrajectories we observed [1,5]-sigmatropic hydrogen shift forming the partly\ndeconjugated toxisterol D1. 1.4 % previtamin D formation is observed via\nhula-twist double bond isomerization. We find a strong dependence between\nphotoreactivity and dihedral angle conformation: hydrogen shift only occurs in\ncEc and cEt rotamers and double bond isomerization occurs mainly in cEc\nrotamers. Our study confirms the hypothesis that cEc rotamers are more prone to\nprevitamin D formation than other isomers. We also observe the formation of a\ncyclobutene-toxisterol in the hot ground state (0.7 %). Due to its strong\nabsorption and unreactive behavior, tachysterol acts mainly as a sun shield\nsuppressing previtamin D formation. Tachysterol shows stronger toxisterol\nformation than previtamin D. Absorption of low energy UV light by the cEc\nrotamer can lead to previtamin D formation. Our study reinforces a recent\nhypothesis that tachysterol can act as a previtamin D source when only low\nenergy ultraviolet light is available, as it is the case in winter or in the\nmorning and evening hours of the day.\n",
"title": "The role of tachysterol in vitamin D photosynthesis - A non-adiabatic molecular dynamics study"
}
| null | null | null | null | true | null |
9267
| null |
Default
| null | null |
null |
{
"abstract": " Online social platforms are beset with hateful speech - content that\nexpresses hatred for a person or group of people. Such content can frighten,\nintimidate, or silence platform users, and some of it can inspire other users\nto commit violence. Despite widespread recognition of the problems posed by\nsuch content, reliable solutions even for detecting hateful speech are lacking.\nIn the present work, we establish why keyword-based methods are insufficient\nfor detection. We then propose an approach to detecting hateful speech that\nuses content produced by self-identifying hateful communities as training data.\nOur approach bypasses the expensive annotation process often required to train\nkeyword systems and performs well across several established platforms, making\nsubstantial improvements over current state-of-the-art approaches.\n",
"title": "A Web of Hate: Tackling Hateful Speech in Online Social Spaces"
}
| null | null | null | null | true | null |
9268
| null |
Default
| null | null |
null |
{
"abstract": " An integral power series is called lacunary modulo $M$ if almost all of its\ncoefficients are divisible by $M$. Motivated by the parity problem for the\npartition function, $p(n)$, Gordon and Ono studied the generating functions for\n$t$-regular partitions, and determined conditions for when these functions are\nlacunary modulo powers of primes. We generalize their results in a number of\nways by studying infinite products called Dedekind eta-quotients and\ngeneralized Dedekind eta-quotients. We then apply our results to the generating\nfunctions for the partition functions considered by Nekrasov, Okounkov, and\nHan.\n",
"title": "Lacunary Eta-quotients Modulo Powers of Primes"
}
| null | null | null | null | true | null |
9269
| null |
Default
| null | null |
null |
{
"abstract": " We describe sofic groupoids in elementary terms and prove several permanence\nproperties for soficity. We show that soficity can be determined in terms of the\nfull group alone, answering a question by Conley, Kechris and Tucker-Drob.\n",
"title": "An elementary approach to sofic groupoids"
}
| null | null |
[
"Mathematics"
] | null | true | null |
9270
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper, we propose a new combined message passing algorithm which\nallows belief propagation (BP) and mean field (MF) to be applied on the same factor\nnode, so that MF can be applied to hard constraint factors. Based on the\nproposed message passing algorithm, an iterative receiver is designed for\nMIMO-OFDM systems. Both BP and MF are exploited to deal with the hard\nconstraint factor nodes involving the multiplication of channel coefficients\nand data symbols, reducing the complexity compared with using BP alone. The numerical\nresults show that at high SNRs the BER performance of the proposed low complexity receiver\nclosely approaches that of the state-of-the-art receiver, where only BP is used\nto handle the hard constraint factors.\n",
"title": "A New Combination of Message Passing Techniques for Receiver Design in MIMO-OFDM Systems"
}
| null | null | null | null | true | null |
9271
| null |
Default
| null | null |
null |
{
"abstract": " The LSST software systems make extensive use of Python, with almost all of it\ninitially being developed solely in Python 2. Since LSST will be commissioned\nwhen Python 2 is end-of-lifed it is critical that we have all our code support\nPython 3 before commissioning begins. Over the past year we have made\nsignificant progress in migrating the bulk of the code from the Data Management\nsystem onto Python 3. This paper presents our migration methodology, and the\ncurrent status of the port, with our eventual aim to be running completely on\nPython 3 by early 2018. We also discuss recent modernizations to our Python\ncodebase.\n",
"title": "Modern Python at the Large Synoptic Survey Telescope"
}
| null | null |
[
"Physics"
] | null | true | null |
9272
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we study methods for estimating causal effects in settings with\npanel data, where a subset of units are exposed to a treatment during a subset\nof periods, and the goal is estimating counterfactual (untreated) outcomes for\nthe treated unit/period combinations. We develop a class of matrix completion\nestimators that uses the observed elements of the matrix of control outcomes\ncorresponding to untreated unit/periods to predict the \"missing\" elements of\nthe matrix, corresponding to treated units/periods. The approach estimates a\nmatrix that well-approximates the original (incomplete) matrix, but has lower\ncomplexity according to the nuclear norm for matrices. From a technical\nperspective, we generalize results from the matrix completion literature by\nallowing the patterns of missing data to have a time series dependency\nstructure. We also present novel insights concerning the connections between\nthe matrix completion literature, the literature on interactive fixed effects\nmodels and the literatures on program evaluation under unconfoundedness and\nsynthetic control methods.\n",
"title": "Matrix Completion Methods for Causal Panel Data Models"
}
| null | null | null | null | true | null |
9273
| null |
Default
| null | null |
null |
{
"abstract": " We give an explicit formula for singular surfaces of revolution with\nprescribed unbounded mean curvature. Using it, we give conditions for\nsingularities of such surfaces. Periodicity of these surfaces is also discussed.\n",
"title": "Singular surfaces of revolution with prescribed unbounded mean curvature"
}
| null | null | null | null | true | null |
9274
| null |
Default
| null | null |
null |
{
"abstract": " The $\\kappa$-mechanism has been successful in explaining the origin of\nobserved oscillations of many types of \"classical\" pulsating variable stars.\nHere we examine quantitatively if that same process is prominent enough to\nexcite the potential global oscillations within Jupiter, whose energy flux is\npowered by gravitational collapse rather than nuclear fusion. Additionally, we\nexamine whether external radiative forcing, i.e. starlight, could be a driver\nfor global oscillations in hot Jupiters orbiting various main-sequence stars at\ndefined orbital semimajor axes. Using planetary models generated by the Modules\nfor Experiments in Stellar Astrophysics (MESA) and nonadiabatic oscillation\ncalculations, we confirm that Jovian oscillations cannot be driven via the\n$\\kappa$-mechanism. However, we do show that in hot Jupiters oscillations can\nlikely be excited via the suppression of radiative cooling due to external\nradiation given a large enough stellar flux and the absence of a significant\noscillatory damping zone within the planet. This trend seems to not be\ndependent on the planetary mass. In future observations we can thus expect that\nsuch planets may be pulsating, thereby giving greater insight into the internal\nstructure of these bodies.\n",
"title": "A Possible Mechanism for Driving Oscillations in Hot Giant Planets"
}
| null | null | null | null | true | null |
9275
| null |
Default
| null | null |
null |
{
"abstract": " We classify all cubic extensions of any field of arbitrary characteristic, up\nto isomorphism, via an explicit construction involving three fundamental types\nof cubic forms. We deduce a classification of any Galois cubic extension of a\nfield. The splitting and ramification of places in a separable cubic extension\nof any global function field are completely determined, and precise\nRiemann-Hurwitz formulae are given. In doing so, we determine the decomposition\nof any cubic polynomial over a finite field.\n",
"title": "Cubic Fields: A Primer"
}
| null | null | null | null | true | null |
9276
| null |
Default
| null | null |
null |
{
"abstract": " With the installation of the Argus 16-pixel receiver covering 75-115 GHz on\nthe Green Bank Telescope (GBT), it is now possible to characterize the antenna\nbeam at very high frequencies, where the use of the active surface and\nout-of-focus holography are critical to the telescope's performance. A recent\nmeasurement in good weather conditions (low atmospheric opacity, low winds, and\nstable night-time thermal conditions) at 109.4 GHz yielded a FWHM beam of\n6.7\"x6.4\" in azimuth and elevation, respectively. This corresponds to\n1.16+/-0.03 Lambda/D at 109.4 GHz. The derived ratio agrees well with the\nlow-frequency value of 1.18+/-0.03 Lambda/D measured at 9.0 GHz. There are no\ndetectable side-lobes at either frequency. In good weather conditions and after\napplying the standard antenna corrections (pointing, focus, and the active\nsurface corrections for gravity and thermal effects), there is no measurable\ndegradation of the beam of the GBT at its highest operational frequencies.\n",
"title": "The GBT Beam Shape at 109 GHz"
}
| null | null | null | null | true | null |
9277
| null |
Default
| null | null |
null |
{
"abstract": " It is challenging to develop stochastic gradient based scalable inference for\ndeep discrete latent variable models (LVMs), due to the difficulties in not\nonly computing the gradients, but also adapting the step sizes to different\nlatent factors and hidden layers. For the Poisson gamma belief network (PGBN),\na recently proposed deep discrete LVM, we derive an alternative representation\nthat is referred to as deep latent Dirichlet allocation (DLDA). Exploiting data\naugmentation and marginalization techniques, we derive a block-diagonal Fisher\ninformation matrix and its inverse for the simplex-constrained global model\nparameters of DLDA. Exploiting that Fisher information matrix with stochastic\ngradient MCMC, we present topic-layer-adaptive stochastic gradient Riemannian\n(TLASGR) MCMC that jointly learns simplex-constrained global parameters across\nall layers and topics, with topic and layer specific learning rates.\nState-of-the-art results are demonstrated on big data sets.\n",
"title": "Deep Latent Dirichlet Allocation with Topic-Layer-Adaptive Stochastic Gradient Riemannian MCMC"
}
| null | null | null | null | true | null |
9278
| null |
Default
| null | null |
null |
{
"abstract": " There are two general views in causal analysis of experimental data: the\nsuper population view that the units are an independent sample from some\nhypothetical infinite populations, and the finite population view that the\npotential outcomes of the experimental units are fixed and the randomness comes\nsolely from the physical randomization of the treatment assignment. These two\nviews differ conceptually and mathematically, resulting in different sampling\nvariances of the usual difference-in-means estimator of the average causal\neffect. Practically, however, these two views result in identical variance\nestimators. By recalling a variance decomposition and exploiting a\ncompleteness-type argument, we establish a connection between these two views\nin completely randomized experiments. This alternative formulation could serve\nas a template for bridging finite and super population causal inference in\nother scenarios.\n",
"title": "Bridging Finite and Super Population Causal Inference"
}
| null | null | null | null | true | null |
9279
| null |
Default
| null | null |
null |
{
"abstract": " In recent years, significant progress has been made in solving challenging\nproblems across various domains using deep reinforcement learning (RL).\nReproducing existing work and accurately judging the improvements offered by\nnovel methods is vital to sustaining this progress. Unfortunately, reproducing\nresults for state-of-the-art deep RL methods is seldom straightforward. In\nparticular, non-determinism in standard benchmark environments, combined with\nvariance intrinsic to the methods, can make reported results tough to\ninterpret. Without significance metrics and tighter standardization of\nexperimental reporting, it is difficult to determine whether improvements over\nthe prior state-of-the-art are meaningful. In this paper, we investigate\nchallenges posed by reproducibility, proper experimental techniques, and\nreporting procedures. We illustrate the variability in reported metrics and\nresults when comparing against common baselines and suggest guidelines to make\nfuture results in deep RL more reproducible. We aim to spur discussion about\nhow to ensure continued progress in the field by minimizing wasted effort\nstemming from results that are non-reproducible and easily misinterpreted.\n",
"title": "Deep Reinforcement Learning that Matters"
}
| null | null | null | null | true | null |
9280
| null |
Default
| null | null |
null |
{
"abstract": " The main goal for this article is to compare performance penalties when using\nKVM virtualization and Docker containers for creating isolated environments for\nHPC applications. The article provides both data obtained using commonly\naccepted synthetic tests (High Performance Linpack) and real life applications\n(OpenFOAM). The article highlights the influence on resulting application\nperformance of major infrastructure configuration options: CPU type presented\nto VM, networking connection type used.\n",
"title": "Testing Docker Performance for HPC Applications"
}
| null | null | null | null | true | null |
9281
| null |
Default
| null | null |
null |
{
"abstract": " We present the first real-world application of methods for improving neural\nmachine translation (NMT) with human reinforcement, based on explicit and\nimplicit user feedback collected on the eBay e-commerce platform. Previous work\nhas been confined to simulation experiments, whereas in this paper we work with\nreal logged feedback for offline bandit learning of NMT parameters. We conduct\na thorough analysis of the available explicit user judgments---five-star\nratings of translation quality---and show that they are not reliable enough to\nyield significant improvements in bandit learning. In contrast, we successfully\nutilize implicit task-based feedback collected in a cross-lingual search task\nto improve task-specific and machine translation quality metrics.\n",
"title": "Can Neural Machine Translation be Improved with User Feedback?"
}
| null | null |
[
"Statistics"
] | null | true | null |
9282
| null |
Validated
| null | null |
null |
{
"abstract": " We demonstrate the successful experimental implementation of a multi-stage\nZeeman decelerator utilizing the new concept described in the accompanying\npaper. The decelerator consists of an array of 25 hexapoles and 24 solenoids.\nThe performance of the decelerator in acceleration, deceleration and guiding\nmodes is characterized using beams of metastable Helium ($^3S$) atoms. Up to\n60% of the kinetic energy was removed for He atoms that have an initial\nvelocity of 520 m/s. The hexapoles consist of permanent magnets, whereas the\nsolenoids are produced from a single hollow copper capillary through which\ncooling liquid is passed. The solenoid design allows for excellent thermal\nproperties, and enables the use of readily available and cheap electronics\ncomponents to pulse high currents through the solenoids. The Zeeman decelerator\ndemonstrated here is mechanically easy to build, can be operated with\ncost-effective electronics, and can run at repetition rates up to 10 Hz.\n",
"title": "A new concept multi-stage Zeeman decelerator: experimental implementation"
}
| null | null | null | null | true | null |
9283
| null |
Default
| null | null |
null |
{
"abstract": " Simulation-based training (SBT) is gaining popularity as a low-cost and\nconvenient training technique in a vast range of applications. However, for a\nSBT platform to be fully utilized as an effective training tool, it is\nessential that feedback on performance is provided automatically in real-time\nduring training. It is the aim of this paper to develop an efficient and\neffective feedback generation method for the provision of real-time feedback in\nSBT. Existing methods either have low effectiveness in improving novice skills\nor suffer from low efficiency, resulting in their inability to be used in\nreal-time. In this paper, we propose a neural network based method to generate\nfeedback using the adversarial technique. The proposed method utilizes a\nbounded adversarial update to minimize a L1 regularized loss via\nback-propagation. We empirically show that the proposed method can be used to\ngenerate simple, yet effective feedback. Also, it was observed to have high\neffectiveness and efficiency when compared to existing methods, thus making it\na promising option for real-time feedback generation in SBT.\n",
"title": "Adversarial Generation of Real-time Feedback with Neural Networks for Simulation-based Training"
}
| null | null | null | null | true | null |
9284
| null |
Default
| null | null |
null |
{
"abstract": " We present gravitational lens models of the multiply imaged quasar DES\nJ0408-5354, recently discovered in the Dark Energy Survey (DES) footprint, with\nthe aim of interpreting its remarkable quad-like configuration. We first model\nthe DES single-epoch $grizY$ images as a superposition of a lens galaxy and\nfour point-like objects, obtaining spectral energy distributions (SEDs) and\nrelative positions for the objects. Three of the point sources (A,B,D) have\nSEDs compatible with the discovery quasar spectra, while the faintest\npoint-like image (G2/C) shows significant reddening and a `grey' dimming of\n$\\approx0.8$mag. In order to understand the lens configuration, we fit\ndifferent models to the relative positions of A,B,D. Models with just a single\ndeflector predict a fourth image at the location of G2/C but considerably\nbrighter and bluer. The addition of a small satellite galaxy ($R_{\\rm\nE}\\approx0.2$\") in the lens plane near the position of G2/C suppresses the flux\nof the fourth image and can explain both the reddening and grey dimming. All\nmodels predict a main deflector with Einstein radius between $1.7\"$ and $2.0\",$\nvelocity dispersion $267-280$km/s and enclosed mass $\\approx\n6\\times10^{11}M_{\\odot},$ even though higher resolution imaging data are needed\nto break residual degeneracies in model parameters. The longest time-delay\n(B-A) is estimated as $\\approx 85$ (resp. $\\approx125$) days by models with\n(resp. without) a perturber near G2/C. The configuration and predicted\ntime-delays of J0408-5354 make it an excellent target for follow-up aimed at\nunderstanding the source quasar host galaxy and substructure in the lens, and\nmeasuring cosmological parameters. We also discuss some lessons learnt from\nJ0408-5354 on lensed quasar finding strategies, due to its chromaticity and\nmorphology.\n",
"title": "Models of the strongly lensed quasar DES J0408-5354"
}
| null | null | null | null | true | null |
9285
| null |
Default
| null | null |
null |
{
"abstract": " Variable selection plays a fundamental role in high-dimensional data\nanalysis. Various methods have been developed for variable selection in recent\nyears. Well-known examples are forward stepwise regression (FSR) and least\nangle regression (LARS), among others. These methods typically add variables\ninto the model one by one. For such selection procedures, it is crucial to find\na stopping criterion that controls model complexity. One of the most commonly\nused techniques to this end is cross-validation (CV) which, in spite of its\npopularity, has two major drawbacks: expensive computational cost and lack of\nstatistical interpretation. To overcome these drawbacks, we introduce a\nflexible and efficient test-based variable selection approach that can be\nincorporated into any sequential selection procedure. The test, which is on the\noverall signal in the remaining inactive variables, is based on the maximal\nabsolute partial correlation between the inactive variables and the response\ngiven active variables. We develop the asymptotic null distribution of the\nproposed test statistic as the dimension tends to infinity uniformly in the\nsample size. We also show that the test is consistent. With this test, at each\nstep of the selection, a new variable is included if and only if the $p$-value\nis below some pre-defined level. Numerical studies show that the proposed\nmethod delivers very competitive performance in terms of variable selection\naccuracy and computational complexity compared to CV.\n",
"title": "Efficient Test-based Variable Selection for High-dimensional Linear Models"
}
| null | null | null | null | true | null |
9286
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the formation and early evolution of star clusters assuming\nthat they form from a turbulent starless clump of given mass bounded inside a\nparent self-gravitating molecular cloud characterized by a particular mass\nsurface density. As a first step we assume instantaneous star cluster formation\nand gas expulsion. We draw our initial conditions from observed properties of\nstarless clumps. We follow the early evolution of the clusters up to 20 Myr,\ninvestigating effects of different star formation efficiencies, primordial\nbinary fractions and eccentricities and primordial mass segregation levels. We\ninvestigate clumps with initial masses of $M_{\\rm cl}=3000\\:{\\rm M}_\\odot$\nembedded in ambient cloud environments with mass surface densities,\n$\\Sigma_{\\rm cloud}=0.1$ and $1\\:{\\rm g\\:cm^{-2}}$. We show that these models\nof fast star cluster formation result, in the fiducial case, in clusters that\nexpand rapidly, even considering only the bound members. Clusters formed from\nhigher $\\Sigma_{\\rm cloud}$ environments tend to expand more quickly, so are\nsoon larger than clusters born from lower $\\Sigma_{\\rm cloud}$ conditions. To\nform a young cluster of a given age, stellar mass and mass surface density,\nthese models need to assume a parent molecular clump that is many times denser,\nwhich is unrealistic compared to observed systems. We also show that in these\nmodels the initial binary properties are only slightly modified by\ninteractions, meaning that binary properties, e.g., at 20 Myr, are very similar\nto those at birth. With this study we set up the basis of future work where we\nwill investigate more realistic models of star formation compared to this\ninstantaneous, baseline case.\n",
"title": "Star Cluster Formation from Turbulent Clumps. I. The Fast Formation Limit"
}
| null | null | null | null | true | null |
9287
| null |
Default
| null | null |
null |
{
"abstract": " Sleep stage classification constitutes an important preliminary exam in the\ndiagnosis of sleep disorders. It is traditionally performed by a sleep expert\nwho assigns to each 30s of signal a sleep stage, based on the visual inspection\nof signals such as electroencephalograms (EEG), electrooculograms (EOG),\nelectrocardiograms (ECG) and electromyograms (EMG). We introduce here the first\ndeep learning approach for sleep stage classification that learns end-to-end\nwithout computing spectrograms or extracting hand-crafted features, that\nexploits all multivariate and multimodal Polysomnography (PSG) signals (EEG,\nEMG and EOG), and that can exploit the temporal context of each 30s window of\ndata. For each modality the first layer learns linear spatial filters that\nexploit the array of sensors to increase the signal-to-noise ratio, and the\nlast layer feeds the learnt representation to a softmax classifier. Our model\nis compared to alternative automatic approaches based on convolutional networks\nor decisions trees. Results obtained on 61 publicly available PSG records with\nup to 20 EEG channels demonstrate that our network architecture yields\nstate-of-the-art performance. Our study reveals a number of insights on the\nspatio-temporal distribution of the signal of interest: a good trade-off for\noptimal classification performance measured with balanced accuracy is to use 6\nEEG with 2 EOG (left and right) and 3 EMG chin channels. Also exploiting one\nminute of data before and after each data segment offers the strongest\nimprovement when a limited number of channels is available. As sleep experts,\nour system exploits the multivariate and multimodal nature of PSG signals in\norder to deliver state-of-the-art classification performance with a small\ncomputational cost.\n",
"title": "A deep learning architecture for temporal sleep stage classification using multivariate and multimodal time series"
}
| null | null | null | null | true | null |
9288
| null |
Default
| null | null |
null |
{
"abstract": " A gambler moves on the vertices $1, \\ldots, n$ of a graph using the\nprobability distribution $p_{1}, \\ldots, p_{n}$. A cop pursues the gambler on\nthe graph, only being able to move between adjacent vertices. What is the\nexpected number of moves that the gambler can make until the cop catches them?\nKomarov and Winkler proved an upper bound of approximately $1.97n$ for the\nexpected capture time on any connected $n$-vertex graph when the cop does not\nknow the gambler's distribution. We improve this upper bound to approximately\n$1.95n$ by modifying the cop's pursuit algorithm.\n",
"title": "An anti-incursion algorithm for unknown probabilistic adversaries on connected graphs"
}
| null | null | null | null | true | null |
9289
| null |
Default
| null | null |
null |
{
"abstract": " Efficient, reliable trapping of execution in a program at the desired\nlocation is a hot area of research for security professionals. The progression\nof debuggers and malware is akin to a game of cat and mouse - each are\nconstantly in a state of trying to thwart one another. At the core of most\nefficient debuggers today is a combination of virtual machines and traditional\nbinary modification breakpoints (int3). In this paper, we present a design for\nVirtual Breakpoints, a modification to the x86 MMU which brings breakpoint\nmanagement into hardware alongside page tables. We demonstrate the fundamental\nabstraction failures of current trapping methods, and rebuild the mechanism\nfrom the ground up. Our design delivers fast, reliable trapping without the\npitfalls of binary modification.\n",
"title": "Virtual Breakpoints for x86/64"
}
| null | null | null | null | true | null |
9290
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies mechanism of preconcentration of charged particles in a\nstraight micro-channel embedded with permselective membranes, by numerically\nsolving coupled transport equations of ions, charged particles and solvent\nfluid without any simplifying assumptions. It is demonstrated that trapping and\npreconcentration of charged particles are determined by the interplay between\ndrag force from the electroosmotic fluid flow and the electrophoretic force\napplied trough the electric field. Several insightful characteristics are\nrevealed, including the diverse dynamics of co-ions and counter ions,\nreplacement of co-ions by focused particles, lowered ion concentrations in\nparticle enriched zone, and enhanced electroosmotic pumping effect etc.\nConditions for particles that may be concentrated are identified in terms of\ncharges, sizes and electrophoretic mobilities of particles and co-ions.\nDependences of enrichment factor on cross-membrane voltage, initial particle\nconcentration and buffer ion concentrations are analyzed and the underlying\nreasons are elaborated. Finally, post priori a condition for validity of\ndecoupled simulation model is given based on charges carried by focused charge\nparticles and that by buffer co-ions. These results provide important guidance\nin the design and optimization of nanofluidic preconcentration and other\nrelated devices.\n",
"title": "Accurate Multi-physics Numerical Analysis of Particle Preconcentration Based on Ion Concentration Polarization"
}
| null | null | null | null | true | null |
9291
| null |
Default
| null | null |
null |
{
"abstract": " We present models for embedding words in the context of surrounding words.\nSuch models, which we refer to as token embeddings, represent the\ncharacteristics of a word that are specific to a given context, such as word\nsense, syntactic category, and semantic role. We explore simple, efficient\ntoken embedding models based on standard neural network architectures. We learn\ntoken embeddings on a large amount of unannotated text and evaluate them as\nfeatures for part-of-speech taggers and dependency parsers trained on much\nsmaller amounts of annotated data. We find that predictors endowed with token\nembeddings consistently outperform baseline predictors across a range of\ncontext window and training set sizes.\n",
"title": "Learning to Embed Words in Context for Syntactic Tasks"
}
| null | null | null | null | true | null |
9292
| null |
Default
| null | null |
null |
{
"abstract": " Data storage systems and their availability play a crucial role in\ncontemporary datacenters. Despite using mechanisms such as automatic fail-over\nin datacenters, the role of human agents and consequently their destructive\nerrors is inevitable. Due to very large number of disk drives used in exascale\ndatacenters and their high failure rates, the disk subsystem in storage systems\nhas become a major source of Data Unavailability (DU) and Data Loss (DL)\ninitiated by human errors. In this paper, we investigate the effect of\nIncorrect Disk Replacement Service (IDRS) on the availability and reliability\nof data storage systems. To this end, we analyze the consequences of IDRS in a\ndisk array, and conduct Monte Carlo simulations to evaluate DU and DL during\nmission time. The proposed modeling framework can cope with a) different\nstorage array configurations and b) Data Object Survivability (DOS),\nrepresenting the effect of system level redundancies such as remote backups and\nmirrors. In the proposed framework, the model parameters are obtained from\nindustrial and scientific reports alongside field data which have been\nextracted from a datacenter operating with 70 storage racks. The results show\nthat ignoring the impact of IDRS leads to unavailability underestimation by up\nto three orders of magnitude. Moreover, our study suggests that by considering\nthe effect of human errors, the conventional beliefs about the dependability of\ndifferent Redundant Array of Independent Disks (RAID) mechanisms should be\nrevised. The results show that RAID1 can result in lower availability compared\nto RAID5 in the presence of human errors. The results also show that employing\nautomatic fail-over policy (using hot spare disks) can reduce the drastic\nimpacts of human errors by two orders of magnitude.\n",
"title": "Modeling Impact of Human Errors on the Data Unavailability and Data Loss of Storage Systems"
}
| null | null | null | null | true | null |
9293
| null |
Default
| null | null |
null |
{
"abstract": " The theory of sparse stochastic processes offers a broad class of statistical\nmodels to study signals. In this framework, signals are represented as\nrealizations of random processes that are solution of linear stochastic\ndifferential equations driven by white Lévy noises. Among these processes,\ngeneralized Poisson processes based on compound-Poisson noises admit an\ninterpretation as random L-splines with random knots and weights. We\ndemonstrate that every generalized Lévy process-from Gaussian to sparse-can\nbe understood as the limit in law of a sequence of generalized Poisson\nprocesses. This enables a new conceptual understanding of sparse processes and\nsuggests simple algorithms for the numerical generation of such objects.\n",
"title": "Gaussian and Sparse Processes Are Limits of Generalized Poisson Processes"
}
| null | null | null | null | true | null |
9294
| null |
Default
| null | null |
null |
{
"abstract": " Princess Kaguya is a heroine of a famous folk tale, as every Japanese knows.\nShe was assumed to be confined in a bamboo cavity with cylindrical shape, and\nthen fortuitously discovered by an elderly man in the forest. Here, we pose a\nquestion as to how long she could have survived in an enclosed space such as\nthe bamboo chamber, which had no external oxygen supply at all. We demonstrate\nthat the survival time should be determined by three geometric quantities: the\ninner volume of the bamboo chamber, the volumetric size of her body, and her\nbody's total surface area that governs the rate of oxygen consumption in the\nbody. We also emphasize that this geometric problem shed light on an\ninteresting scaling relation between biological quantities for living\norganisms.\n",
"title": "Survival time of Princess Kaguya in an air-tight bamboo chamber"
}
| null | null | null | null | true | null |
9295
| null |
Default
| null | null |
null |
{
"abstract": " The Descriptor System Tools (DSTOOLS) is a collection of MATLAB functions for\nthe operation on and manipulation of rational transfer function matrices via\ntheir descriptor system realizations. The DSTOOLS collection relies on the\nControl System Toolbox and several mex-functions based on the Systems and\nControl Library SLICOT. Many of the implemented functions are based on the\ncomputational procedures described in Chapter 10 of the book: \"A. Varga,\nSolving Fault Diagnosis Problems - Linear Synthesis Techniques, Springer,\n2017\". This document is the User's Guide for the version V0.71 of DSTOOLS.\nFirst, we present the mathematical background on rational matrices and\ndescriptor systems. Then, we give in-depth information on the command syntax of\nthe main computational functions. Several examples illustrate the use of the\nmain functions of DSTOOLS.\n",
"title": "Descriptor System Tools (DSTOOLS) User's Guide"
}
| null | null | null | null | true | null |
9296
| null |
Default
| null | null |
null |
{
"abstract": " In the classic sparsity-driven problems, the fundamental L-1 penalty method\nhas been shown to have good performance in reconstructing signals for a wide\nrange of problems. However this performance relies on a good choice of penalty\nweight which is often found from empirical experiments. We propose an algorithm\ncalled the Laplacian variational automatic relevance determination (Lap-VARD)\nthat takes this penalty weight as a parameter of a prior Laplace distribution.\nOptimization of this parameter using an automatic relevance determination\nframework results in a balance between the sparsity and accuracy of signal\nreconstruction. Our algorithm is implemented in a transmission tomography model\nwith sparsity constraint in wavelet domain.\n",
"title": "Laplacian Prior Variational Automatic Relevance Determination for Transmission Tomography"
}
| null | null | null | null | true | null |
9297
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we propose a content-based recommendation approach to increase\nexposure to opposing beliefs and opinions. Our aim is to help provide users\nwith more diverse viewpoints on issues, which are discussed in partisan groups\nfrom different perspectives. Since due to the backfire effect, people's\noriginal beliefs tend to strengthen when challenged with counter evidence, we\nneed to expose them to opposing viewpoints at the right time. The preliminary\nwork presented here describes our first step into this direction. As\nillustrative showcase, we take the political debate on Twitter around the\npresidency of Donald Trump.\n",
"title": "Mitigating Confirmation Bias on Twitter by Recommending Opposing Views"
}
| null | null | null | null | true | null |
9298
| null |
Default
| null | null |
null |
{
"abstract": " The work describes a first-principles-based computational strategy for\nstudying structural phase transitions, and in particular, for determination of\nthe so-called Landau-Devonshire potential - the classical zero-temperature\nlimit of the Gibbs energy, expanded in terms of order parameters. It exploits\nthe configuration space attached to the eigenvectors of the modes frozen in the\nground state, rather than the space spanned by the unstable modes of the\nhigh-symmetry phase, as done usually. This allows us to carefully probe the\npart of the energy surface in the vicinity of the ground state, which is most\nrelevant for the properties of the ordered phase. We apply this procedure to\nBiFeO$_3$ and perform ab-initio calculations in order to determine potential\nenergy contributions associated with strain, polarization and oxygen octahedra\ntilt degrees of freedom, compatible with its two-formula unit cell periodic\nboundary conditions.\n",
"title": "First-principles based Landau-Devonshire potential for BiFeO$_3$"
}
| null | null |
[
"Physics"
] | null | true | null |
9299
| null |
Validated
| null | null |
null |
{
"abstract": " In [15], V. Jimenez and J. Llibre characterized, up to homeomorphism, the\nomega limit sets of analytic vector fields on the sphere and the projective\nplane. The authors also studied the same problem for open subsets of these\nsurfaces.\nUnfortunately, an essential lemma in their programme for general surfaces has\na gap. Although the proof of this lemma can be amended in the case of the\nsphere, the plane, the projective plane and the projective plane minus one\npoint (and therefore the characterizations for these surfaces in [8] are\ncorrect), the lemma is not generally true, see [15].\nConsequently, the topological characterization for analytic vector fields on\nopen subsets of the sphere and the projective plane is still pending. In this\npaper, we close this problem in the case of open subsets of the sphere.\n",
"title": "A topological characterization of the omega-limit sets of analytic vector fields on open subsets of the sphere"
}
| null | null | null | null | true | null |
9300
| null |
Default
| null | null |