text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null)
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " In this paper, a generalized nonlinear Camassa-Holm equation with time- and\nspace-dependent coefficients is considered. We show that the control of the\nhigher order dispersive term is possible by using an adequate weight function\nto define the energy. The existence and uniqueness of solutions are obtained\nvia a Picard iterative method.\n",
"title": "On the generalized nonlinear Camassa-Holm equation"
}
| null | null | null | null | true | null | 4701 | null | Default | null | null |
null |
{
"abstract": " Despite the growing popularity of 802.11 wireless networks, users often\nsuffer from connectivity problems and performance issues due to unstable radio\nconditions and dynamic user behavior among other reasons. Anomaly detection and\ndistinction are in the thick of major challenges that network managers\nencounter. Complication of monitoring the broaden and complex WLANs, that often\nrequires heavy instrumentation of the user devices, makes the anomaly detection\nanalysis even harder. In this paper we exploit 802.11 access point usage data\nand propose an anomaly detection technique based on Hidden Markov Model (HMM)\nand Universal Background Model (UBM) on data that is inexpensive to obtain. We\nthen generate a number of network anomalous scenarios in OMNeT++/INET network\nsimulator and compare the detection outcomes with those in baseline approaches\n(RawData and PCA). The experimental results show the superiority of HMM and\nHMM-UBM models in detection precision and sensitivity.\n",
"title": "802.11 Wireless Simulation and Anomaly Detection using HMM and UBM"
}
| null | null | ["Computer Science"] | null | true | null | 4702 | null | Validated | null | null |
null |
{
"abstract": " While most schemes for automatic cover song identification have focused on\nnote-based features such as HPCP and chord profiles, a few recent papers\nsurprisingly showed that local self-similarities of MFCC-based features also\nhave classification power for this task. Since MFCC and HPCP capture\ncomplementary information, we design an unsupervised algorithm that combines\nnormalized, beat-synchronous blocks of these features using cross-similarity\nfusion before attempting to locally align a pair of songs. As an added bonus,\nour scheme naturally incorporates structural information in each song to fill\nin alignment gaps where both feature sets fail. We show a striking jump in\nperformance over MFCC and HPCP alone, achieving a state of the art mean\nreciprocal rank of 0.87 on the Covers80 dataset. We also introduce a new\nmedium-sized hand designed benchmark dataset called \"Covers 1000,\" which\nconsists of 395 cliques of cover songs for a total of 1000 songs, and we show\nthat our algorithm achieves an MRR of 0.9 on this dataset for the first\ncorrectly identified song in a clique. We provide the precomputed HPCP and MFCC\nfeatures, as well as beat intervals, for all songs in the Covers 1000 dataset\nfor use in further research.\n",
"title": "Early MFCC And HPCP Fusion for Robust Cover Song Identification"
}
| null | null | null | null | true | null | 4703 | null | Default | null | null |
null |
{
"abstract": " We present an algorithm that ensures in finite time the gathering of two\nrobots in the non-rigid ASYNC model. To circumvent established impossibility\nresults, we assume robots are equipped with 2-colors lights and are able to\nmeasure distances between one another. Aside from its light, a robot has no\nmemory of its past actions, and its protocol is deterministic. Since, in the\nsame model, gathering is impossible when lights have a single color, our\nsolution is optimal with respect to the number of used colors.\n",
"title": "Optimally Gathering Two Robots"
}
| null | null | null | null | true | null | 4704 | null | Default | null | null |
null |
{
"abstract": " Let $(X,\\omega)$ be a compact Hermitian manifold of complex dimension $n$. In\nthis article, we first survey recent progress towards Grauert-Riemenschneider\ntype criterions. Secondly, we give a simplified proof of Boucksom's conjecture\ngiven by the author under the assumption that the Hermitian metric $\\omega$\nsatisfies $\\partial\\overline{\\partial}\\omega^l=$ for all $l$, i.e., if $T$ is a\nclosed positive current on $X$ such that $\\int_XT_{ac}^n>0$, then the class\n$\\{T\\}$ is big and $X$ is Kähler. Finally, as an easy observation, we point\nout that Nguyen's result can be generalized as follows: if\n$\\partial\\overline{\\partial}\\omega=0$, and $T$ is a closed positive current\nwith analytic singularities, such that $\\int_XT^n_{ac}>0$, then the class\n$\\{T\\}$ is big and $X$ is Kähler.\n",
"title": "On Grauert-Riemenschneider type criterions"
}
| null | null | null | null | true | null | 4705 | null | Default | null | null |
null |
{
"abstract": " Information and Communication Technology (ICT) has been playing a pivotal\nrole since the last decade in developing countries that brings citizen services\nto the doorsteps and connecting people. With this aspiration ICT has introduced\nseveral technologies of citizen services towards all categories of people. The\npurpose of this study is to examine the Governance technology perspectives for\npolitical party, emphasizing on the basic critical steps through which it could\nbe operationalized. We call it P-Governance. P-Governance shows technologies to\nensure governance, management, interaction communication in a political party\nby improving decision making processes using big data. P-Governance challenges\nthe competence perspective to apply itself more assiduously to\noperationalization, including the need to choose and give definition to one or\nmore units of analysis (of which the routine is a promising candidate). This\npaper is to focus on research challenges posed by competence to which\nP-Governance can and should respond include different strategy issues faced by\nparticular sections. Both the qualitative as well as quantitative research\napproaches were conducted. The standard of citizen services, choice &\nconsultation, courtesy & consultation, entrance & information, and value for\nmoney have found the positive relation with citizen's satisfaction. This study\nresults how can be technology make important roles on political movements in\ndeveloping countries using big data.\n",
"title": "P-Governance Technology: Using Big Data for Political Party Management"
}
| null | null | null | null | true | null | 4706 | null | Default | null | null |
null |
{
"abstract": " A number of visual question answering approaches have been proposed recently,\naiming at understanding the visual scenes by answering the natural language\nquestions. While the image question answering has drawn significant attention,\nvideo question answering is largely unexplored.\nVideo-QA is different from Image-QA since the information and the events are\nscattered among multiple frames. In order to better utilize the temporal\nstructure of the videos and the phrasal structures of the answers, we propose\ntwo mechanisms: the re-watching and the re-reading mechanisms and combine them\ninto the forgettable-watcher model. Then we propose a TGIF-QA dataset for video\nquestion answering with the help of automatic question generation. Finally, we\nevaluate the models on our dataset. The experimental results show the\neffectiveness of our proposed models.\n",
"title": "The Forgettable-Watcher Model for Video Question Answering"
}
| null | null | null | null | true | null | 4707 | null | Default | null | null |
null |
{
"abstract": " The $p$th degree Hilbert symbol $(\\cdot,\\cdot )_p:K^\\times/K^{\\times p}\\times\nK^\\times/K^{\\times p}\\to{}_p{\\rm Br}(K)$ from characteristic $\\neq p$ has two\nanalogues in characteristic $p$, $$[\\cdot,\\cdot )_p:K/\\wp (K)\\times\nK^\\times/K^{\\times p}\\to{}_p{\\rm Br}(K),$$ where $\\wp$ is the Artin-Schreier\nmap $x\\mapsto x^p-x$, and $$((\\cdot,\\cdot ))_p:K/K^p\\times K/K^p\\to{}_p{\\rm\nBr}(K).$$\nThe symbol $[\\cdot,\\cdot )_p$ generalizes to an analogue of $(\\cdot,\\cdot\n)_{p^n}$ via the Witt vectors, $$[\\cdot,\\cdot )_{p^n}:W_n(K)/\\wp (W_n(K))\\times\nK^\\times/K^{\\times p^n}\\to{}_{p^n}{\\rm Br}(K).$$\nHere $W_n(K)$ is the truncation of length $n$ of the ring of $p$-typical Witt\nwectors, i.e. $W_{\\{1,p,\\ldots,p^{n-1}\\}}(K)$.\nIn this paper we construct similar generalizations for $((\\cdot,\\cdot ))_p$.\nOur construction involves Witt vectors and Weyl algebras. In the process we\nobtain a new kind of Weyl algebras in characteristic $p$, with many interesting\nproperties.\nThe symbols we introduce, $((\\cdot,\\cdot ))_{p^n}$ and, more generally,\n$((\\cdot,\\cdot ))_{p^m,p^n}$, which here are defined in terms of central simple\nalgebras, coincide with the homonymous symbols we introduced in\n[arXiv:1711.00980] in terms of the symbols $[\\cdot,\\cdot )_{p^n}$. This will be\nproved in a future paper. In the present paper we only introduce the symbols\nand we prove that they have the same properties with the symbols from\n[arXiv:1711.00980]. These properies are enough to obtain the representation\ntheorem for ${}_{p^n}{\\rm Br}(K)$ from [arXiv:1711.00980], Theorem 4.10.\n",
"title": "Analogues of the $p^n$th Hilbert symbol in characteristic $p$ (updated)"
}
| null | null | null | null | true | null | 4708 | null | Default | null | null |
null |
{
"abstract": " We present GAMER-2, a GPU-accelerated adaptive mesh refinement (AMR) code for\nastrophysics. It provides a rich set of features, including adaptive\ntime-stepping, several hydrodynamic schemes, magnetohydrodynamics,\nself-gravity, particles, star formation, chemistry and radiative processes with\nGRACKLE, data analysis with yt, and memory pool for efficient object\nallocation. GAMER-2 is fully bitwise reproducible. For the performance\noptimization, it adopts hybrid OpenMP/MPI/GPU parallelization and utilizes\noverlapping CPU computation, GPU computation, and CPU-GPU communication. Load\nbalancing is achieved using a Hilbert space-filling curve on a level-by-level\nbasis without the need to duplicate the entire AMR hierarchy on each MPI\nprocess. To provide convincing demonstrations of the accuracy and performance\nof GAMER-2, we directly compare with Enzo on isolated disk galaxy simulations\nand with FLASH on galaxy cluster merger simulations. We show that the physical\nresults obtained by different codes are in very good agreement, and GAMER-2\noutperforms Enzo and FLASH by nearly one and two orders of magnitude,\nrespectively, on the Blue Waters supercomputers using $1-256$ nodes. More\nimportantly, GAMER-2 exhibits similar or even better parallel scalability\ncompared to the other two codes. We also demonstrate good weak and strong\nscaling using up to 4096 GPUs and 65,536 CPU cores, and achieve a uniform\nresolution as high as $10{,}240^3$ cells. Furthermore, GAMER-2 can be adopted\nas an AMR+GPUs framework and has been extensively used for the wave dark matter\n($\\psi$DM) simulations. GAMER-2 is open source (available at\nthis https URL) and new contributions are welcome.\n",
"title": "GAMER-2: a GPU-accelerated adaptive mesh refinement code -- accuracy, performance, and scalability"
}
| null | null | null | null | true | null | 4709 | null | Default | null | null |
null |
{
"abstract": " Assume $\\mathsf{M}_n$ is the $n$-dimensional permutation module for the\nsymmetric group $\\mathsf{S}_n$, and let $\\mathsf{M}_n^{\\otimes k}$ be its\n$k$-fold tensor power. The partition algebra $\\mathsf{P}_k(n)$ maps\nsurjectively onto the centralizer algebra\n$\\mathsf{End}_{\\mathsf{S}_n}(\\mathsf{M}_n^{\\otimes k})$ for all $k, n \\in\n\\mathbb{Z}_{\\ge 1}$ and isomorphically when $n \\ge 2k$. We describe the image\nof the surjection $\\Phi_{k,n}:\\mathsf{P}_k(n) \\to\n\\mathsf{End}_{\\mathsf{S}_n}(\\mathsf{M}_n^{\\otimes k})$ explicitly in terms of\nthe orbit basis of $\\mathsf{P}_k(n)$ and show that when $2k > n$ the kernel of\n$\\Phi_{k,n}$ is generated by a single essential idempotent $\\mathsf{e}_{k,n}$,\nwhich is an orbit basis element. We obtain a presentation for\n$\\mathsf{End}_{\\mathsf{S}_n}(\\mathsf{M}_n^{\\otimes k})$ by imposing one\nadditional relation, $\\mathsf{e}_{k,n} = 0$, to the standard presentation of\nthe partition algebra $\\mathsf{P}_k(n)$ when $2k > n$. As a consequence, we\nobtain the fundamental theorems of invariant theory for the symmetric group\n$\\mathsf{S}_n$. We show under the natural embedding of the partition algebra\n$\\mathsf{P}_n(n)$ into $\\mathsf{P}_k(n)$ for $k \\ge n$ that the essential\nidempotent $\\mathsf{e}_{n,n}$ generates the kernel of $\\Phi_{k,n}$. Therefore,\nthe relation $\\mathsf{e}_{n,n} = 0$ can replace $\\mathsf{e}_{k,n} = 0$ when $k\n\\ge n$.\n",
"title": "Partition algebras $\\mathsf{P}_k(n)$ with $2k>n$ and the fundamental theorems of invariant theory for the symmetric group $\\mathsf{S}_n$"
}
| null | null | null | null | true | null | 4710 | null | Default | null | null |
null |
{
"abstract": " In this paper, we consider distributed optimization design for resource\nallocation problems over weight-balanced graphs. With the help of singular\nperturbation analysis, we propose a simple sub-optimal continuous-time\noptimization algorithm. Moreover, we prove the existence and uniqueness of the\nalgorithm equilibrium, and then show the convergence with an exponential rate.\nFinally, we verify the sub-optimality of the algorithm, which can approach the\noptimal solution as an adjustable parameter tends to zero.\n",
"title": "Distributed sub-optimal resource allocation over weight-balanced graph via singular perturbation"
}
| null | null | null | null | true | null | 4711 | null | Default | null | null |
null |
{
"abstract": " The increasing uptake of distributed energy resources (DERs) in distribution\nsystems and the rapid advance of technology have established new scenarios in\nthe operation of low-voltage networks. In particular, recent trends in\ncryptocurrencies and blockchain have led to a proliferation of peer-to-peer\n(P2P) energy trading schemes, which allow the exchange of energy between the\nneighbors without any intervention of a conventional intermediary in the\ntransactions. Nevertheless, far too little attention has been paid to the\ntechnical constraints of the network under this scenario. A major challenge to\nimplementing P2P energy trading is that of ensuring that network constraints\nare not violated during the energy exchange. This paper proposes a methodology\nbased on sensitivity analysis to assess the impact of P2P transactions on the\nnetwork and to guarantee an exchange of energy that does not violate network\nconstraints. The proposed method is tested on a typical UK low-voltage network.\nThe results show that our method ensures that energy is exchanged between users\nunder the P2P scheme without violating the network constraints, and that users\ncan still capture the economic benefits of the P2P architecture.\n",
"title": "Decentralized P2P Energy Trading under Network Constraints in a Low-Voltage Network"
}
| null | null | null | null | true | null | 4712 | null | Default | null | null |
null |
{
"abstract": " Generative adversarial networks (GANs) are powerful tools for learning\ngenerative models. In practice, the training may suffer from lack of\nconvergence. GANs are commonly viewed as a two-player zero-sum game between two\nneural networks. Here, we leverage this game theoretic view to study the\nconvergence behavior of the training process. Inspired by the fictitious play\nlearning process, a novel training method, referred to as Fictitious GAN, is\nintroduced. Fictitious GAN trains the deep neural networks using a mixture of\nhistorical models. Specifically, the discriminator (resp. generator) is updated\naccording to the best-response to the mixture outputs from a sequence of\npreviously trained generators (resp. discriminators). It is shown that\nFictitious GAN can effectively resolve some convergence issues that cannot be\nresolved by the standard training approach. It is proved that asymptotically\nthe average of the generator outputs has the same distribution as the data\nsamples.\n",
"title": "Fictitious GAN: Training GANs with Historical Models"
}
| null | null | null | null | true | null | 4713 | null | Default | null | null |
null |
{
"abstract": " Understanding the pseudogap phase in hole-doped high temperature cuprate\nsuperconductors remains a central challenge in condensed matter physics. From a\nhost of recent experiments there is now compelling evidence of translational\nsymmetry breaking charge density wave (CDW) order in a wide range of doping\ninside this phase. Two distinct types of incommensurate charge order --\nbidirectional at zero or low magnetic fields and unidirectional at high\nmagnetic fields close to the upper critical field $H_{c2}$ -- have been\nreported so far in approximately the same doping range between $p\\simeq 0.08$\nand $p\\simeq 0.16$. In concurrent developments, recent high field Hall\nexperiments have also revealed two indirect but striking signatures of Fermi\nsurface reconstruction in the pseudogap phase, namely, a sign change of the\nHall coefficient to negative values at low temperatures at intermediate range\nof hole doping and a rapid suppression of the positive Hall number without\nchange in sign near optimal doping $p \\sim 0.19$. We show that the assumption\nof a unidirectional incommensurate CDW (with or without a coexisting weak\nbidirectional order) at high magnetic fields near optimal doping and a\ncoexistence of both types of orders of approximately equal magnitude at high\nmagnetic fields at intermediate range of doping may help explain the striking\nbehavior of low temperature Hall effect in the entire pseudogap phase.\n",
"title": "Suppression of Hall number due to charge density wave order in high-$T_c$ cuprates"
}
| null | null | null | null | true | null | 4714 | null | Default | null | null |
null |
{
"abstract": " Bose-Einstein condensates with tunable interatomic interactions have been\nstudied intensely in recent experiments. The investigation of the collapse of a\ncondensate following a sudden change in the nature of the interaction from\nrepulsive to attractive has led to the observation of a remnant condensate that\ndid not undergo further collapse. We suggest that this high-density remnant is\nin fact the absolute minimum of the energy, if the attractive atomic\ninteractions are nonlocal, and is therefore inherently stable. We show that a\nvariational trial function consisting of a superposition of two distinct\ngaussians is an accurate representation of the wavefunction of the ground state\nof the conventional local Gross-Pitaevskii field equation for an attractive\ncondensate and gives correctly the points of emergence of instability. We then\nuse such a superposition of two gaussians as a variational trial function in\norder to calculate the minima of the energy when it includes a nonlocal\ninteraction term. We use experimental data in order to study the long range of\nthe nonlocal interaction, showing that they agree very well with a\ndimensionally derived expression for this range.\n",
"title": "The Wavefunction of the Collapsing Bose-Einstein Condensate"
}
| null | null | null | null | true | null | 4715 | null | Default | null | null |
null |
{
"abstract": " We propose a principled method for kernel learning, which relies on a\nFourier-analytic characterization of translation-invariant or\nrotation-invariant kernels. Our method produces a sequence of feature maps,\niteratively refining the SVM margin. We provide rigorous guarantees for\noptimality and generalization, interpreting our algorithm as online\nequilibrium-finding dynamics in a certain two-player min-max game. Evaluations\non synthetic and real-world datasets demonstrate scalability and consistent\nimprovements over related random features-based methods.\n",
"title": "Not-So-Random Features"
}
| null | null | null | null | true | null | 4716 | null | Default | null | null |
null |
{
"abstract": " In recent years, there has been a surge of interest in developing deep\nlearning methods for non-Euclidean structured data such as graphs. In this\npaper, we propose Dual-Primal Graph CNN, a graph convolutional architecture\nthat alternates convolution-like operations on the graph and its dual. Our\napproach allows to learn both vertex- and edge features and generalizes the\nprevious graph attention (GAT) model. We provide extensive experimental\nvalidation showing state-of-the-art results on a variety of tasks tested on\nestablished graph benchmarks, including CORA and Citeseer citation networks as\nwell as MovieLens, Flixter, Douban and Yahoo Music graph-guided recommender\nsystems.\n",
"title": "Dual-Primal Graph Convolutional Networks"
}
| null | null | null | null | true | null | 4717 | null | Default | null | null |
null |
{
"abstract": " Direct cDNA preamplification protocols developed for single-cell RNA-seq\n(scRNA-seq) have enabled transcriptome profiling of rare cells without having\nto pool multiple samples or to perform RNA extraction. We term this approach\nlimiting-cell RNA-seq (lcRNA-seq). Unlike scRNA-seq, which focuses on\n'cell-atlasing', lcRNA-seq focuses on identifying differentially expressed\ngenes (DEGs) between experimental groups. This requires accounting for systems\nnoise which can obscure biological differences. We present CLEAR, a workflow\nthat identifies robust transcripts in lcRNA-seq data for between-group\ncomparisons. To develop CLEAR, we compared DEGs from RNA extracted from\nFACS-derived CD5+ and CD5- cells from a single chronic lymphocytic leukemia\npatient diluted to input RNA levels of 10-, 100- and 1,000pg. Data quality at\nultralow input levels are known to be noisy. When using CLEAR transcripts vs.\nusing all available transcripts, downstream analyses reveal more shared DEGs,\nimproved Principal Component Analysis separation of cell type, and increased\nsimilarity between results across different input RNA amounts. CLEAR was\napplied to two publicly available ultralow input RNA-seq data and an in-house\nmurine neural cell lcRNA-seq dataset. CLEAR provides a novel way to visualize\nthe public datasets while validates cell phenotype markers for astrocytes,\nneural stem and progenitor cells.\n",
"title": "CLEAR: Coverage-based Limiting-cell Experiment Analysis for RNA-seq"
}
| null | null | null | null | true | null | 4718 | null | Default | null | null |
null |
{
"abstract": " This paper proposes a novel model for the rating prediction task in\nrecommender systems which significantly outperforms previous state-of-the art\nmodels on a time-split Netflix data set. Our model is based on deep autoencoder\nwith 6 layers and is trained end-to-end without any layer-wise pre-training. We\nempirically demonstrate that: a) deep autoencoder models generalize much better\nthan the shallow ones, b) non-linear activation functions with negative parts\nare crucial for training deep models, and c) heavy use of regularization\ntechniques such as dropout is necessary to prevent over-fiting. We also propose\na new training algorithm based on iterative output re-feeding to overcome\nnatural sparseness of collaborate filtering. The new algorithm significantly\nspeeds up training and improves model performance. Our code is available at\nthis https URL\n",
"title": "Training Deep AutoEncoders for Collaborative Filtering"
}
| null | null | null | null | true | null | 4719 | null | Default | null | null |
null |
{
"abstract": " The sample matrix inversion (SMI) beamformer implements Capon's minimum\nvariance distortionless (MVDR) beamforming using the sample covariance matrix\n(SCM). In a snapshot limited environment, the SCM is poorly conditioned\nresulting in a suboptimal performance from the SMI beamformer. Imposing\nstructural constraints on the SCM estimate to satisfy known theoretical\nproperties of the ensemble MVDR beamformer mitigates the impact of limited\nsnapshots on the SMI beamformer performance. Toeplitz rectification and\nbounding the norm of weight vector are common approaches for such constrains.\nThis paper proposes the unit circle rectification technique which constraints\nthe SMI beamformer to satisfy a property of the ensemble MVDR beamformer: for\nnarrowband planewave beamforming on a uniform linear array, the zeros of the\nMVDR weight array polynomial must fall on the unit circle. Numerical\nsimulations show that the resulting unit circle MVDR (UC MVDR) beamformer\nfrequently improves the suppression of both discrete interferers and white\nbackground noise compared to the classic SMI beamformer. Moreover, the UC MVDR\nbeamformer is shown to suppress discrete interferers better than the MVDR\nbeamformer diagonally loaded to maximize the SINR.\n",
"title": "Unit circle rectification of the MVDR beamformer"
}
| null | null | null | null | true | null | 4720 | null | Default | null | null |
null |
{
"abstract": " Wind has the potential to make a significant contribution to future energy\nresources. Locating the sources of this renewable energy on a global scale is\nhowever extremely challenging, given the difficulty to store very large data\nsets generated by modern computer models. We propose a statistical model that\naims at reproducing the data-generating mechanism of an ensemble of runs via a\nStochastic Generator (SG) of global annual wind data. We introduce an\nevolutionary spectrum approach with spatially varying parameters based on\nlarge-scale geographical descriptors such as altitude to better account for\ndifferent regimes across the Earth's orography. We consider a multi-step\nconditional likelihood approach to estimate the parameters that explicitly\naccounts for nonstationary features while also balancing memory storage and\ndistributed computation. We apply the proposed model to more than 18 million\npoints of yearly global wind speed. The proposed SG requires orders of\nmagnitude less storage for generating surrogate ensemble members from wind than\ndoes creating additional wind fields from the climate model, even if an\neffective lossy data compression algorithm is applied to the simulation output.\n",
"title": "Reducing Storage of Global Wind Ensembles with Stochastic Generators"
}
| null | null | null | null | true | null | 4721 | null | Default | null | null |
null |
{
"abstract": " Widely used income inequality measure, Gini index is extended to form a\nfamily of income inequality measures known as Single-Series Gini (S-Gini)\nindices. In this study, we develop empirical likelihood (EL) and jackknife\nempirical likelihood (JEL) based inference for S-Gini indices. We prove that\nthe limiting distribution of both EL and JEL ratio statistics are Chi-square\ndistribution with one degree of freedom. Using the asymptotic distribution we\nconstruct EL and JEL based confidence intervals for realtive S-Gini indices. We\nalso give bootstrap-t and bootstrap calibrated empirical likelihood confidence\nintervals for S-Gini indices. A numerical study is carried out to compare the\nperformances of the proposed confidence interval with the bootstrap methods. A\ntest for S-Gini indices based on jackknife empirical likelihood ratio is also\nproposed. Finally we illustrate the proposed method using an income data.\n",
"title": "Jackknife Empirical Likelihood-based inference for S-Gini indices"
}
| null | null | null | null | true | null | 4722 | null | Default | null | null |
null |
{
"abstract": " This paper presents a constructive algorithm that achieves successful\none-shot learning of hidden spike-patterns in a competitive detection task. It\nhas previously been shown (Masquelier et al., 2008) that spike-timing-dependent\nplasticity (STDP) and lateral inhibition can result in neurons competitively\ntuned to repeating spike-patterns concealed in high rates of overall\npresynaptic activity. One-shot construction of neurons with synapse weights\ncalculated as estimates of converged STDP outcomes results in immediate\nselective detection of hidden spike-patterns. The capability of continual\nlearning is demonstrated through the successful one-shot detection of new sets\nof spike-patterns introduced after long intervals in the simulation time.\nSimulation expansion (Lightheart et al., 2013) has been proposed as an approach\nto the development of constructive algorithms that are compatible with\nsimulations of biological neural networks. A simulation of a biological neural\nnetwork may have orders of magnitude fewer neurons and connections than the\nrelated biological neural systems; therefore, simulated neural networks can be\nassumed to be a subset of a larger neural system. The constructive algorithm is\ndeveloped using simulation expansion concepts to perform an operation\nequivalent to the exchange of neurons between the simulation and the larger\nhypothetical neural system. The dynamic selection of neurons to simulate within\na larger neural system (hypothetical or stored in memory) may be a starting\npoint for a wide range of developments and applications in machine learning and\nthe simulation of biology.\n",
"title": "Continual One-Shot Learning of Hidden Spike-Patterns with Neural Network Simulation Expansion and STDP Convergence Predictions"
}
| null | null | null | null | true | null | 4723 | null | Default | null | null |
null |
{
"abstract": " In the fields of neuroimaging and genetics, a key goal is testing the\nassociation of a single outcome with a very high-dimensional imaging or genetic\nvariable. Often, summary measures of the high-dimensional variable are created\nto sequentially test and localize the association with the outcome. In some\ncases, the results for summary measures are significant, but subsequent tests\nused to localize differences are underpowered and do not identify regions\nassociated with the outcome. Here, we propose a generalization of Rao's score\ntest based on projecting the score statistic onto a linear subspace of a\nhigh-dimensional parameter space. In addition, we provide methods to localize\nsignal in the high-dimensional space by projecting the scores to the subspace\nwhere the score test was performed. This allows for inference in the\nhigh-dimensional space to be performed on the same degrees of freedom as the\nscore test, effectively reducing the number of comparisons. Simulation results\ndemonstrate the test has competitive power relative to others commonly used. We\nillustrate the method by analyzing a subset of the Alzheimer's Disease\nNeuroimaging Initiative dataset. Results suggest cortical thinning of the\nfrontal and temporal lobes may be a useful biological marker of Alzheimer's\nrisk.\n",
"title": "Interpretable High-Dimensional Inference Via Score Projection with an Application in Neuroimaging"
}
| null | null | ["Mathematics", "Statistics"] | null | true | null | 4724 | null | Validated | null | null |
null |
{
"abstract": " We establish a boundary maximum principle for free boundary minimal\nsubmanifolds in a Riemannian manifold with boundary, in any dimension and\ncodimension. Our result holds more generally in the context of varifolds.\n",
"title": "A maximum principle for free boundary minimal varieties of arbitrary codimension"
}
| null | null | null | null | true | null | 4725 | null | Default | null | null |
null |
{
"abstract": " Spectral sparsification is a general technique developed by Spielman et al.\nto reduce the number of edges in a graph while retaining its structural\nproperties. We investigate the use of spectral sparsification to produce good\nvisual representations of big graphs. We evaluate spectral sparsification\napproaches on real-world and synthetic graphs. We show that spectral\nsparsifiers are more effective than random edge sampling. Our results lead to\nguidelines for using spectral sparsification in big graph visualization.\n",
"title": "Drawing Big Graphs using Spectral Sparsification"
}
| null | null | null | null | true | null | 4726 | null | Default | null | null |
null |
{
"abstract": " We analyze the statistics of the shortest and fastest paths on the road\nnetwork between randomly sampled end points. To a good approximation, these\noptimal paths are found to be directed in that their lengths (at large scales)\nare linearly proportional to the absolute distance between them. This motivates\ncomparisons to universal features of directed polymers in random media. There\nare similarities in scalings of fluctuations in length/time and transverse\nwanderings, but also important distinctions in the scaling exponents, likely\ndue to long-range correlations in geographic and man-made features. At short\nscales the optimal paths are not directed due to circuitous excursions governed\nby a fat-tailed (power-law) probability distribution.\n",
"title": "Optimal paths on the road network as directed polymers"
}
| null | null | null | null | true | null | 4727 | null | Default | null | null |
null |
{
"abstract": " In this work we investigate the optimal proportional reinsurance-investment\nstrategy of an insurance company which wishes to maximize the expected\nexponential utility of its terminal wealth in a finite time horizon. Our goal\nis to extend the classical Cramer-Lundberg model introducing a stochastic\nfactor which affects the intensity of the claims arrival process, described by\na Cox process, as well as the insurance and reinsurance premia. Using the\nclassical stochastic control approach based on the Hamilton-Jacobi-Bellman\nequation we characterize the optimal strategy and provide a verification result\nfor the value function via classical solutions of two backward partial\ndifferential equations. Existence and uniqueness of these solutions are\ndiscussed. Results under various premium calculation principles are illustrated\nand a new premium calculation rule is proposed in order to get more realistic\nstrategies and to better fit our stochastic factor model. Finally, numerical\nsimulations are performed to obtain sensitivity analyses.\n",
"title": "Optimal proportional reinsurance and investment for stochastic factor models"
}
| null | null | null | null | true | null | 4728 | null | Default | null | null |
null |
{
"abstract": " The Internet infrastructure relies entirely on open standards for its routing\nprotocols. However, the majority of routers on the Internet are closed-source.\nHence, there is no straightforward way to analyze them. Specifically, one\ncannot easily identify deviations of a router's routing functionality from the\nrouting protocol's standard. Such deviations (either deliberate or inadvertent)\nare particularly important to identify since they may degrade the security or\nresiliency of the network.\nA model-based testing procedure is a technique that allows to systematically\ngenerate tests based on a model of the system to be tested; thereby finding\ndeviations in the system compared to the model. However, applying such an\napproach to a complex multi-party routing protocol requires a prohibitively\nhigh number of tests to cover the desired functionality. We propose efficient\nand practical optimizations to the model-based testing procedure that are\ntailored to the analysis of routing protocols. These optimizations allow to\ndevise a formal black-box method to unearth deviations in closed-source routing\nprotocols' implementations. The method relies only on the ability to test the\ntargeted protocol implementation and observe its output. Identification of the\ndeviations is fully automatic.\nWe evaluate our method against one of the complex and widely used routing\nprotocols on the Internet -- OSPF. We search for deviations in the OSPF\nimplementation of Cisco. Our evaluation identified numerous significant\ndeviations that can be abused to compromise the security of a network. The\ndeviations were confirmed by Cisco. We further employed our method to analyze\nthe OSPF implementation of the Quagga Routing Suite. The analysis revealed one\nsignificant deviation. Subsequent to the disclosure of the deviations some of\nthem were also identified by IBM, Lenovo and Huawei in their own products.\n",
"title": "Formal Black-Box Analysis of Routing Protocol Implementations"
}
| null | null | null | null | true | null | 4729 | null | Default | null | null |
null |
{
"abstract": " Quantitative loop invariants are an essential element in the verification of\nprobabilistic programs. Recently, multivariate Lagrange interpolation has been\napplied to synthesizing polynomial invariants. In this paper, we propose an\nalternative approach. First, we fix a polynomial template as a candidate of a\nloop invariant. Using Stengle's Positivstellensatz and a transformation to a\nsum-of-squares problem, we find sufficient conditions on the coefficients.\nThen, we solve a semidefinite programming feasibility problem to synthesize the\nloop invariants. If the semidefinite program is unfeasible, we backtrack after\nincreasing the degree of the template. Our approach is semi-complete in the\nsense that it will always lead us to a feasible solution if one exists and\nnumerical errors are small. Experimental results show the efficiency of our\napproach.\n",
"title": "Finding polynomial loop invariants for probabilistic programs"
}
| null | null | ["Computer Science"] | null | true | null | 4730 | null | Validated | null | null |
null |
{
"abstract": " This paper is on active learning where the goal is to reduce the data\nannotation burden by interacting with a (human) oracle during training.\nStandard active learning methods ask the oracle to annotate data samples.\nInstead, we take a profoundly different approach: we ask for annotations of the\ndecision boundary. We achieve this using a deep generative model to create\nnovel instances along a 1d line. A point on the decision boundary is revealed\nwhere the instances change class. Experimentally we show on three data sets\nthat our method can be plugged-in to other active learning schemes, that human\noracles can effectively annotate points on the decision boundary, that our\nmethod is robust to annotation noise, and that decision boundary annotations\nimprove over annotating data samples.\n",
"title": "Active Decision Boundary Annotation with Deep Generative Models"
}
| null | null | ["Computer Science"] | null | true | null | 4731 | null | Validated | null | null |
null |
{
"abstract": " We theoretically and experimentally demonstrate a multifrequency excitation\nand detection scheme in apertureless near field optical microscopy, that\nexceeds current state of the art sensitivity and background suppression. By\nexciting the AFM tip at its two first flexural modes, and demodulating the\ndetected signal at the harmonics of their sum, we extract a near field signal\nwith a twofold improved sensitivity and deep sub-wavelength resolution,\nreaching $\\lambda/230$. Furthermore, the method offers rich control over\nexperimental degrees of freedom, expanding the parameter space for achieving\ncomplete optical background suppression. This approach breaks the ground for\nnon-interferometric complete phase and amplitude retrieval of the near field\nsignal, and is suitable for any multimodal excitation and higher harmonic\ndemodulation.\n",
"title": "Multifrequency Excitation and Detection Scheme in Apertureless Scattering Near Field Scanning Optical Microscopy"
}
| null | null | null | null | true | null | 4732 | null | Default | null | null |
null |
{
"abstract": " We consider the Kitaev chain model with finite and infinite range in the\nhopping and pairing parameters, looking in particular at the appearance of\nMajorana zero energy modes and massive edge modes. We study the system both in\nthe presence and in the absence of time reversal symmetry, by means of\ntopological invariants and exact diagonalization, disclosing very rich phase\ndiagrams. In particular, for extended hopping and pairing terms, we can get as\nmany Majorana modes at each end of the chain as the neighbors involved in the\ncouplings. Finally we generalize the transfer matrix approach useful to\ncalculate the zero-energy Majorana modes at the edges for a generic number of\ncoupled neighbors.\n",
"title": "Extended Kitaev chain with longer-range hopping and pairing"
}
| null | null | null | null | true | null | 4733 | null | Default | null | null |
null |
{
"abstract": " We study the motion of isentropic gas in nozzles. This is a major subject in\nfluid dynamics. In fact, the nozzle is utilized to increase the thrust of\nrocket engines. Moreover, the nozzle flow is closely related to astrophysics.\nThese phenomena are governed by the compressible Euler equation, which is one\nof crucial equations in inhomogeneous conservation laws.\nIn this paper, we consider its unsteady flow and devote to proving the global\nexistence and stability of solutions to the Cauchy problem for the general\nnozzle. The theorem has been proved in (Tsuge in Arch. Ration. Mech. Anal.\n209:365-400 (2013)). However, this result is limited to small data. Our aim in\nthe present paper is to remove this restriction, that is, we consider large\ndata. Although the subject is important in Mathematics, Physics and\nengineering, it remained open for a long time. The problem seems to lie in a\nbounded estimate of approximate solutions, because we have only method to\ninvestigate the behavior with respect to the time variable. To solve this, we\nfirst introduce a generalized invariant region. Compared with the existing\nones, its upper and lower bounds are extended constants to functions of the\nspace variable. However, we cannot apply the new invariant region to the\ntraditional difference method. Therefore, we invent the modified Godunov\nscheme. The approximate solutions consist of some functions corresponding to\nthe upper and lower bounds of the invariant regions. These methods enable us to\ninvestigate the behavior of approximate solutions with respect to the space\nvariable. The ideas are also applicable to other nonlinear problems involving\nsimilar difficulties.\n",
"title": "Global entropy solutions to the compressible Euler equations in the isentropic nozzle flow for large data: Application of the modified Godunov scheme and the generalized invariant regions"
}
| null | null | null | null | true | null | 4734 | null | Default | null | null |
null |
{
"abstract": " Atrial fibrillation (AF) is the most common form of arrhythmia with\naccelerated and irregular heart rate (HR), leading to both heart failure and\nstroke and being responsible for an increase in cardiovascular morbidity and\nmortality. In spite of its importance, the direct effects of AF on the arterial\nhemodynamic patterns are not completely known to date. Based on a multiscale\nmodelling approach, the proposed work investigates the effects of AF on the\nlocal arterial fluid dynamics. AF and normal sinus rhythm (NSR) conditions are\nsimulated extracting 2000 $\\mathrm{RR}$ heartbeats and comparing the most\nrelevant cardiac and vascular parameters at the same HR (75 bpm). Present\noutcomes evidence that the arterial system is not able to completely absorb the\nAF-induced variability, which can be even amplified towards the peripheral\ncirculation. AF is also able to locally alter the wave dynamics, by modifying\nthe interplay between forward and backward signals. The sole heart rhythm\nvariation (i.e., from NSR to AF) promotes an alteration of the regular dynamics\nat the arterial level which, in terms of pressure and peripheral perfusion,\nsuggests a modification of the physiological phenomena ruled by periodicity\n(e.g., regular organ perfusion)and a possible vascular dysfunction due to the\nprolonged exposure to irregular and extreme values. The present study\nrepresents a first modeling approach to characterize the variability of\narterial hemodynamics in presence of AF, which surely deserves further clinical\ninvestigation.\n",
"title": "Effects of atrial fibrillation on the arterial fluid dynamics: a modelling perspective"
}
| null | null | null | null | true | null | 4735 | null | Default | null | null |
null |
{
"abstract": " We study controllability of a Partial Differential Equation of transport\ntype, that arises in crowd models. We are interested in controlling such system\nwith a control being a Lipschitz vector field on a fixed control set $\\omega$.\nWe prove that, for each initial and final configuration, one can steer one to\nanother with such class of controls only if the uncontrolled dynamics allows to\ncross the control set $\\omega$. We also prove a minimal time result for such\nsystems. We show that the minimal time to steer one initial configuration to\nanother is related to the condition of having enough mass in $\\omega$ to feed\nthe desired final configuration.\n",
"title": "Controllability and optimal control of the transport equation with a localized vector field"
}
| null | null | null | null | true | null | 4736 | null | Default | null | null |
null |
{
"abstract": " Gravitational instabilities (GIs) are most likely a fundamental process\nduring the early stages of protoplanetary disc formation. Recently, there have\nbeen detections of spiral features in young, embedded objects that appear\nconsistent with GI-driven structure. It is crucial to perform hydrodynamic and\nradiative transfer simulations of gravitationally unstable discs in order to\nassess the validity of GIs in such objects, and constrain optimal targets for\nfuture observations. We utilise the radiative transfer code LIME to produce\ncontinuum emission maps of a $0.17\\,\\mathrm{M}_{\\odot}$ self-gravitating\nprotosolar-like disc. We note the limitations of using LIME as is and explore\nmethods to improve upon the default gridding. We use CASA to produce synthetic\nobservations of 270 continuum emission maps generated across different\nfrequencies, inclinations and dust opacities. We find that the spiral structure\nof our protosolar-like disc model is distinguishable across the majority of our\nparameter space after 1 hour of observation, and is especially prominent at\n230$\\,$GHz due to the favourable combination of angular resolution and\nsensitivity. Disc mass derived from the observations is sensitive to the\nassumed dust opacities and temperatures, and therefore can be underestimated by\na factor of at least 30 at 850$\\,$GHz and 2.5 at 90$\\,$GHz. As a result, this\neffect could retrospectively validate GIs in discs previously thought not\nmassive enough to be gravitationally unstable, which could have a significant\nimpact on the understanding of the formation and evolution of protoplanetary\ndiscs.\n",
"title": "Gravitational instabilities in a protosolar-like disc II: continuum emission and mass estimates"
}
| null | null | null | null | true | null | 4737 | null | Default | null | null |
null |
{
"abstract": " We suggest a model of a multi-agent society of decision makers taking\ndecisions being based on two criteria, one is the utility of the prospects and\nthe other is the attractiveness of the considered prospects. The model is the\ngeneralization of quantum decision theory, developed earlier for single\ndecision makers realizing one-step decisions, in two principal aspects. First,\nseveral decision makers are considered simultaneously, who interact with each\nother through information exchange. Second, a multistep procedure is treated,\nwhen the agents exchange information many times. Several decision makers\nexchanging information and forming their judgement, using quantum rules, form a\nkind of a quantum information network, where collective decisions develop in\ntime as a result of information exchange. In addition to characterizing\ncollective decisions that arise in human societies, such networks can describe\ndynamical processes occurring in artificial quantum intelligence composed of\nseveral parts or in a cluster of quantum computers. The practical usage of the\ntheory is illustrated on the dynamic disjunction effect for which three\nquantitative predictions are made: (i) the probabilistic behavior of decision\nmakers at the initial stage of the process is described; (ii) the decrease of\nthe difference between the initial prospect probabilities and the related\nutility factors is proved; (iii) the existence of a common consensus after\nmultiple exchange of information is predicted. The predicted numerical values\nare in very good agreement with empirical data.\n",
"title": "Information Processing by Networks of Quantum Decision Makers"
}
| null | null | ["Computer Science"] | null | true | null | 4738 | null | Validated | null | null |
null |
{
"abstract": " Recent initiatives by regulatory agencies to increase spectrum resources\navailable for broadband access include rules for sharing spectrum with\nhigh-priority incumbents. We study a model in which wireless Service Providers\n(SPs) charge for access to their own exclusive-use (licensed) band along with\naccess to an additional shared band. The total, or delivered price in each band\nis the announced price plus a congestion cost, which depends on the load, or\ntotal users normalized by the bandwidth. The shared band is intermittently\navailable with some probability, due to incumbent activity, and when\nunavailable, any traffic carried on that band must be shifted to licensed\nbands. The SPs then compete for quantity of users. We show that the value of\nthe shared band depends on the relative sizes of the SPs: large SPs with more\nbandwidth are better able to absorb the variability caused by intermittency\nthan smaller SPs. However, as the amount of shared spectrum increases, the\nlarge SPs may not make use of it. In that scenario shared spectrum creates more\nvalue than splitting it among the SPs for exclusive use. We also show that\nfixing the average amount of available shared bandwidth, increasing the\nreliability of the band is preferable to increasing the bandwidth.\n",
"title": "The Value of Sharing Intermittent Spectrum"
}
| null | null | ["Computer Science"] | null | true | null | 4739 | null | Validated | null | null |
null |
{
"abstract": " We study the origin of layer dependence in band structures of two-dimensional\nmaterials. We find that the layer dependence, at the density functional theory\n(DFT) level, is a result of quantum confinement and the non-linearity of the\nexchange-correlation functional. We use this to develop an efficient scheme for\nperforming DFT and GW calculations of multilayer systems. We show that the DFT\nand quasiparticle band structures of a multilayer system can be derived from a\nsingle calculation on a monolayer of the material. We test this scheme on\nmultilayers of MoS$_2$, graphene and phosphorene. This new scheme yields\nresults in excellent agreement with the standard methods at a fraction of the\ncomputation cost. This helps overcome the challenge of performing fully\nconverged GW calculations on multilayers of 2D materials, particularly in the\ncase of transition metal dichalcogenides which involve very stringent\nconvergence parameters.\n",
"title": "Origin of layer dependence in band structures of two-dimensional materials"
}
| null | null | ["Physics"] | null | true | null | 4740 | null | Validated | null | null |
null |
{
"abstract": " A complete proof is given of relative interpretability of Adjunctive Set\nTheory with Extensionality in an elementary concatenation theory.\n",
"title": "From Strings to Sets"
}
| null | null | ["Mathematics"] | null | true | null | 4741 | null | Validated | null | null |
null |
{
"abstract": " We prove that H-type Carnot groups of rank $k$ and dimension $n$ satisfy the\n$\\mathrm{MCP}(K,N)$ if and only if $K\\leq 0$ and $N \\geq k+3(n-k)$. The latter\ninteger coincides with the geodesic dimension of the Carnot group. The same\nresult holds true for the larger class of generalized H-type Carnot groups\nintroduced in this paper, and for which we compute explicitly the optimal\nsynthesis. This constitutes the largest class of Carnot groups for which the\ncurvature exponent coincides with the geodesic dimension. We stress that\ngeneralized H-type Carnot groups have step 2, include all corank 1 groups and,\nin general, admit abnormal minimizing curves.\nAs a corollary, we prove the absolute continuity of the Wasserstein geodesics\nfor the quadratic cost on all generalized H-type Carnot groups.\n",
"title": "Sharp measure contraction property for generalized H-type Carnot groups"
}
| null | null | ["Mathematics"] | null | true | null | 4742 | null | Validated | null | null |
null |
{
"abstract": " We generalize the twisted quantum double model of topological orders in two\ndimensions to the case with boundaries by systematically constructing the\nboundary Hamiltonians. Given the bulk Hamiltonian defined by a gauge group $G$\nand a three-cocycle in the third cohomology group of $G$ over $U(1)$, a\nboundary Hamiltonian can be defined by a subgroup $K$ of $G$ and a two-cochain\nin the second cochain group of $K$ over $U(1)$. The consistency between the\nbulk and boundary Hamiltonians is dictated by what we call the Frobenius\ncondition that constrains the two-cochain given the three-cocyle. We offer a\nclosed-form formula computing the ground state degeneracy of the model on a\ncylinder in terms of the input data only, which can be naturally generalized to\nsurfaces with more boundaries. We also explicitly write down the ground-state\nwavefunction of the model on a disk also in terms of the input data only.\n",
"title": "Twisted Quantum Double Model of Topological Orders with Boundaries"
}
| null | null | ["Physics", "Mathematics"] | null | true | null | 4743 | null | Validated | null | null |
null |
{
"abstract": " We study the fragmentation-coagulation (or merging and splitting)\nevolutionary control model as introduced recently by one of the authors, where\n$N$ small players can form coalitions to resist to the pressure exerted by the\nprincipal. It is a Markov chain in continuous time and the players have a\ncommon reward to optimize. We study the behavior as $N$ grows and show that the\nproblem converges to a (one player) deterministic optimization problem in\ncontinuous time, in the infinite dimensional state space.\n",
"title": "Evolutionary game of coalition building under external pressure"
}
| null | null | ["Mathematics"] | null | true | null | 4744 | null | Validated | null | null |
null |
{
"abstract": " In Bagchi (2010) main effect plans \"orthogonal through the block factor\"\n(POTB) have been constructed. The main advantages of a POTB are that (a) it may\nexist in a set up where an \"usual\" orthogonal main effect plan (OMEP) cannot\nexist and (b) the data analysis is nearly as simple as an OMEP. In the present\npaper we extend this idea and define the concept of orthogonality between a\npair of factorial effects ( main effects or interactions) \"through the block\nfactor\" in the context of a symmetrical experiment. We consider plans generated\nfrom an initial plan by adding runs. For such a plan we have derived necessary\nand sufficient conditions for a pair of effects to be orthogonal through the\nblock factor in terms of the generators. We have also derived a sufficient\ncondition on the generators so as to turn a pair of effects aliased in the\ninitial plan separated in the final plan. The theory developed is illustrated\nwith plans for experiments with three-level factors in situations where\ninteractions between three or more factors are absent. We have constructed\nplans with blocks of size four and fewer runs than a resolution $V$ plan\nestimating all main effects and all but at most one two-factor interactions.\n",
"title": "Nearly resolution V plans on blocks of small size"
}
| null | null | null | null | true | null | 4745 | null | Default | null | null |
null |
{
"abstract": " The electron transport layer (ETL) plays a fundamental role in perovskite\nsolar cells. Recently, graphene-based ETLs have been proved to be good\ncandidate for scalable fabrication processes and to achieve higher carrier\ninjection with respect to most commonly used ETLs. In this work we\nexperimentally study the effects of different graphene-based ETLs in sensitized\nMAPI solar cells. By means of time-integrated and picosecond time-resolved\nphotoluminescence techniques, the carrier recombination dynamics in MAPI films\nembedded in different ETLs is investigated. Using graphene doped mesoporous\nTiO2 (G+mTiO2) with the addition of a lithium-neutralized graphene oxide\n(GO-Li) interlayer as ETL, we find that the carrier collection efficiency is\nincreased by about a factor two with respect to standard mTiO2. Taking\nadvantage of the absorption coefficient dispersion, we probe the MAPI layer\nmorphology, along the thickness, finding that the MAPI embedded in the ETL\ncomposed by G+mTiO2 plus GO-Li brings to a very good crystalline quality of the\nMAPI layer with a trap density about one order of magnitude lower than that\nfound with the other ETLs. In addition, this ETL freezes MAPI at the tetragonal\nphase, regardless of the temperature. Graphene-based ETLs can open the way to\nsignificant improvement of perovskite solar cells.\n",
"title": "Graphene-based electron transport layers in perovskite solar cells: a step-up for an efficient carrier collection"
}
| null | null | null | null | true | null |
4746
| null |
Default
| null | null |
null |
{
"abstract": " Many machine intelligence techniques are developed in E-commerce and one of\nthe most essential components is the representation of IDs, including user ID,\nitem ID, product ID, store ID, brand ID, category ID etc. The classical\nencoding based methods (like one-hot encoding) are inefficient in that it\nsuffers sparsity problems due to its high dimension, and it cannot reflect the\nrelationships among IDs, either homogeneous or heterogeneous ones. In this\npaper, we propose an embedding based framework to learn and transfer the\nrepresentation of IDs. As the implicit feedbacks of users, a tremendous amount\nof item ID sequences can be easily collected from the interactive sessions. By\njointly using these informative sequences and the structural connections among\nIDs, all types of IDs can be embedded into one low-dimensional semantic space.\nSubsequently, the learned representations are utilized and transferred in four\nscenarios: (i) measuring the similarity between items, (ii) transferring from\nseen items to unseen items, (iii) transferring across different domains, (iv)\ntransferring across different tasks. We deploy and evaluate the proposed\napproach in Hema App and the results validate its effectiveness.\n",
"title": "Learning and Transferring IDs Representation in E-commerce"
}
| null | null | null | null | true | null |
4747
| null |
Default
| null | null |
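The record above ("Learning and Transferring IDs Representation in E-commerce") describes learning item-ID embeddings from users' session sequences. As an illustrative sketch only (not the authors' implementation; the sessions, vector size and window below are hypothetical), a skip-gram model over item-ID sequences can be trained with gensim:

```python
# Illustrative sketch: skip-gram embeddings over item-ID session sequences.
# NOT the paper's implementation; sessions, sizes and windows are made up.
from gensim.models import Word2Vec

# Each session is a chronologically ordered list of item IDs (strings).
sessions = [
    ["item_12", "item_57", "item_33", "item_12"],
    ["item_33", "item_90", "item_57"],
    ["item_90", "item_12", "item_7"],
]

model = Word2Vec(
    sentences=sessions,
    vector_size=32,   # embedding dimension (hypothetical)
    window=3,         # context window within a session
    min_count=1,
    sg=1,             # skip-gram, a common choice for ID/session embeddings
    epochs=50,
)

# The learned embeddings can then be used to measure item-item similarity.
print(model.wv.most_similar("item_12", topn=2))
```

Transfer across domains or tasks, as described in the abstract, would reuse or adapt these learned vectors rather than retrain from scratch.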
null |
{
"abstract": " In this paper, we propose a new differentiable neural network alignment\nmechanism for text-dependent speaker verification which uses alignment models\nto produce a supervector representation of an utterance. Unlike previous works\nwith similar approaches, we do not extract the embedding of an utterance from\nthe mean reduction of the temporal dimension. Our system replaces the mean by a\nphrase alignment model to keep the temporal structure of each phrase which is\nrelevant in this application since the phonetic information is part of the\nidentity in the verification task. Moreover, we can apply a convolutional\nneural network as front-end, and thanks to the alignment process being\ndifferentiable, we can train the whole network to produce a supervector for\neach utterance which will be discriminative with respect to the speaker and the\nphrase simultaneously. As we show, this choice has the advantage that the\nsupervector encodes the phrase and speaker information providing good\nperformance in text-dependent speaker verification tasks. In this work, the\nprocess of verification is performed using a basic similarity metric, due to\nsimplicity, compared to other more elaborate models that are commonly used. The\nnew model using alignment to produce supervectors was tested on the\nRSR2015-Part I database for text-dependent speaker verification, providing\ncompetitive results compared to similar size networks using the mean to extract\nembeddings.\n",
"title": "Differentiable Supervector Extraction for Encoding Speaker and Phrase Information in Text Dependent Speaker Verification"
}
| null | null | null | null | true | null |
4748
| null |
Default
| null | null |
null |
{
"abstract": " The core accretion hypothesis posits that planets with significant gaseous\nenvelopes accreted them from their protoplanetary discs after the formation of\nrocky/icy cores. Observations indicate that such exoplanets exist at a broad\nrange of orbital radii, but it is not known whether they accreted their\nenvelopes in situ, or originated elsewhere and migrated to their current\nlocations. We consider the evolution of solid cores embedded in evolving\nviscous discs that undergo gaseous envelope accretion in situ with orbital\nradii in the range $0.1-10\\rm au$. Additionally, we determine the long-term\nevolution of the planets that had no runaway gas accretion phase after disc\ndispersal. We find: (i) Planets with $5 \\rm M_{\\oplus}$ cores never undergo\nrunaway accretion. The most massive envelope contained $2.8 \\rm M_{\\oplus}$\nwith the planet orbiting at $10 \\rm au$. (ii) Accretion is more efficient onto\n$10 \\rm M_{\\oplus}$ and $15 \\rm M_{\\oplus}$ cores. For orbital radii $a_{\\rm p}\n\\ge 0.5 \\rm au$, $15 \\rm M_{\\oplus}$ cores always experienced runaway gas\naccretion. For $a_{\\rm p} \\ge 5 \\rm au$, all but one of the $10 \\rm M_{\\oplus}$\ncores experienced runaway gas accretion. No planets experienced runaway growth\nat $a_{\\rm p} = 0.1 \\rm au$. (iii) We find that, after disc dispersal, planets\nwith significant gaseous envelopes cool and contract on Gyr time-scales, the\ncontraction time being sensitive to the opacity assumed. Our results indicate\nthat Hot Jupiters with core masses $\\lesssim 15 \\rm M_{\\oplus}$ at $\\lesssim\n0.1 \\rm au$ likely accreted their gaseous envelopes at larger distances and\nmigrated inwards. Consistently with the known exoplanet population,\nSuper-Earths and mini-Neptunes at small radii during the disc lifetime, accrete\nonly modest gaseous envelopes.\n",
"title": "In situ accretion of gaseous envelopes on to planetary cores embedded in evolving protoplanetary discs"
}
| null | null | null | null | true | null |
4749
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents a novel method that allows to generalise the use of the\nAdam-Bashforth to Partial Differential Equations with local and non local\noperator. The Method derives a two step Adam-Bashforth numerical scheme in\nLaplace space and the solution is taken back into the real space via inverse\nLaplace transform. The method yields a powerful numerical algorithm for\nfractional order derivative where the usually very difficult to manage\nsummation in the numerical scheme disappears. Error Analysis of the method is\nalso presented. Applications of the method and numerical simulations are\npresented on a wave-equation like, and on a fractional order diffusion\nequation.\n",
"title": "New Two Step Laplace Adam-Bashforth Method for Integer an Non integer Order Partial Differential Equations"
}
| null | null | null | null | true | null |
4750
| null |
Default
| null | null |
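For context on the record above: the classical two-step Adam-Bashforth rule, which is the standard starting point that the paper carries into Laplace space (how the Laplace-space version is derived is specific to the paper and not reproduced here), reads, for $y' = f(t, y)$ with step size $h$:

```latex
% Classical two-step Adam-Bashforth rule for y' = f(t, y) with step size h
y_{n+2} = y_{n+1} + h\left(\tfrac{3}{2}\, f(t_{n+1}, y_{n+1}) - \tfrac{1}{2}\, f(t_n, y_n)\right)
```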
null |
{
"abstract": " Alternating automata have been widely used to model and verify systems that\nhandle data from finite domains, such as communication protocols or hardware.\nThe main advantage of the alternating model of computation is that\ncomplementation is possible in linear time, thus allowing to concisely encode\ntrace inclusion problems that occur often in verification. In this paper we\nconsider alternating automata over infinite alphabets, whose transition rules\nare formulae in a combined theory of booleans and some infinite data domain,\nthat relate past and current values of the data variables. The data theory is\nnot fixed, but rather it is a parameter of the class. We show that union,\nintersection and complementation are possible in linear time in this model and,\nthough the emptiness problem is undecidable, we provide two efficient\nsemi-algorithms, inspired by two state-of-the-art abstraction refinement model\nchecking methods: lazy predicate abstraction \\cite{HJMS02} and the \\impact~\nsemi-algorithm \\cite{mcmillan06}. We have implemented both methods and report\nthe results of an experimental comparison.\n",
"title": "The Impact of Alternation"
}
| null | null | null | null | true | null |
4751
| null |
Default
| null | null |
null |
{
"abstract": " We study a special case at which the analytical solution of the\nLippmann-Schwinger integral equation for the partial wave two-body Coulomb\ntransition matrix for likely charged particles at negative energy is possible.\nWith the use of the Fock's method of the stereographic projection of the\nmomentum space onto the four-dimensional unit sphere, the analytical\nexpressions for s-, p- and d-wave partial Coulomb transition matrices for\nrepulsively interacting particles at bound-state energy have been derived.\n",
"title": "Partial-wave Coulomb t-matrices for like-charged particles at ground-state energy"
}
| null | null | null | null | true | null |
4752
| null |
Default
| null | null |
null |
{
"abstract": " Causal effect estimation from observational data is an important and much\nstudied research topic. The instrumental variable (IV) and local causal\ndiscovery (LCD) patterns are canonical examples of settings where a closed-form\nexpression exists for the causal effect of one variable on another, given the\npresence of a third variable. Both rely on faithfulness to infer that the\nlatter only influences the target effect via the cause variable. In reality, it\nis likely that this assumption only holds approximately and that there will be\nat least some form of weak interaction. This brings about the paradoxical\nsituation that, in the large-sample limit, no predictions are made, as\ndetecting the weak edge invalidates the setting. We introduce an alternative\napproach by replacing strict faithfulness with a prior that reflects the\nexistence of many 'weak' (irrelevant) and 'strong' interactions. We obtain a\nposterior distribution over the target causal effect estimator which shows\nthat, in many cases, we can still make good estimates. We demonstrate the\napproach in an application on a simple linear-Gaussian setting, using the\nMultiNest sampling algorithm, and compare it with established techniques to\nshow our method is robust even when strict faithfulness is violated.\n",
"title": "Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness"
}
| null | null | null | null | true | null |
4753
| null |
Default
| null | null |
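The record above ("Robust Causal Estimation in the Large-Sample Limit without Strict Faithfulness") refers to a closed-form expression for the causal effect in the instrumental-variable setting. In the standard linear IV case (a textbook identity, not a contribution of that paper), with instrument $Z$, cause $X$ and effect $Y$, the effect of $X$ on $Y$ is

```latex
% Standard linear instrumental-variable estimand (textbook identity)
\beta_{X \to Y} \;=\; \frac{\operatorname{Cov}(Z, Y)}{\operatorname{Cov}(Z, X)}
```

and it is exactly the faithfulness-type assumption behind such formulas that the paper replaces with a prior over weak and strong interactions.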
null |
{
"abstract": " We show that the poset of $SL(n)$-orbit closures in the product of two\npartial flag varieties is a lattice if the action of $SL(n)$ is spherical.\n",
"title": "The Cross-section of a Spherical Double Cone"
}
| null | null | null | null | true | null |
4754
| null |
Default
| null | null |
null |
{
"abstract": " The fundamental understanding of loop formation of long polymer chains in\nsolution has been an important thread of research for several theoretical and\nexperimental studies. Loop formations are important phenomenological parameters\nin many important biological processes. Here we give a general method for\nfinding an exact analytical solution for the occurrence of looping of a long\npolymer chains in solution modeled by using a Smoluchowski-like equation with a\ndelocalized sink. The average rate constant for the delocalized sink is\nexplicitly expressed in terms of the corresponding rate constants for localized\nsinks with different initial conditions. Simple analytical expressions are\nprovided for average rate constant.\n",
"title": "Understanding looping kinetics of a long polymer molecule in solution. Exact solution for delocalized sink model"
}
| null | null | null | null | true | null |
4755
| null |
Default
| null | null |
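To make the setting of the record above concrete, a generic Smoluchowski-type equation with a sink term takes the following textbook form (illustrative notation only; the paper's delocalized-sink model may differ in details):

```latex
% Generic Smoluchowski equation with a sink term S(x); notation is illustrative
\frac{\partial P(x,t)}{\partial t}
  = D \frac{\partial}{\partial x}\left[\frac{\partial P(x,t)}{\partial x}
    + \frac{1}{k_B T}\frac{\partial U(x)}{\partial x}\, P(x,t)\right]
  - k_s\, S(x)\, P(x,t)
```

Here $P(x,t)$ is the distribution of the end-to-end coordinate, $U(x)$ the effective potential, and $S(x)$ the (possibly delocalized) sink through which looping events are counted.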
null |
{
"abstract": " In recent years supervised representation learning has provided state of the\nart or close to the state of the art results in semantic analysis tasks\nincluding ranking and information retrieval. The core idea is to learn how to\nembed items into a latent space such that they optimize a supervised objective\nin that latent space. The dimensions of the latent space have no clear\nsemantics, and this reduces the interpretability of the system. For example, in\npersonalization models, it is hard to explain why a particular item is ranked\nhigh for a given user profile. We propose a novel model of representation\nlearning called Supervised Explicit Semantic Analysis (SESA) that is trained in\na supervised fashion to embed items to a set of dimensions with explicit\nsemantics. The model learns to compare two objects by representing them in this\nexplicit space, where each dimension corresponds to a concept from a knowledge\nbase. This work extends Explicit Semantic Analysis (ESA) with a supervised\nmodel for ranking problems. We apply this model to the task of Job-Profile\nrelevance in LinkedIn in which a set of skills defines our explicit dimensions\nof the space. Every profile and job are encoded to this set of skills their\nsimilarity is calculated in this space. We use RNNs to embed text input into\nthis space. In addition to interpretability, our model makes use of the\nweb-scale collaborative skills data that is provided by users for each LinkedIn\nprofile. Our model provides state of the art result while it remains\ninterpretable.\n",
"title": "SESA: Supervised Explicit Semantic Analysis"
}
| null | null | null | null | true | null |
4756
| null |
Default
| null | null |
null |
{
"abstract": " This paper aims to bridge the affective gap between image content and the\nemotional response of the viewer it elicits by using High-Level Concepts\n(HLCs). In contrast to previous work that relied solely on low-level features\nor used convolutional neural network (CNN) as a black-box, we use HLCs\ngenerated by pretrained CNNs in an explicit way to investigate the\nrelations/associations between these HLCs and a (small) set of Ekman's\nemotional classes. As a proof-of-concept, we first propose a linear admixture\nmodel for modeling these relations, and the resulting computational framework\nallows us to determine the associations between each emotion class and certain\nHLCs (objects and places). This linear model is further extended to a nonlinear\nmodel using support vector regression (SVR) that aims to predict the viewer's\nemotional response using both low-level image features and HLCs extracted from\nimages. These class-specific regressors are then assembled into a regressor\nensemble that provide a flexible and effective predictor for predicting\nviewer's emotional responses from images. Experimental results have\ndemonstrated that our results are comparable to existing methods, with a clear\nview of the association between HLCs and emotional classes that is ostensibly\nmissing in most existing work.\n",
"title": "High-Level Concepts for Affective Understanding of Images"
}
| null | null | null | null | true | null |
4757
| null |
Default
| null | null |
null |
{
"abstract": " We present two different approaches to model power grids as interconnected\nnetworks of networks. Both models are derived from a model for spatially\nembedded mono-layer networks and are generalised to handle an arbitrary number\nof network layers. The two approaches are distinguished by their use case. The\nstatic glue stick construction model yields a multi-layer network from a\npredefined layer interconnection scheme, i.e. different layers are attached\nwith transformer edges. It is especially suited to construct multi-layer power\ngrids with a specified number of nodes in and transformers between layers. We\ncontrast it with a genuine growth model which we label interconnected layer\ngrowth model.\n",
"title": "A Network of Networks Approach to Interconnected Power Grids"
}
| null | null | null | null | true | null |
4758
| null |
Default
| null | null |
null |
{
"abstract": " We present a method for drawing isolines indicating regions of equal joint\nexceedance probability for bivariate data. The method relies on bivariate\nregular variation, a dependence framework widely used for extremes. This\nframework enables drawing isolines corresponding to very low exceedance\nprobabilities and these lines may lie beyond the range of the data. The method\nwe utilize for characterizing dependence in the tail is largely nonparametric.\nFurthermore, we extend this method to the case of asymptotic independence and\npropose a procedure which smooths the transition from asymptotic independence\nin the interior to the first-order behavior on the axes. We propose a\ndiagnostic plot for assessing isoline estimate and choice of smoothing, and a\nbootstrap procedure to visually assess uncertainty.\n",
"title": "A Nonparametric Method for Producing Isolines of Bivariate Exceedance Probabilities"
}
| null | null | null | null | true | null |
4759
| null |
Default
| null | null |
null |
{
"abstract": " This work is concerned with the optimal control problems governed by a 1D\nwave equation with variable coefficients and the control spaces $\\mathcal M_T$\nof either measure-valued functions $L_{w^*}^2(I,\\mathcal M(\\Omega))$ or vector\nmeasures $\\mathcal M(\\Omega,L^2(I))$. The cost functional involves the standard\nquadratic tracking terms and the regularization term $\\alpha\\|u\\|_{\\mathcal\nM_T}$ with $\\alpha>0$. We construct and study three-level in time bilinear\nfinite element discretizations for this class of problems. The main focus lies\non the derivation of error estimates for the optimal state variable and the\nerror measured in the cost functional. The analysis is mainly based on some\nprevious results of the authors. The numerical results are included.\n",
"title": "Finite element error analysis for measure-valued optimal control problems governed by a 1D wave equation with variable coefficients"
}
| null | null | null | null | true | null |
4760
| null |
Default
| null | null |
null |
{
"abstract": " We show that all GL(2,R) equivariant point markings over orbit closures of\ntranslation surfaces arise from branched covering constructions and periodic\npoints, completely classify such point markings over strata of quadratic\ndifferentials, and give applications to the finite blocking problem.\n",
"title": "Marked points on translation surfaces"
}
| null | null | null | null | true | null |
4761
| null |
Default
| null | null |
null |
{
"abstract": " Fundamental frequency (f0) estimation from polyphonic music includes the\ntasks of multiple-f0, melody, vocal, and bass line estimation. Historically\nthese problems have been approached separately, and only recently, using\nlearning-based approaches. We present a multitask deep learning architecture\nthat jointly estimates outputs for various tasks including multiple-f0, melody,\nvocal and bass line estimation, and is trained using a large,\nsemi-automatically annotated dataset. We show that the multitask model\noutperforms its single-task counterparts, and explore the effect of various\ndesign decisions in our approach, and show that it performs better or at least\ncompetitively when compared against strong baseline methods.\n",
"title": "Multitask Learning for Fundamental Frequency Estimation in Music"
}
| null | null | null | null | true | null |
4762
| null |
Default
| null | null |
null |
{
"abstract": " We study the eigenvalues of the self-adjoint Zakharov-Shabat operator\ncorresponding to the defocusing nonlinear Schrodinger equation in the inverse\nscattering method. Real eigenvalues exist when the square of the potential has\na simple well. We derive two types of quantization condition for the\neigenvalues by using the exact WKB method, and show that the eigenvalues stay\nreal for a sufficiently small non-self-adjoint perturbation when the potential\nhas some PT-like symmetry.\n",
"title": "Real eigenvalues of a non-self-adjoint perturbation of the self-adjoint Zakharov-Shabat operator"
}
| null | null | null | null | true | null |
4763
| null |
Default
| null | null |
null |
{
"abstract": " Recent work has considered theoretical models for the behavior of agents with\nspecific behavioral biases: rather than making decisions that optimize a given\npayoff function, the agent behaves inefficiently because its decisions suffer\nfrom an underlying bias. These approaches have generally considered an agent\nwho experiences a single behavioral bias, studying the effect of this bias on\nthe outcome.\nIn general, however, decision-making can and will be affected by multiple\nbiases operating at the same time. How do multiple biases interact to produce\nthe overall outcome? Here we consider decisions in the presence of a pair of\nbiases exhibiting an intuitively natural interaction: present bias -- the\ntendency to value costs incurred in the present too highly -- and sunk-cost\nbias -- the tendency to incorporate costs experienced in the past into one's\nplans for the future.\nWe propose a theoretical model for planning with this pair of biases, and we\nshow how certain natural behavioral phenomena can arise in our model only when\nagents exhibit both biases. As part of our model we differentiate between\nagents that are aware of their biases (sophisticated) and agents that are\nunaware of them (naive). Interestingly, we show that the interaction between\nthe two biases is quite complex: in some cases, they mitigate each other's\neffects while in other cases they might amplify each other. We obtain a number\nof further results as well, including the fact that the planning problem in our\nmodel for an agent experiencing and aware of both biases is computationally\nhard in general, though tractable under more relaxed assumptions.\n",
"title": "Planning with Multiple Biases"
}
| null | null | null | null | true | null |
4764
| null |
Default
| null | null |
null |
{
"abstract": " We establish precise Zhu reduction formulas for Jacobi $n$-point functions\nwhich show the absence of any possible poles arising in these formulas. We then\nexploit this to produce results concerning the structure of strongly regular\nvertex operator algebras, and also to motivate new differential operators\nacting on Jacobi forms. Finally, we apply the reduction formulas to the Fermion\nmodel in order to create polynomials of quasi-Jacobi forms which are Jacobi\nforms.\n",
"title": "Zhu reduction for Jacobi $n$-point functions and applications"
}
| null | null | null | null | true | null |
4765
| null |
Default
| null | null |
null |
{
"abstract": " Timelimited functions and bandlimited functions play a fundamental role in\nsignal and image processing. But by the uncertainty principles, a signal cannot\nbe simultaneously time and bandlimited. A natural assumption is thus that a\nsignal is almost time and almost bandlimited. The aim of this paper is to prove\nthat the set of almost time and almost bandlimited signals is not excluded from\nthe uncertainty principles. The transforms under consideration are integral\noperators with bounded kernels for which there is a Parseval Theorem. Then we\ndefine the wavelet multipliers for this class of operators, and study their\nboundedness and Schatten class properties. We show that the wavelet multiplier\nis unitary equivalent to a scalar multiple of the phase space restriction\noperator. Moreover we prove that a signal which is almost time and almost\nbandlimited can be approximated by its projection on the span of the first\neigenfunctions of the phase space restriction operator, corresponding to the\nlargest eigenvalues which are close to one.\n",
"title": "Fourier-like multipliers and applications for integral operators"
}
| null | null | null | null | true | null |
4766
| null |
Default
| null | null |
null |
{
"abstract": " We model the size distribution of supernova remnants to infer the surrounding\nISM density. Using simple, yet standard SNR evolution models, we find that the\ndistribution of ambient densities is remarkably narrow; either the standard\nassumptions about SNR evolution are wrong, or observable SNRs are biased to a\nnarrow range of ambient densities. We show that the size distributions are\nconsistent with log-normal, which severely limits the number of model\nparameters in any SNR population synthesis model. Simple Monte Carlo\nsimulations demonstrate that the size distribution is indistinguishable from\nlog-normal when the SNR sample size is less than 600. This implies that these\nSNR distributions provide only information on the mean and variance, yielding\nadditional information only when the sample size grows larger than $\\sim{600}$\nSNRs. To infer the parameters of the ambient density, we use Bayesian\nstatistical inference under the assumption that SNR evolution is dominated by\nthe Sedov phase. In particular, we use the SNR sizes and explosion energies to\nestimate the mean and variance of the ambient medium surrounding SNR\nprogenitors. We find that the mean ISM particle density around our sample of\nSNRs is $\\mu_{\\log{n}} = -1.33$, in $\\log_{10}$ of particles per cubic\ncentimeter, with variance $\\sigma^2_{\\log{n}} = 0.49$. If interpreted at face\nvalue, this implies that most SNRs result from supernovae propagating in the\nwarm, ionized medium. However, it is also likely that either SNR evolution is\nnot dominated by the simple Sedov evolution or SNR samples are biased to the\nwarm, ionized medium (WIM).\n",
"title": "Inferring Properties of the ISM from Supernova Remnant Size Distributions"
}
| null | null |
[
"Physics"
] | null | true | null |
4767
| null |
Validated
| null | null |
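The record above infers ambient densities from SNR sizes under the assumption of Sedov-phase evolution. For reference, the standard Sedov-Taylor blast-wave relation between radius, explosion energy, age and ambient density (a textbook result, not specific to this paper; $\xi \approx 1.15$ for an adiabatic index $\gamma = 5/3$) is

```latex
% Sedov-Taylor blast-wave relation and its inversion for the ambient density
R_{\rm SNR} \simeq \xi \left(\frac{E\, t^{2}}{\rho_0}\right)^{1/5}
\quad\Longrightarrow\quad
\rho_0 \simeq \frac{E\, t^{2}}{\left(R_{\rm SNR}/\xi\right)^{5}}
```

so a measured size together with an assumed explosion energy and age translates directly into an estimate of the ambient density, which is the quantity whose population mean and variance the abstract's Bayesian analysis targets.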
null |
{
"abstract": " Community detection provides invaluable help for various applications, such\nas marketing and product recommendation. Traditional community detection\nmethods designed for plain networks may not be able to detect communities with\nhomogeneous attributes inside on attributed networks with attribute\ninformation. Most of recent attribute community detection methods may fail to\ncapture the requirements of a specific application and not be able to mine the\nset of required communities for a specific application. In this paper, we aim\nto detect the set of target communities in the target subspace which has some\nfocus attributes with large importance weights satisfying the requirements of a\nspecific application. In order to improve the university of the problem, we\naddress the problem in an extreme case where only two sample nodes in any\npotential target community are provided. A Target Subspace and Communities\nMining (TSCM) method is proposed. In TSCM, a sample information extension\nmethod is designed to extend the two sample nodes to a set of exemplar nodes\nfrom which the target subspace is inferred. Then the set of target communities\nare located and mined based on the target subspace. Experiments on synthetic\ndatasets demonstrate the effectiveness and efficiency of our method and\napplications on real-world datasets show its application values.\n",
"title": "Mining Target Attribute Subspace and Set of Target Communities in Large Attributed Networks"
}
| null | null | null | null | true | null |
4768
| null |
Default
| null | null |
null |
{
"abstract": " Given the subjective preferences of n roommates in an n-bedroom apartment,\none can use Sperner's lemma to find a division of the rent such that each\nroommate is content with a distinct room. At the given price distribution, no\nroommate has a strictly stronger preference for a different room. We give a new\nelementary proof that the subjective preferences of only n-1 of the roommates\nactually suffice to achieve this envy-free rent division. Our proof, in\nparticular, yields an algorithm to find such a fair division of rent. The\ntechniques also give generalizations of Sperner's lemma including a new proof\nof a conjecture of the third author.\n",
"title": "Achieving rental harmony with a secretive roommate"
}
| null | null | null | null | true | null |
4769
| null |
Default
| null | null |
null |
{
"abstract": " A desired closure property in Bayesian probability is that an updated\nposterior distribution be in the same class of distributions --- say Gaussians\n--- as the prior distribution. When the updating takes place via a statistical\nmodel, one calls the class of prior distributions the `conjugate priors' of the\nmodel. This paper gives (1) an abstract formulation of this notion of conjugate\nprior, using channels, in a graphical language, (2) a simple abstract proof\nthat such conjugate priors yield Bayesian inversions, and (3) a logical\ndescription of conjugate priors that highlights the required closure of the\npriors under updating. The theory is illustrated with several standard\nexamples, also covering multiple updating.\n",
"title": "A Channel-Based Perspective on Conjugate Priors"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4770
| null |
Validated
| null | null |
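As a concrete, standard instance of the conjugacy notion discussed in the record above (a textbook example, not the paper's channel-based formulation): for a Bernoulli likelihood the Beta family is conjugate, so updating stays inside the prior's class,

```latex
% Beta-Bernoulli conjugacy: the posterior remains in the prior's family
\theta \sim \mathrm{Beta}(\alpha, \beta), \qquad
x_1,\dots,x_n \mid \theta \;\overset{iid}{\sim}\; \mathrm{Bernoulli}(\theta)
\;\;\Longrightarrow\;\;
\theta \mid x_{1:n} \sim \mathrm{Beta}\!\left(\alpha + \textstyle\sum_i x_i,\;
\beta + n - \textstyle\sum_i x_i\right)
```

which is exactly the closure-under-updating property that the paper abstracts and characterizes with channels.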
null |
{
"abstract": " Coded distributed computing (CDC) introduced by Li et al. in 2015 offers an\nefficient approach to trade computing power to reduce the communication load in\ngeneral distributed computing frameworks such as MapReduce. For the more\ngeneral cascaded CDC, Map computations are repeated at $r$ nodes to\nsignificantly reduce the communication load among nodes tasked with computing\n$Q$ Reduce functions $s$ times. While an achievable cascaded CDC scheme was\nproposed, it only operates on homogeneous networks, where the storage,\ncomputation load and communication load of each computing node is the same. In\nthis paper, we address this limitation by proposing a novel combinatorial\ndesign which operates on heterogeneous networks where nodes have varying\nstorage and computing capabilities. We provide an analytical characterization\nof the computation-communication trade-off and show that it is optimal within a\nconstant factor and could outperform the state-of-the-art homogeneous schemes.\n",
"title": "Cascaded Coded Distributed Computing on Heterogeneous Networks"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4771
| null |
Validated
| null | null |
null |
{
"abstract": " The motion and photon emission of electrons in a superlattice may be\ndescribed as in an undulator. Therefore, there is a close analogy between\nballistic electrons in a superlattice and electrons in a free electron laser\n(FEL). Touching upon this analogy the intensity of photon emission in the IR\nregion and the gain are calculated. It is shown that the amplification can be\nsignificant, reaching tens of percent.\n",
"title": "A FEL Based on a Superlattice"
}
| null | null |
[
"Physics"
] | null | true | null |
4772
| null |
Validated
| null | null |
null |
{
"abstract": " We present hydrodynamic simulations of the hot cocoon produced when a\nrelativistic jet passes through the gamma-ray burst (GRB) progenitor star and\nits environment, and we compute the lightcurve and spectrum of the radiation\nemitted by the cocoon. The radiation from the cocoon has a nearly thermal\nspectrum with a peak in the X-ray band, and it lasts for a few minutes in the\nobserver frame; the cocoon radiation starts at roughly the same time as when\n$\\gamma$-rays from a burst trigger detectors aboard GRB satellites. The\nisotropic cocoon luminosity ($\\sim 10^{47}$ erg s$^{-1}$) is of the same order\nof magnitude as the X-ray luminosity of a typical long-GRB afterglow during the\nplateau phase. This radiation should be identifiable in the Swift data because\nof its nearly thermal spectrum which is distinct from the somewhat brighter\npower-law component. The detection of this thermal component would provide\ninformation regarding the size and density stratification of the GRB progenitor\nstar. Photons from the cocoon are also inverse-Compton (IC) scattered by\nelectrons in the relativistic jet. We present the IC lightcurve and spectrum,\nby post-processing the results of the numerical simulations. The IC spectrum\nlies in 10 keV--MeV band for typical GRB parameters. The detection of this IC\ncomponent would provide an independent measurement of GRB jet Lorentz factor\nand it would also help to determine the jet magnetisation parameter.\n",
"title": "Thermal and non-thermal emission from the cocoon of a gamma-ray burst jet"
}
| null | null | null | null | true | null |
4773
| null |
Default
| null | null |
null |
{
"abstract": " As non-institutive polynomial chaos expansion (PCE) techniques have gained\ngrowing popularity among researchers, we here provide a comprehensive review of\nmajor sampling strategies for the least squares based PCE. Traditional sampling\nmethods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal\ndesign of experiments (ODE), Gaussian quadratures, as well as more recent\ntechniques, such as coherence-optimal and randomized quadratures are discussed.\nWe also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal,\nthat employs the so-called alphabetic optimality criteria used in the context\nof ODE in conjunction with coherence-optimal samples. A comparison between the\nempirical performance of the selected sampling methods applied to three\nnumerical examples, including high-order PCE's, high-dimensional problems, and\nlow oversampling ratios, is presented to provide a road map for practitioners\nseeking the most suitable sampling technique for a problem at hand. We observed\nthat the alphabetic-coherence-optimal technique outperforms other sampling\nmethods, specially when high-order ODE are employed and/or the oversampling\nratio is low.\n",
"title": "Least Squares Polynomial Chaos Expansion: A Review of Sampling Strategies"
}
| null | null |
[
"Statistics"
] | null | true | null |
4774
| null |
Validated
| null | null |
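As a minimal illustration of the least-squares PCE setting reviewed in the record above (a one-dimensional toy with plain Monte Carlo sampling; the model and sizes are invented and none of the paper's benchmark problems or sampling rules are reproduced), one builds a design matrix of orthogonal polynomials at sampled inputs and solves a least-squares problem:

```python
# Minimal 1D least-squares polynomial chaos expansion with Monte Carlo sampling.
# Toy illustration only; the reviewed paper compares far more elaborate sampling rules.
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite He_k

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical quantity of interest of a standard-normal input.
    return np.exp(0.5 * x) + 0.1 * x**2

order = 5                           # PCE truncation order
n_samples = 200                     # oversampling: more samples than coefficients
x = rng.standard_normal(n_samples)  # Monte Carlo samples of the random input
y = model(x)

# Design matrix: column k holds He_k(x_i) evaluated at the samples.
Psi = np.column_stack([hermeval(x, [0] * k + [1]) for k in range(order + 1)])

# Least-squares PCE coefficients.
coeffs, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# The surrogate can now be evaluated at new input points.
x_new = rng.standard_normal(5)
Psi_new = np.column_stack([hermeval(x_new, [0] * k + [1]) for k in range(order + 1)])
print(np.c_[model(x_new), Psi_new @ coeffs])   # true values vs. surrogate predictions
```

Swapping the Monte Carlo draw of `x` for another design (Latin hypercube, coherence-optimal samples, etc.) is precisely the degree of freedom the review compares.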
null |
{
"abstract": " Recognizing human activities in a sequence is a challenging area of research\nin ubiquitous computing. Most approaches use a fixed size sliding window over\nconsecutive samples to extract features---either handcrafted or learned\nfeatures---and predict a single label for all samples in the window. Two key\nproblems emanate from this approach: i) the samples in one window may not\nalways share the same label. Consequently, using one label for all samples\nwithin a window inevitably lead to loss of information; ii) the testing phase\nis constrained by the window size selected during training while the best\nwindow size is difficult to tune in practice. We propose an efficient algorithm\nthat can predict the label of each sample, which we call dense labeling, in a\nsequence of human activities of arbitrary length using a fully convolutional\nnetwork. In particular, our approach overcomes the problems posed by the\nsliding window step. Additionally, our algorithm learns both the features and\nclassifier automatically. We release a new daily activity dataset based on a\nwearable sensor with hospitalized patients. We conduct extensive experiments\nand demonstrate that our proposed approach is able to outperform the\nstate-of-the-arts in terms of classification and label misalignment measures on\nthree challenging datasets: Opportunity, Hand Gesture, and our new dataset.\n",
"title": "Efficient Dense Labeling of Human Activity Sequences from Wearables using Fully Convolutional Networks"
}
| null | null | null | null | true | null |
4775
| null |
Default
| null | null |
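As a rough sketch of the dense-labeling idea in the record above (one plausible fully convolutional layout in PyTorch; the layer sizes are invented and this is not the authors' architecture), the network maps a multichannel sensor sequence to one class score per time step, so every sample gets its own label:

```python
# Sketch of a 1D fully convolutional network for dense (per-sample) activity labeling.
# Layer sizes are hypothetical; this is not the paper's exact architecture.
import torch
import torch.nn as nn

class DenseLabelFCN(nn.Module):
    def __init__(self, n_channels: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, n_classes, kernel_size=1),  # one score per class per time step
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time) -> (batch, classes, time); no pooling, so the
        # temporal resolution is preserved and every sample receives its own label.
        return self.net(x)

model = DenseLabelFCN(n_channels=6, n_classes=5)
x = torch.randn(2, 6, 400)         # e.g. 400 accelerometer/gyroscope samples
logits = model(x)                  # (2, 5, 400)
labels = logits.argmax(dim=1)      # a label for every time step
print(labels.shape)                # torch.Size([2, 400])
```

Because the network is fully convolutional, the same trained model can be applied to sequences of arbitrary length at test time, which is the point the abstract makes about not being tied to a training-time window size.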
null |
{
"abstract": " We introduce and study the higher tetrahedral algebras, an exotic family of\nfinite-dimensional tame symmetric algebras over an algebraically closed field.\nThe Gabriel quiver of such an algebra is the triangulation quiver associated to\nthe coherent orientation of the tetrahedron. Surprisingly, these algebras\noccurred in the classification of all algebras of generalised quaternion type,\nbut are not weighted surface algebras. We prove that a higher tetrahedral\nalgebra is periodic if and only if it is non-singular.\n",
"title": "Higher Tetrahedral Algebras"
}
| null | null | null | null | true | null |
4776
| null |
Default
| null | null |
null |
{
"abstract": " For many algorithms, parameter tuning remains a challenging and critical\ntask, which becomes tedious and infeasible in a multi-parameter setting.\nMulti-penalty regularization, successfully used for solving undetermined sparse\nregression of problems of unmixing type where signal and noise are additively\nmixed, is one of such examples. In this paper, we propose a novel algorithmic\nframework for an adaptive parameter choice in multi-penalty regularization with\na focus on the correct support recovery. Building upon the theory of\nregularization paths and algorithms for single-penalty functionals, we extend\nthese ideas to a multi-penalty framework by providing an efficient procedure\nfor the construction of regions containing structurally similar solutions,\ni.e., solutions with the same sparsity and sign pattern, over the whole range\nof parameters. Combining this with a model selection criterion, we can choose\nregularization parameters in a data-adaptive manner. Another advantage of our\nalgorithm is that it provides an overview on the solution stability over the\nwhole range of parameters. This can be further exploited to obtain additional\ninsights into the problem of interest. We provide a numerical analysis of our\nmethod and compare it to the state-of-the-art single-penalty algorithms for\ncompressed sensing problems in order to demonstrate the robustness and power of\nthe proposed algorithm.\n",
"title": "Adaptive multi-penalty regularization based on a generalized Lasso path"
}
| null | null | null | null | true | null |
4777
| null |
Default
| null | null |
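The record above builds on regularization paths for single-penalty functionals. A minimal single-penalty example (the building block, not the proposed multi-penalty algorithm; the data below are synthetic) computes the Lasso path with scikit-learn and inspects how the support, i.e. the sparsity pattern, changes along it:

```python
# Single-penalty Lasso regularization path (the building block the abstract extends).
# Synthetic data; the multi-penalty extension itself is not reproduced here.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p, k = 100, 30, 4
X = rng.standard_normal((n, p))
true_coef = np.zeros(p)
true_coef[:k] = [3.0, -2.0, 1.5, 1.0]              # sparse ground truth
y = X @ true_coef + 0.1 * rng.standard_normal(n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)   # coefs: (n_features, n_alphas)

# Support (set of non-zero coefficients) as a function of the penalty parameter.
for alpha, beta in zip(alphas[::10], coefs.T[::10]):
    support = np.flatnonzero(beta)
    print(f"alpha={alpha:.4f}  support={support.tolist()}")
```

The paper's contribution, roughly, is to carve the two-dimensional (or higher) penalty-parameter plane into regions over which such supports and sign patterns stay constant, and to select among them with a model selection criterion.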
null |
{
"abstract": " We propose a method to generate 3D shapes using point clouds. Given a\npoint-cloud representation of a 3D shape, our method builds a kd-tree to\nspatially partition the points. This orders them consistently across all\nshapes, resulting in reasonably good correspondences across all shapes. We then\nuse PCA analysis to derive a linear shape basis across the spatially\npartitioned points, and optimize the point ordering by iteratively minimizing\nthe PCA reconstruction error. Even with the spatial sorting, the point clouds\nare inherently noisy and the resulting distribution over the shape coefficients\ncan be highly multi-modal. We propose to use the expressive power of neural\nnetworks to learn a distribution over the shape coefficients in a\ngenerative-adversarial framework. Compared to 3D shape generative models\ntrained on voxel-representations, our point-based method is considerably more\nlight-weight and scalable, with little loss of quality. It also outperforms\nsimpler linear factor models such as Probabilistic PCA, both qualitatively and\nquantitatively, on a number of categories from the ShapeNet dataset.\nFurthermore, our method can easily incorporate other point attributes such as\nnormal and color information, an additional advantage over voxel-based\nrepresentations.\n",
"title": "Shape Generation using Spatially Partitioned Point Clouds"
}
| null | null | null | null | true | null |
4778
| null |
Default
| null | null |
null |
{
"abstract": " Recognizing arbitrary objects in the wild has been a challenging problem due\nto the limitations of existing classification models and datasets. In this\npaper, we propose a new task that aims at parsing scenes with a large and open\nvocabulary, and several evaluation metrics are explored for this problem. Our\nproposed approach to this problem is a joint image pixel and word concept\nembeddings framework, where word concepts are connected by semantic relations.\nWe validate the open vocabulary prediction ability of our framework on ADE20K\ndataset which covers a wide variety of scenes and objects. We further explore\nthe trained joint embedding space to show its interpretability.\n",
"title": "Open Vocabulary Scene Parsing"
}
| null | null |
[
"Computer Science"
] | null | true | null |
4779
| null |
Validated
| null | null |
null |
{
"abstract": " This paper describes the procedure to estimate the parameters in mean\nreversion processes with functional tendency defined by a periodic continuous\ndeterministic function, expressed as a series of truncated Fourier. Two phases\nof estimation are defined, in the first phase through Gaussian techniques using\nthe Euler-Maruyama discretization, we obtain the maximum likelihood function,\nthat will allow us to find estimators of the external parameters and an\nestimation of the expected value of the process. In the second phase, a\nreestimate of the periodic functional tendency with it's parameters of phase\nand amplitude is carried out, this will allow, improve the initial estimation.\nSome experimental result using simulated data sets are graphically illustrated.\n",
"title": "Parameter Estimation in Mean Reversion Processes with Periodic Functional Tendency"
}
| null | null | null | null | true | null |
4780
| null |
Default
| null | null |
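To fix notation for the record above (an illustrative form only; the paper's exact parameterisation may differ), a mean-reverting process with a periodic functional tendency given by a truncated Fourier series, together with the Euler-Maruyama step used in a first estimation phase, can be written as

```latex
% Illustrative mean-reversion SDE with a truncated-Fourier periodic tendency \mu(t)
dX_t = \theta\,\bigl(\mu(t) - X_t\bigr)\,dt + \sigma\, dW_t,
\qquad
\mu(t) = a_0 + \sum_{k=1}^{K}\bigl(a_k \cos(k\omega t) + b_k \sin(k\omega t)\bigr),

% Euler-Maruyama discretisation with step \Delta
X_{t+\Delta} \approx X_t + \theta\,\bigl(\mu(t) - X_t\bigr)\,\Delta
  + \sigma\sqrt{\Delta}\; Z_t, \qquad Z_t \sim \mathcal{N}(0,1)
```

The second estimation phase described in the abstract then refines the phase and amplitude parameters $a_k$, $b_k$ of $\mu(t)$.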
null |
{
"abstract": " A multitude of web and desktop applications are now widely available in\ndiverse human languages. This paper explores the design issues that are\nspecifically relevant for multilingual users. It reports on the continued\nstudies of Information System (IS) issues and users' behaviour across\ncross-cultural and transnational boundaries. Taking the BBC website as a model\nthat is internationally recognised, usability tests were conducted to compare\ndifferent versions of the website. The dependant variables derived from the\nquestionnaire were analysed (via descriptive statistics) to elucidate the\nmultilingual UI design issues. Using Principal Component Analysis (PCA), five\nde-correlated variables were identified which were then used for hypotheses\ntests. A modified version of Herzberg's Hygiene-motivational Theory about the\nWorkplace was applied to assess the components used in the website. Overall, it\nwas concluded that the English versions of the website gave superior usability\nresults and this implies the need for deeper study of the problems in usability\nof the translated versions.\n",
"title": "User Interface (UI) Design Issues for the Multilingual Users: A Case Study"
}
| null | null | null | null | true | null |
4781
| null |
Default
| null | null |
null |
{
"abstract": " Nowadays, a big part of people rely on available content in social media in\ntheir decisions (e.g. reviews and feedback on a topic or product). The\npossibility that anybody can leave a review provide a golden opportunity for\nspammers to write spam reviews about products and services for different\ninterests. Identifying these spammers and the spam content is a hot topic of\nresearch and although a considerable number of studies have been done recently\ntoward this end, but so far the methodologies put forth still barely detect\nspam reviews, and none of them show the importance of each extracted feature\ntype. In this study, we propose a novel framework, named NetSpam, which\nutilizes spam features for modeling review datasets as heterogeneous\ninformation networks to map spam detection procedure into a classification\nproblem in such networks. Using the importance of spam features help us to\nobtain better results in terms of different metrics experimented on real-world\nreview datasets from Yelp and Amazon websites. The results show that NetSpam\noutperforms the existing methods and among four categories of features;\nincluding review-behavioral, user-behavioral, reviewlinguistic,\nuser-linguistic, the first type of features performs better than the other\ncategories.\n",
"title": "NetSpam: a Network-based Spam Detection Framework for Reviews in Online Social Media"
}
| null | null | null | null | true | null |
4782
| null |
Default
| null | null |
null |
{
"abstract": " We investigate proving properties of Curry programs using Agda. First, we\naddress the functional correctness of Curry functions that, apart from some\nsyntactic and semantic differences, are in the intersection of the two\nlanguages. Second, we use Agda to model non-deterministic functions with two\ndistinct and competitive approaches incorporating the non-determinism. The\nfirst approach eliminates non-determinism by considering the set of all\nnon-deterministic values produced by an application. The second approach\nencodes every non-deterministic choice that the application could perform. We\nconsider our initial experiment a success. Although proving properties of\nprograms is a notoriously difficult task, the functional logic paradigm does\nnot seem to add any significant layer of difficulty or complexity to the task.\n",
"title": "Proving Non-Deterministic Computations in Agda"
}
| null | null | null | null | true | null |
4783
| null |
Default
| null | null |
null |
{
"abstract": " Classification, which involves finding rules that partition a given data set\ninto disjoint groups, is one class of data mining problems. Approaches proposed\nso far for mining classification rules for large databases are mainly decision\ntree based symbolic learning methods. The connectionist approach based on\nneural networks has been thought not well suited for data mining. One of the\nmajor reasons cited is that knowledge generated by neural networks is not\nexplicitly represented in the form of rules suitable for verification or\ninterpretation by humans. This paper examines this issue. With our newly\ndeveloped algorithms, rules which are similar to, or more concise than those\ngenerated by the symbolic methods can be extracted from the neural networks.\nThe data mining process using neural networks with the emphasis on rule\nextraction is described. Experimental results and comparison with previously\npublished works are presented.\n",
"title": "NeuroRule: A Connectionist Approach to Data Mining"
}
| null | null | null | null | true | null |
4784
| null |
Default
| null | null |
null |
{
"abstract": " Modern learning algorithms excel at producing accurate but complex models of\nthe data. However, deploying such models in the real-world requires extra care:\nwe must ensure their reliability, robustness, and absence of undesired biases.\nThis motivates the development of models that are equally accurate but can be\nalso easily inspected and assessed beyond their predictive performance. To this\nend, we introduce contextual explanation networks (CENs)---a class of\narchitectures that learn to predict by generating and utilizing intermediate,\nsimplified probabilistic models. Specifically, CENs generate parameters for\nintermediate graphical models which are further used for prediction and play\nthe role of explanations. Contrary to the existing post-hoc model-explanation\ntools, CENs learn to predict and to explain jointly. Our approach offers two\nmajor advantages: (i) for each prediction, valid, instance-specific\nexplanations are generated with no computational overhead and (ii) prediction\nvia explanation acts as a regularizer and boosts performance in low-resource\nsettings. We analyze the proposed framework theoretically and experimentally.\nOur results on image and text classification and survival analysis tasks\ndemonstrate that CENs are not only competitive with the state-of-the-art\nmethods but also offer additional insights behind each prediction, that are\nvaluable for decision support. We also show that while post-hoc methods may\nproduce misleading explanations in certain cases, CENs are always consistent\nand allow to detect such cases systematically.\n",
"title": "Contextual Explanation Networks"
}
| null | null | null | null | true | null |
4785
| null |
Default
| null | null |
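As a loose sketch of the idea in the record above ("Contextual Explanation Networks"), here is a deliberately minimal PyTorch toy in which the "explanation" is just a per-instance linear model over interpretable features; the dimensions and layers are invented and the actual CEN architectures are richer, but the division of labour is the same: a context encoder emits the parameters of the simple model that makes the prediction.

```python
# Toy sketch of a contextual explanation network: a context encoder generates the
# weights of a per-instance linear model over interpretable features.
# Dimensions and layers are invented; this is not the paper's exact architecture.
import torch
import torch.nn as nn

class ToyCEN(nn.Module):
    def __init__(self, context_dim: int, n_interpretable: int):
        super().__init__()
        # Encoder of the "context" (e.g. an image or text embedding).
        self.encoder = nn.Sequential(
            nn.Linear(context_dim, 32), nn.ReLU(),
            nn.Linear(32, n_interpretable + 1),   # weights + bias of the explanation
        )

    def forward(self, context: torch.Tensor, interpretable: torch.Tensor):
        params = self.encoder(context)            # (batch, n_interpretable + 1)
        w, b = params[:, :-1], params[:, -1]      # per-instance linear "explanation"
        logits = (w * interpretable).sum(dim=1) + b
        return logits, w                          # prediction and its explanation

model = ToyCEN(context_dim=16, n_interpretable=5)
ctx = torch.randn(4, 16)
feats = torch.randn(4, 5)
logits, weights = model(ctx, feats)
print(logits.shape, weights.shape)                # torch.Size([4]) torch.Size([4, 5])
```

Because the weights `w` are produced per instance and used directly for the prediction, the explanation is valid by construction, which is the property the abstract contrasts with post-hoc explanation tools.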
null |
{
"abstract": " Statistical regression models whose mean functions are represented by\nordinary differential equations (ODEs) can be used to describe phenomenons\ndynamical in nature, which are abundant in areas such as biology, climatology\nand genetics. The estimation of parameters of ODE based models is essential for\nunderstanding its dynamics, but the lack of an analytical solution of the ODE\nmakes the parameter estimation challenging. The aim of this paper is to propose\na general and fast framework of statistical inference for ODE based models by\nrelaxation of the underlying ODE system. Relaxation is achieved by a properly\nchosen numerical procedure, such as the Runge-Kutta, and by introducing\nadditive Gaussian noises with small variances. Consequently, filtering methods\ncan be applied to obtain the posterior distribution of the parameters in the\nBayesian framework. The main advantage of the proposed method is computation\nspeed. In a simulation study, the proposed method was at least 14 times faster\nthan the other methods. Theoretical results which guarantee the convergence of\nthe posterior of the approximated dynamical system to the posterior of true\nmodel are presented. Explicit expressions are given that relate the order and\nthe mesh size of the Runge-Kutta procedure to the rate of convergence of the\napproximated posterior as a function of sample size.\n",
"title": "Inference for Differential Equation Models using Relaxation via Dynamical Systems"
}
| null | null | null | null | true | null |
4786
| null |
Default
| null | null |
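A bare-bones illustration of the relaxation idea in the record above: a hypothetical one-dimensional ODE is discretised with a fourth-order Runge-Kutta step and perturbed by small additive Gaussian noise, turning it into a state-space model to which filtering can be applied. The filtering-based posterior computation itself is omitted, and all numbers are illustrative.

```python
# Relaxing an ODE into a noisy state-space model: deterministic Runge-Kutta step
# plus a small additive Gaussian perturbation. Filtering/posterior inference over
# the ODE parameter is omitted; all values are illustrative.
import numpy as np

def f(x, theta):
    # Hypothetical one-dimensional ODE: dx/dt = -theta * x
    return -theta * x

def rk4_step(x, theta, dt):
    k1 = f(x, theta)
    k2 = f(x + 0.5 * dt * k1, theta)
    k3 = f(x + 0.5 * dt * k2, theta)
    k4 = f(x + dt * k3, theta)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(1)
theta, dt, sigma = 0.7, 0.1, 1e-3      # sigma: small relaxation noise
x = 1.0
trajectory = [x]
for _ in range(50):
    x = rk4_step(x, theta, dt) + sigma * rng.standard_normal()   # relaxed transition
    trajectory.append(x)

print(trajectory[-1])   # close to exp(-theta * 5.0) for small sigma
```

The Gaussian transition noise is what makes standard filtering machinery applicable to the otherwise deterministic numerical solution, which is the source of the speed-up the abstract reports.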
null |
{
"abstract": " Unmanned aerial vehicles (UAVs) have attracted significant interest recently\nin wireless communication due to their high maneuverability, flexible\ndeployment, and low cost. This paper studies a UAV-enabled wireless network\nwhere the UAV is employed as an aerial mobile base station (BS) to serve a\ngroup of users on the ground. To achieve fair performance among users, we\nmaximize the minimum throughput over all ground users by jointly optimizing the\nmultiuser communication scheduling and UAV trajectory over a finite horizon.\nThe formulated problem is shown to be a mixed integer non-convex optimization\nproblem that is difficult to solve in general. We thus propose an efficient\niterative algorithm by applying the block coordinate descent and successive\nconvex optimization techniques, which is guaranteed to converge to at least a\nlocally optimal solution. To achieve fast convergence and stable throughput, we\nfurther propose a low-complexity initialization scheme for the UAV trajectory\ndesign based on the simple circular trajectory. Extensive simulation results\nare provided which show significant throughput gains of the proposed design as\ncompared to other benchmark schemes.\n",
"title": "Joint Trajectory and Communication Design for UAV-Enabled Multiple Access"
}
| null | null | null | null | true | null |
4787
| null |
Default
| null | null |
null |
{
"abstract": " We propose a new imaging technique for radio and optical/infrared\ninterferometry. The proposed technique reconstructs the image from the\nvisibility amplitude and closure phase, which are standard data products of\nshort-millimeter very long baseline interferometers such as the Event Horizon\nTelescope (EHT) and optical/infrared interferometers, by utilizing two\nregularization functions: the $\\ell_1$-norm and total variation (TV) of the\nbrightness distribution. In the proposed method, optimal regularization\nparameters, which represent the sparseness and effective spatial resolution of\nthe image, are derived from data themselves using cross validation (CV). As an\napplication of this technique, we present simulated observations of M87 with\nthe EHT based on four physically motivated models. We confirm that $\\ell_1$+TV\nregularization can achieve an optimal resolution of $\\sim 20-30$% of the\ndiffraction limit $\\lambda/D_{\\rm max}$, which is the nominal spatial\nresolution of a radio interferometer. With the proposed technique, the EHT can\nrobustly and reasonably achieve super-resolution sufficient to clearly resolve\nthe black hole shadow. These results make it promising for the EHT to provide\nan unprecedented view of the event-horizon-scale structure in the vicinity of\nthe super-massive black hole in M87 and also the Galactic center Sgr A*.\n",
"title": "Imaging the Schwarzschild-radius-scale Structure of M87 with the Event Horizon Telescope using Sparse Modeling"
}
| null | null | null | null | true | null |
4788
| null |
Default
| null | null |
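The record above combines an $\ell_1$ penalty and total variation for interferometric imaging. Schematically (the notation is illustrative; the paper's exact formulation, including how visibility amplitudes and closure phases enter, is more involved), the imaging problem has the form

```latex
% Schematic l1 + total-variation regularized imaging objective (illustrative notation)
\hat{x} = \arg\min_{x \ge 0}\;
\bigl\| V - \mathcal{F}x \bigr\|_2^{2}
\;+\; \Lambda_{\ell}\,\|x\|_1
\;+\; \Lambda_{t}\,\mathrm{TV}(x)
```

where $x$ is the image, $\mathcal{F}$ the sampled Fourier measurement operator, $V$ the data, and the two regularization weights $\Lambda_{\ell}$, $\Lambda_{t}$ are the parameters the abstract selects by cross validation.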
null |
{
"abstract": " Using process algebra, this paper describes the formalisation of the\nprocess/semantics behind the purely event-driven programming language.\n",
"title": "The process of purely event-driven programs"
}
| null | null | null | null | true | null |
4789
| null |
Default
| null | null |
null |
{
"abstract": " The profiles of the broad emission lines of active galactic nuclei (AGNs) and\nthe time delays in their response to changes in the ionizing continuum (\"lags\")\ngive information about the structure and kinematics of the inner regions of\nAGNs. Line profiles are also our main way of estimating the masses of the\nsupermassive black holes (SMBHs). However, the profiles often show\nill-understood, asymmetric structure and velocity-dependent lags vary with\ntime. Here we show that partial obscuration of the broad-line region (BLR) by\noutflowing, compact, dusty clumps produces asymmetries and velocity-dependent\nlags similar to those observed. Our model explains previously inexplicable\nchanges in the ratios of the hydrogen lines with time and velocity, the lack of\ncorrelation of changes in line profiles with variability of the central engine,\nthe velocity dependence of lags, and the change of lags with time. We propose\nthat changes on timescales longer than the light-crossing time do not come from\ndynamical changes in the BLR, but are a natural result of the effect of\noutflowing dusty clumps driven by radiation pressure acting on the dust. The\nmotion of these clumps offers an explanation of long-term changes in\npolarization. The effects of the dust complicate the study of the structure and\nkinematics of the BLR and the search for sub-parsec SMBH binaries. Partial\nobscuration of the accretion disc can also provide the local fluctuations in\nluminosity that can explain sizes deduced from microlensing.\n",
"title": "Partial dust obscuration in active galactic nuclei as a cause of broad-line profile and lag variability, and apparent accretion disc inhomogeneities"
}
| null | null | null | null | true | null |
4790
| null |
Default
| null | null |
null |
{
"abstract": " Agricultural robots are expected to increase yields in a sustainable way and\nautomate precision tasks, such as weeding and plant monitoring. At the same\ntime, they move in a continuously changing, semi-structured field environment,\nin which features can hardly be found and reproduced at a later time.\nChallenges for Lidar and visual detection systems stem from the fact that\nplants can be very small, overlapping and have a steadily changing appearance.\nTherefore, a popular way to localize vehicles with high accuracy is based on\nex- pensive global navigation satellite systems and not on natural landmarks.\nThe contribution of this work is a novel image- based plant localization\ntechnique that uses the time-invariant stem emerging point as a reference. Our\napproach is based on a fully convolutional neural network that learns landmark\nlocalization from RGB and NIR image input in an end-to-end manner. The network\nperforms pose regression to generate a plant location likelihood map. Our\napproach allows us to cope with visual variances of plants both for different\nspecies and different growth stages. We achieve high localization accuracies as\nshown in detailed evaluations of a sugar beet cultivation phase. In experiments\nwith our BoniRob we demonstrate that detections can be robustly reproduced with\ncentimeter accuracy.\n",
"title": "From Plants to Landmarks: Time-invariant Plant Localization that uses Deep Pose Regression in Agricultural Fields"
}
| null | null | null | null | true | null |
4791
| null |
Default
| null | null |
null |
{
"abstract": " Textbooks in applied mathematics often use graphs to explain the meaning of\nformulae, even though their benefit is still not fully explored. To test\nprocesses underlying this assumed multimedia effect we collected performance\nscores, eye movements, and think-aloud protocols from students solving problems\nin vector calculus with and without graphs. Results showed no overall\nmultimedia effect, but instead an effect to confirm statements that were\naccompanied by graphs, irrespective of whether these statements were true or\nfalse. Eye movement and verbal data shed light on this surprising finding.\nStudents looked proportionally less at the text and the problem statement when\na graph was present. Moreover, they experienced more mental effort with the\ngraph, as indicated by more silent pauses in thinking aloud. Hence, students\nactively processed the graphs. This, however, was not sufficient. Further\nanalysis revealed that the more students looked at the statement, the better\nthey performed. Thus, in the multimedia condition the graph drew students'\nattention and cognitive capacities away from focusing on the statement. A good\nalternative strategy in the multimedia condition was to frequently look between\ngraph and problem statement, and thus to integrate their information. In\nconclusion, graphs influence where students look and what they process, and may\neven mislead them into believing accompanying information. Thus, teachers and\ntextbook designers should be very critical on when to use graphs and carefully\nconsider how the graphs are integrated with other parts of the problem.\n",
"title": "There's more to the multimedia effect than meets the eye: is seeing pictures believing?"
}
| null | null | null | null | true | null |
4792
| null |
Default
| null | null |
null |
{
"abstract": " This paper provides an alternate proof to parts of the Goulden-Slofstra\nformula for enumerating two vertex maps by genus, which is an extension of the\nfamous Harer-Zagier formula that computes the Euler characteristic of the\nmoduli space of curves. This paper also shows a further simplification to the\nGoulden-Slofstra formula. Portions of this alternate proof will be used in a\nsubsequent paper, where it forms a basis for a more general result that applies\nfor a certain class of maps with an arbitrary number of vertices.\n",
"title": "Methods of Enumerating Two Vertex Maps of Arbitrary Genus"
}
| null | null | null | null | true | null |
4793
| null |
Default
| null | null |
null |
{
"abstract": " According to the Butterfield--Isham proposal, to understand quantum gravity\nwe must revise the way we view the universe of mathematics. However, this paper\ndemonstrates that the current elaborations of this programme neglect quantum\ninteractions. The paper then introduces the Faddeev--Mickelsson anomaly which\nobstructs the renormalization of Yang--Mills theory, suggesting that to\ntheorise on many-particle systems requires a many-topos view of mathematics\nitself: higher theory. As our main contribution, the topos theoretic framework\nis used to conceptualise the fact that there are principally three different\nquantisation problems, the differences of which have been ignored not just by\ntopos physicists but by most philosophers of science. We further argue that if\nhigher theory proves out to be necessary for understanding quantum gravity, its\nimplications to philosophy will be foundational: higher theory challenges the\npropositional concept of truth and thus the very meaning of theorising in\nscience.\n",
"title": "Higher Theory and the Three Problems of Physics"
}
| null | null |
[
"Physics",
"Mathematics"
] | null | true | null |
4794
| null |
Validated
| null | null |
null |
{
"abstract": " Emission of electromagnetic radiation by accelerated particles with electric,\ntoroidal and anapole dipole moments is analyzed. It is shown that ellipticity\nof the emitted light can be used to differentiate between electric and toroidal\ndipole sources, and that anapoles, elementary neutral non-radiating\nconfigurations, which consist of electric and toroidal dipoles, can emit light\nunder uniform acceleration. The existence of non-radiating configurations in\nelectrodynamics implies that it is impossible to fully determine the internal\nmakeup of the emitter given only the distribution of the emitted light. Here we\ndemonstrate that there is a loop-hole in this `inverse source problem'. Our\nresults imply that there may be a whole range of new phenomena to be discovered\nby studying the electromagnetic response of matter under acceleration.\n",
"title": "Light emission by accelerated electric, toroidal and anapole dipolar sources"
}
| null | null | null | null | true | null |
4795
| null |
Default
| null | null |
null |
{
"abstract": " Distribution regression has recently attracted much interest as a generic\nsolution to the problem of supervised learning where labels are available at\nthe group level, rather than at the individual level. Current approaches,\nhowever, do not propagate the uncertainty in observations due to sampling\nvariability in the groups. This effectively assumes that small and large groups\nare estimated equally well, and should have equal weight in the final\nregression. We account for this uncertainty with a Bayesian distribution\nregression formalism, improving the robustness and performance of the model\nwhen group sizes vary. We frame our models in a neural network style, allowing\nfor simple MAP inference using backpropagation to learn the parameters, as well\nas MCMC-based inference which can fully propagate uncertainty. We demonstrate\nour approach on illustrative toy datasets, as well as on a challenging problem\nof predicting age from images.\n",
"title": "Bayesian Approaches to Distribution Regression"
}
| null | null | null | null | true | null |
4796
| null |
Default
| null | null |
null |
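To illustrate the point made in the abstract above, that groups estimated from fewer samples should carry less weight, here is a minimal toy sketch (my own assumptions, not the authors' Bayesian model): each group is summarised by its mean embedding, a crude variance-of-the-mean serves as its uncertainty, and a precision-weighted ridge regression gives a MAP-style estimate.

```python
# Minimal sketch (not the paper's model): precision-weighted distribution
# regression where small groups receive proportionally less weight.
import numpy as np

rng = np.random.default_rng(0)

def group_summary(samples):
    """Mean embedding and a crude uncertainty (average variance of the mean)."""
    mu = samples.mean(axis=0)
    var_of_mean = samples.var(axis=0).mean() / len(samples)
    return mu, var_of_mean

# Toy data: 20 groups of very different sizes; the label is the group's latent mean.
sizes = rng.integers(3, 200, size=20)
thetas = rng.normal(size=20)
groups = [rng.normal(loc=t, size=(n, 5)) for t, n in zip(thetas, sizes)]
labels = np.array([g.mean() for g in groups])

X = np.array([group_summary(g)[0] for g in groups])                  # group features
w = np.array([1.0 / (1e-3 + group_summary(g)[1]) for g in groups])   # precision weights

# Weighted ridge regression: a MAP estimate under a Gaussian prior on the weights.
lam = 1e-2
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X + lam * np.eye(X.shape[1]), X.T @ W @ labels)
```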
{
"abstract": " By using the state-of-the-art microscopy and spectroscopy in\naberration-corrected scanning transmission electron microscopes, we determine\nthe atomic arrangements, occupancy, elemental distribution, and the electronic\nstructures of dislocation cores in the 10°tilted SrTiO3 bicrystal. We\nidentify that there are two different types of oxygen deficient dislocation\ncores, i.e., the SrO plane terminated Sr0.82Ti0.85O3-x (Ti3.67+, 0.48<x<0.91)\nand TiO2 plane terminated Sr0.63Ti0.90O3-y (Ti3.60+, 0.57<y<1). They have the\nsame Burgers vector of a[100] but different atomic arrangements and chemical\nproperties. Besides the oxygen vacancies, Sr vacancies and rocksalt-like\ntitanium oxide reconstruction are also identified in the dislocation core with\nTiO2 plane termination. Our atomic-scale study reveals the true atomic\nstructures and chemistry of individual dislocation cores, providing useful\ninsights into understanding the properties of dislocations and grain\nboundaries.\n",
"title": "Atomic-Scale Structure Relaxation, Chemistry and Charge Distribution of Dislocation Cores in SrTiO3"
}
| null | null | null | null | true | null |
4797
| null |
Default
| null | null |
null |
{
"abstract": " The Henon-Heiles system was originally proposed to describe the dynamical\nbehavior of galaxies, but this system has been widely applied in dynamical\nsystems by exhibit great details in phase space. This work presents the\nformalism to describe Henon-Heiles system and a qualitative approach of\ndynamics behavior. The growth of chaotic region in phase space is observed by\nPoincare Surface of Section when the total energy increases. Island of\nregularity remain around stable points and relevants phenomena appear, such as\nsticky.\n",
"title": "Transição de fase no sistema de Hénon-Heiles (Phase transition in the Henon-Heiles system)"
}
| null | null | null | null | true | null |
4798
| null |
Default
| null | null |
null |
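The Poincare surface of section mentioned in the abstract above can be reproduced numerically. The sketch below is an illustration only (not code from the paper): it integrates the standard Henon-Heiles Hamiltonian H = (p_x^2 + p_y^2)/2 + (x^2 + y^2)/2 + x^2 y - y^3/3 and records crossings of the x = 0 plane with p_x > 0.

```python
# Minimal sketch: Poincare surface of section for the Henon-Heiles system.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, y, px, py = s
    return [px, py, -x - 2.0 * x * y, -y - x**2 + y**2]

def poincare_section(y0, py0, E, t_max=500.0):
    # Fix px from energy conservation at x = 0: E = (px^2 + py^2)/2 + V(0, y).
    V = 0.5 * y0**2 - y0**3 / 3.0
    px0 = np.sqrt(2.0 * (E - V) - py0**2)   # requires 2*(E - V) >= py0**2
    sol = solve_ivp(rhs, (0.0, t_max), [0.0, y0, px0, py0],
                    rtol=1e-9, atol=1e-9, max_step=0.05)
    x = sol.y[0]
    pts = []
    for i in range(len(sol.t) - 1):
        if x[i] < 0.0 <= x[i + 1]:           # crossing x = 0 with px > 0
            w = -x[i] / (x[i + 1] - x[i])    # linear interpolation fraction
            y_c = sol.y[1, i] + w * (sol.y[1, i + 1] - sol.y[1, i])
            py_c = sol.y[3, i] + w * (sol.y[3, i + 1] - sol.y[3, i])
            pts.append((y_c, py_c))
    return np.array(pts)

section = poincare_section(y0=0.1, py0=0.0, E=1.0 / 8.0)   # (y, py) section points
```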
{
"abstract": " We study the large time behaviour of the mass (size) of particles described\nby the fragmentation equation with homogeneous breakup kernel. We give\nnecessary and sufficient conditions for the convergence of solutions to the\nunique self-similar solution.\n",
"title": "Self-similar solutions of fragmentation equations revisited"
}
| null | null | null | null | true | null |
4799
| null |
Default
| null | null |
null |
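For context only, a standard textbook form of the fragmentation equation with a homogeneous breakup kernel (the paper's exact setting may differ) is

$$\partial_t u(t,x) = -a(x)\,u(t,x) + \int_x^{\infty} a(y)\,b(x,y)\,u(t,y)\,\mathrm{d}y, \qquad b(x,y) = \frac{1}{y}\,h\!\left(\frac{x}{y}\right),$$

where $u(t,x)$ is the density of particles of size $x$, $a$ is the overall fragmentation rate, and $h$ fixes the distribution of daughter sizes.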
{
"abstract": " Although neural machine translation (NMT) with the encoder-decoder framework\nhas achieved great success in recent times, it still suffers from some\ndrawbacks: RNNs tend to forget old information which is often useful and the\nencoder only operates through words without considering word relationship. To\nsolve these problems, we introduce a relation networks (RN) into NMT to refine\nthe encoding representations of the source. In our method, the RN first\naugments the representation of each source word with its neighbors and reasons\nall the possible pairwise relations between them. Then the source\nrepresentations and all the relations are fed to the attention module and the\ndecoder together, keeping the main encoder-decoder architecture unchanged.\nExperiments on two Chinese-to-English data sets in different scales both show\nthat our method can outperform the competitive baselines significantly.\n",
"title": "Refining Source Representations with Relation Networks for Neural Machine Translation"
}
| null | null | null | null | true | null |
4800
| null |
Default
| null | null |
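To make the pairwise-relation idea in the abstract above concrete, here is a minimal sketch (an illustrative assumption, not the paper's exact module): a layer that scores every pair of source-word representations with a small MLP and augments each word with the sum of its relation features.

```python
# Minimal sketch (not the paper's module): a relation layer over source words.
import torch
import torch.nn as nn

class RelationLayer(nn.Module):
    def __init__(self, d_model, d_rel=128):
        super().__init__()
        # g scores a concatenated pair (h_i, h_j) and maps it back to d_model.
        self.g = nn.Sequential(nn.Linear(2 * d_model, d_rel), nn.ReLU(),
                               nn.Linear(d_rel, d_model))

    def forward(self, h):                            # h: (batch, seq_len, d_model)
        b, n, d = h.shape
        hi = h.unsqueeze(2).expand(b, n, n, d)       # word i, broadcast over j
        hj = h.unsqueeze(1).expand(b, n, n, d)       # word j, broadcast over i
        rel = self.g(torch.cat([hi, hj], dim=-1))    # pairwise relation features
        return h + rel.sum(dim=2)                    # augment each word with its relations

layer = RelationLayer(d_model=256)
enriched = layer(torch.rand(2, 10, 256))             # shape (2, 10, 256)
```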