Dataset schema (one row per field):

  field             type
  ----------------  --------------------------------------
  text              null
  inputs            dict ({ "abstract", "title" })
  prediction        null
  prediction_agent  null
  annotation        list (of label strings)
  annotation_agent  null
  multi_label       bool (1 class: true)
  explanation       null
  id                string (lengths 1-5)
  metadata          null
  status            string (2 classes: Default, Validated)
  event_timestamp   null
  metrics           null

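A single row under this schema can be built and round-tripped programmatically. A minimal Python sketch; the field names come from the schema, but the sample values below are illustrative, not an actual row of the dataset:

```python
import json

# Field names follow the schema; values here are illustrative only.
record = {
    "text": None,
    "inputs": {"abstract": "An illustrative abstract.", "title": "An illustrative title"},
    "prediction": None,
    "prediction_agent": None,
    "annotation": ["Physics"],   # list of label strings once annotated
    "annotation_agent": None,
    "multi_label": True,         # bool: a row may carry several labels
    "explanation": None,
    "id": "16108",               # string of length 1-5
    "metadata": None,
    "status": "Validated",       # one of the 2 classes: "Default" or "Validated"
    "event_timestamp": None,
    "metrics": None,
}

# Round-trip through JSON, the serialization used in the dump.
line = json.dumps(record)
print(json.loads(line)["status"])            # Validated
print(json.loads(line)["inputs"]["title"])   # An illustrative title
```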
{ "abstract": " Browsing and finding relevant information for Bangladeshi laws is a challenge\nfaced by all law students and researchers in Bangladesh, and by citizens who\nwant to learn about any legal procedure. Some law archives in Bangladesh are\ndigitized, but lack proper tools to organize the data meaningfully. We present\na text visualization tool that utilizes machine learning techniques to make the\nsearching of laws quicker and easier. Using Doc2Vec to layout law article\nnodes, link mining techniques to visualize relevant citation networks, and\nnamed entity recognition to quickly find relevant sections in long law\narticles, our tool provides a faster and better search experience to the users.\nQualitative feedback from law researchers, students, and government officials\nshow promise for visually intuitive search tools in the context of\ngovernmental, legal, and constitutional data in developing countries, where\ndigitized data does not necessarily pave the way towards an easy access to\ninformation.\n", "title": "A visual search engine for Bangladeshi laws" }
id: 16101 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Let $P$ be a graph with a vertex $v$ such that $P\\backslash v$ is a forest,\nand let $Q$ be an outerplanar graph. We prove that there exists a number\n$p=p(P,Q)$ such that every 2-connected graph of path-width at least $p$ has a\nminor isomorphic to $P$ or $Q$. This result answers a question of Seymour and\nimplies a conjecture of Marshall and Wood. The proof is based on a new property\nof tree-decompositions.\n", "title": "Minors of two-connected graphs of large path-width" }
id: 16102 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " We extend the Theory of Computation on real numbers, continuous real\nfunctions, and bounded closed Euclidean subsets, to compact metric spaces\n$(X,d)$: thereby generically including computational and optimization problems\nover higher types, such as the compact 'hyper' spaces of (i) nonempty closed\nsubsets of $X$ w.r.t. Hausdorff metric, and of (ii) equicontinuous functions on\n$X$. The thus obtained Cartesian closure is shown to exhibit the same\nstructural properties as in the Euclidean case, particularly regarding function\npre/image. This allows us to assert the computability of (iii) Fréchet\nDistances between curves and between loops, as well as of (iv)\nconstrained/Shape Optimization.\n", "title": "Computable Operations on Compact Subsets of Metric Spaces with Applications to Fréchet Distance and Shape Optimization" }
id: 16103 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Capsule Networks have shown encouraging results on \\textit{defacto} benchmark\ncomputer vision datasets such as MNIST, CIFAR and smallNORB. Although, they are\nyet to be tested on tasks where (1) the entities detected inherently have more\ncomplex internal representations and (2) there are very few instances per class\nto learn from and (3) where point-wise classification is not suitable. Hence,\nthis paper carries out experiments on face verification in both controlled and\nuncontrolled settings that together address these points. In doing so we\nintroduce \\textit{Siamese Capsule Networks}, a new variant that can be used for\npairwise learning tasks. The model is trained using contrastive loss with\n$\\ell_2$-normalized capsule encoded pose features. We find that \\textit{Siamese\nCapsule Networks} perform well against strong baselines on both pairwise\nlearning datasets, yielding best results in the few-shot learning setting where\nimage pairs in the test set contain unseen subjects.\n", "title": "Siamese Capsule Networks" }
id: 16104 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " An adversarial attack is an exploitative process in which minute alterations\nare made to natural inputs, causing the inputs to be misclassified by neural\nmodels. In the field of speech recognition, this has become an issue of\nincreasing significance. Although adversarial attacks were originally\nintroduced in computer vision, they have since infiltrated the realm of speech\nrecognition. In 2017, a genetic attack was shown to be quite potent against the\nSpeech Commands Model. Limited-vocabulary speech classifiers, such as the\nSpeech Commands Model, are used in a variety of applications, particularly in\ntelephony; as such, adversarial examples produced by this attack pose as a\nmajor security threat. This paper explores various methods of detecting these\nadversarial examples with combinations of audio preprocessing. One particular\ncombined defense incorporating compressions, speech coding, filtering, and\naudio panning was shown to be quite effective against the attack on the Speech\nCommands Model, detecting audio adversarial examples with 93.5% precision and\n91.2% recall.\n", "title": "Isolated and Ensemble Audio Preprocessing Methods for Detecting Adversarial Examples against Automatic Speech Recognition" }
id: 16105 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " A detailed Monte Carlo-study of the satisfiability threshold for random 3-SAT\nhas been undertaken. In combination with a monotonicity assumption we find that\nthe threshold for random 3-SAT satisfies $\\alpha_3 \\leq 4.262$. If the\nassumption is correct, this means that the actual threshold value for $k=3$ is\nlower than that given by the cavity method. In contrast the latter has recently\nbeen shown to give the correct value for large $k$. Our result thus indicate\nthat there are distinct behaviors for $k$ above and below some critical $k_c$,\nand the cavity method may provide a correct mean-field picture for the range\nabove $k_c$.\n", "title": "Revisiting the cavity-method threshold for random 3-SAT" }
id: 16106 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " The FEAST eigenvalue algorithm is a subspace iteration algorithm that uses\ncontour integration in the complex plane to obtain the eigenvectors of a matrix\nfor the eigenvalues that are located in any user-defined search interval. By\ncomputing small numbers of eigenvalues in specific regions of the complex\nplane, FEAST is able to naturally parallelize the solution of eigenvalue\nproblems by solving for multiple eigenpairs simultaneously. The traditional\nFEAST algorithm is implemented by directly solving collections of shifted\nlinear systems of equations; in this paper, we describe a variation of the\nFEAST algorithm that uses iterative Krylov subspace algorithms for solving the\nshifted linear systems inexactly. We show that this iterative FEAST algorithm\n(which we call IFEAST) is mathematically equivalent to a block Krylov subspace\nmethod for solving eigenvalue problems. By using Krylov subspaces indirectly\nthrough solving shifted linear systems, rather than directly for projecting the\neigenvalue problem, IFEAST is able to solve eigenvalue problems using very\nlarge dimension Krylov subspaces, without ever having to store a basis for\nthose subspaces. IFEAST thus combines the flexibility and power of Krylov\nmethods, requiring only matrix-vector multiplication for solving eigenvalue\nproblems, with the natural parallelism of the traditional FEAST algorithm. We\ndiscuss the relationship between IFEAST and more traditional Krylov methods,\nand provide numerical examples illustrating its behavior.\n", "title": "An improved Krylov eigenvalue strategy using the FEAST algorithm with inexact system solves" }
id: 16107 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Planar magnetic structures (PMSs) are periods in the solar wind during which\ninterplanetary magnetic field vectors are nearly parallel to a single plane.\nOne of the specific regions where PMSs have been reported are coronal mass\nejection (CME)-driven sheaths. We use here an automated method to identify PMSs\nin 95 CME sheath regions observed in-situ by the Wind and ACE spacecraft\nbetween 1997 and 2015. The occurrence and location of the PMSs are related to\nvarious shock, sheath and CME properties. We find that PMSs are ubiquitous in\nCME sheaths; 85% of the studied sheath regions had PMSs with the mean duration\nof 6.0 hours. In about one-third of the cases the magnetic field vectors\nfollowed a single PMS plane that covered a significant part (at least 67%) of\nthe sheath region. Our analysis gives strong support for two suggested PMS\nformation mechanisms: the amplification and alignment of solar wind\ndiscontinuities near the CME-driven shock and the draping of the magnetic field\nlines around the CME ejecta. For example, we found that the shock and PMS plane\nnormals generally coincided for the events where the PMSs occurred near the\nshock (68% of the PMS plane normals near the shock were separated by less than\n20° from the shock normal), while deviations were clearly larger when PMSs\noccurred close to the ejecta leading edge. In addition, PMSs near the shock\nwere generally associated with lower upstream plasma beta than the cases where\nPMSs occurred near the leading edge of the CME. We also demonstrate that the\nplanar parts of the sheath contain a higher amount of strongly southward\nmagnetic field than the non-planar parts, suggesting that planar sheaths are\nmore likely to drive magnetospheric activity.\n", "title": "Planar magnetic structures in coronal mass ejection-driven sheath regions" }
id: 16108 | status: Validated | multi_label: true | annotation: ["Physics"] | other fields: null

{ "abstract": " The feature map obtained from the denoising autoencoder (DAE) is investigated\nby determining transportation dynamics of the DAE, which is a cornerstone for\ndeep learning. Despite the rapid development in its application, deep neural\nnetworks remain analytically unexplained, because the feature maps are nested\nand parameters are not faithful. In this paper, we address the problem of the\nformulation of nested complex of parameters by regarding the feature map as a\ntransport map. Even when a feature map has different dimensions between input\nand output, we can regard it as a transportation map by considering that both\nthe input and output spaces are embedded in a common high-dimensional space. In\naddition, the trajectory is a geometric object and thus, is independent of\nparameterization. In this manner, transportation can be regarded as a universal\ncharacter of deep neural networks. By determining and analyzing the\ntransportation dynamics, we can understand the behavior of a deep neural\nnetwork. In this paper, we investigate a fundamental case of deep neural\nnetworks: the DAE. We derive the transport map of the DAE, and reveal that the\ninfinitely deep DAE transports mass to decrease a certain quantity, such as\nentropy, of the data distribution. These results though analytically simple,\nshed light on the correspondence between deep neural networks and the\nWasserstein gradient flows.\n", "title": "Transportation analysis of denoising autoencoders: a novel method for analyzing deep neural networks" }
id: 16109 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Learning algorithms for energy based Boltzmann architectures that rely on\ngradient descent are in general computationally prohibitive, typically due to\nthe exponential number of terms involved in computing the partition function.\nIn this way one has to resort to approximation schemes for the evaluation of\nthe gradient. This is the case of Restricted Boltzmann Machines (RBM) and its\nlearning algorithm Contrastive Divergence (CD). It is well-known that CD has a\nnumber of shortcomings, and its approximation to the gradient has several\ndrawbacks. Overcoming these defects has been the basis of much research and new\nalgorithms have been devised, such as persistent CD. In this manuscript we\npropose a new algorithm that we call Weighted CD (WCD), built from small\nmodifications of the negative phase in standard CD. However small these\nmodifications may be, experimental work reported in this paper suggest that WCD\nprovides a significant improvement over standard CD and persistent CD at a\nsmall additional computational cost.\n", "title": "Weighted Contrastive Divergence" }
id: 16110 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " The dark energy plus cold dark matter ($\\Lambda$CDM) cosmological model has\nbeen a demonstrably successful framework for predicting and explaining the\nlarge-scale structure of Universe and its evolution with time. Yet on length\nscales smaller than $\\sim 1$ Mpc and mass scales smaller than $\\sim 10^{11}\nM_{\\odot}$, the theory faces a number of challenges. For example, the observed\ncores of many dark-matter dominated galaxies are both less dense and less cuspy\nthan naively predicted in $\\Lambda$CDM. The number of small galaxies and dwarf\nsatellites in the Local Group is also far below the predicted count of low-mass\ndark matter halos and subhalos within similar volumes. These issues underlie\nthe most well-documented problems with $\\Lambda$CDM: Cusp/Core, Missing\nSatellites, and Too-Big-to-Fail. The key question is whether a better\nunderstanding of baryon physics, dark matter physics, or both will be required\nto meet these challenges. Other anomalies, including the observed planar and\norbital configurations of Local Group satellites and the tight baryonic/dark\nmatter scaling relations obeyed by the galaxy population, have been less\nthoroughly explored in the context of $\\Lambda$CDM theory. Future surveys to\ndiscover faint, distant dwarf galaxies and to precisely measure their masses\nand density structure hold promising avenues for testing possible solutions to\nthe small-scale challenges going forward. Observational programs to constrain\nor discover and characterize the number of truly dark low-mass halos are among\nthe most important, and achievable, goals in this field over then next decade.\nThese efforts will either further verify the $\\Lambda$CDM paradigm or demand a\nsubstantial revision in our understanding of the nature of dark matter.\n", "title": "Small-Scale Challenges to the $Λ$CDM Paradigm" }
id: 16111 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Alternating minimization heuristics seek to solve a (difficult) global\noptimization task through iteratively solving a sequence of (much easier) local\noptimization tasks on different parts (or blocks) of the input parameters.\nWhile popular and widely applicable, very few examples of this heuristic are\nrigorously shown to converge to optimality, and even fewer to do so\nefficiently.\nIn this paper we present a general framework which is amenable to rigorous\nanalysis, and expose its applicability. Its main feature is that the local\noptimization domains are each a group of invertible matrices, together\nnaturally acting on tensors, and the optimization problem is minimizing the\nnorm of an input tensor under this joint action. The solution of this\noptimization problem captures a basic problem in Invariant Theory, called the\nnull-cone problem.\nThis algebraic framework turns out to encompass natural computational\nproblems in combinatorial optimization, algebra, analysis, quantum information\ntheory, and geometric complexity theory. It includes and extends to high\ndimensions the recent advances on (2-dimensional) operator scaling.\nOur main result is a fully polynomial time approximation scheme for this\ngeneral problem, which may be viewed as a multi-dimensional scaling algorithm.\nThis directly leads to progress on some of the problems in the areas above, and\na unified view of others. We explain how faster convergence of an algorithm for\nthe same problem will allow resolving central open problems.\nOur main techniques come from Invariant Theory, and include its rich\nnon-commutative duality theory, and new bounds on the bitsizes of coefficients\nof invariant polynomials.\nThey enrich the algorithmic toolbox of this very\ncomputational field of mathematics, and are directly related to some challenges\nin geometric complexity theory (GCT).\n", "title": "Alternating minimization, scaling algorithms, and the null-cone problem from invariant theory" }
id: 16112 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " In this paper, we prove that positivity of denominator vectors holds for any\nskew-symmetric cluster algebra.\n", "title": "Positivity of denominator vectors of cluster algebras" }
id: 16113 | status: Validated | multi_label: true | annotation: ["Mathematics"] | other fields: null

{ "abstract": " For primordial black holes (PBH) to be the dark matter in single-field\ninflation, the slow-roll approximation must be violated by at least ${\\cal\nO}(1)$ in order to enhance the curvature power spectrum within the required\nnumber of efolds between CMB scales and PBH mass scales. Power spectrum\npredictions which rely on the inflaton remaining on the slow-roll attractor can\nfail dramatically leading to qualitatively incorrect conclusions in models like\nan inflection potential and misestimate the mass scale in a running mass model.\nWe show that an optimized temporal evaluation of the Hubble slow-roll\nparameters to second order remains a good description for a wide range of PBH\nformation models where up to a $10^7$ amplification of power occurs in $10$\nefolds or more.\n", "title": "Primordial Black Holes and Slow-Roll Violation" }
id: 16114 | status: Validated | multi_label: true | annotation: ["Physics"] | other fields: null

{ "abstract": " Supervised speech separation uses supervised learning algorithms to learn a\nmapping from an input noisy signal to an output target. With the fast\ndevelopment of deep learning, supervised separation has become the most\nimportant direction in speech separation area in recent years. For the\nsupervised algorithm, training target has a significant impact on the\nperformance. Ideal ratio mask is a commonly used training target, which can\nimprove the speech intelligibility and quality of the separated speech.\nHowever, it does not take into account the correlation between noise and clean\nspeech. In this paper, we use the optimal ratio mask as the training target of\nthe deep neural network (DNN) for speech separation. The experiments are\ncarried out under various noise environments and signal to noise ratio (SNR)\nconditions. The results show that the optimal ratio mask outperforms other\ntraining targets in general.\n", "title": "Using Optimal Ratio Mask as Training Target for Supervised Speech Separation" }
id: 16115 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " A challenge in isogeometric analysis is constructing analysis-suitable\nvolumetric meshes which can accurately represent the geometry of a given\nphysical domain. In this paper, we propose a method to derive a spline-based\nrepresentation of a domain of interest from voxel-based data. We show an\nefficient way to obtain a boundary representation of the domain by a level-set\nfunction. Then, we use the geometric information from the boundary (the normal\nvectors and curvature) to construct a matching C1 representation with\nhierarchical cubic splines. The approximation is done by a single template and\nlinear transformations (scaling, translations and rotations) without the need\nfor solving an optimization problem. We illustrate our method with several\nexamples in two and three dimensions, and show good performance on some\nstandard benchmark test problems.\n", "title": "Volumetric parametrization from a level set boundary representation with PHT Splines" }
id: 16116 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " This paper provides a theoretical justification of the superior\nclassification performance of deep rectifier networks over shallow rectifier\nnetworks from the geometrical perspective of piecewise linear (PWL) classifier\nboundaries. We show that, for a given threshold on the approximation error, the\nrequired number of boundary facets to approximate a general smooth boundary\ngrows exponentially with the dimension of the data, and thus the number of\nboundary facets, referred to as boundary resolution, of a PWL classifier is an\nimportant quality measure that can be used to estimate a lower bound on the\nclassification errors. However, learning naively an exponentially large number\nof boundary facets requires the determination of an exponentially large number\nof parameters and also requires an exponentially large number of training\npatterns. To overcome this issue of \"curse of dimensionality\", compressive\nrepresentations of high resolution classifier boundaries are required. To show\nthe superior compressive power of deep rectifier networks over shallow\nrectifier networks, we prove that the maximum boundary resolution of a single\nhidden layer rectifier network classifier grows exponentially with the number\nof units when this number is smaller than the dimension of the patterns. When\nthe number of units is larger than the dimension of the patterns, the growth\nrate is reduced to a polynomial order. Consequently, the capacity of generating\na high resolution boundary will increase if the same large number of units are\narranged in multiple layers instead of a single hidden layer.\nTaking high\ndimensional spherical boundaries as examples, we show how deep rectifier\nnetworks can utilize geometric symmetries to approximate a boundary with the\nsame accuracy but with a significantly fewer number of parameters than single\nhidden layer nets.\n", "title": "On the Compressive Power of Deep Rectifier Networks for High Resolution Representation of Class Boundaries" }
id: 16117 | status: Validated | multi_label: true | annotation: ["Computer Science", "Statistics"] | other fields: null

{ "abstract": " We report the first experimental demonstration of frequency-locking of an\nextended-cavity quantum-cascade-laser (EC-QCL) to a near-infrared frequency\ncomb. The locking scheme is applied to carry out absolute spectroscopy of N2O\nlines near 7.87 {\\mu}m with an accuracy of ~60 kHz. Thanks to a single mode\noperation over more than 100 cm^{-1}, the comb-locked EC-QCL shows great\npotential for the accurate retrieval of line center frequencies in a spectral\nregion that is currently outside the reach of broadly tunable cw sources,\neither based on difference frequency generation or optical parametric\noscillation. The approach described here can be straightforwardly extended up\nto 12 {\\mu}m, which is the current wavelength limit for commercial cw EC-QCLs.\n", "title": "Absolute spectroscopy near 7.8 μm with a comb-locked extended-cavity quantum-cascade-laser" }
id: 16118 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " We study upper bounds on Weierstrass primary factors and discuss their\napplication in spectral theory. One of the main aims of this note is to draw\nattention to works of Blumenthal and Denjoy from 1910, but we also provide some\nnew results and some numerical computations of our own.\n", "title": "Some remarks on upper bounds for Weierstrass primary factors and their application in spectral theory" }
id: 16119 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " We prove that certain conditions on multigraded Betti numbers of a simplicial\ncomplex $K$ imply existence of a higher Massey product in cohomology of a\nmoment-angle-complex $\\mathcal Z_K$, which contains a unique element (a\nstrictly defined product). Using the simplicial multiwedge construction, we\nfind a family $\\mathcal{F}$ of polyhedral products being smooth closed\nmanifolds such that for any $l,r\\geq 2$ there exists an $l$-connected manifold\n$M\\in\\mathcal F$ with a nontrivial strictly defined $r$-fold Massey product in\n$H^{*}(M)$. As an application to homological algebra, we determine a wide class\nof triangulated spheres $K$ such that a nontrivial higher Massey product of any\norder may exist in Koszul homology of their Stanley--Reisner rings. As an\napplication to rational homotopy theory, we establish a combinatorial criterion\nfor a simple graph $\\Gamma$ to provide a (rationally) formal generalized\nmoment-angle manifold $\\mathcal Z_{P}^{J}=(D^{2j_{i}},S^{2j_{i}-1})^{\\partial\nP^*}$, $J=(j_{1},\\ldots,j_m)$ over a graph-associahedron $P=P_{\\Gamma}$ and\ncompute all the diffeomorphism types of formal moment-angle manifolds over\ngraph-associahedra.\n", "title": "Topology of polyhedral products over simplicial multiwedges" }
id: 16120 | status: Validated | multi_label: true | annotation: ["Mathematics"] | other fields: null

{ "abstract": " Recent work in learning ontologies (hierarchical and partially-ordered\nstructures) has leveraged the intrinsic geometry of spaces of learned\nrepresentations to make predictions that automatically obey complex structural\nconstraints. We explore two extensions of one such model, the order-embedding\nmodel for hierarchical relation learning, with an aim towards improved\nperformance on text data for commonsense knowledge representation. Our first\nmodel jointly learns ordering relations and non-hierarchical knowledge in the\nform of raw text. Our second extension exploits the partial order structure of\nthe training data to find long-distance triplet constraints among embeddings\nwhich are poorly enforced by the pairwise training procedure. We find that both\nincorporating free text and augmented training constraints improve over the\noriginal order-embedding model and other strong baselines.\n", "title": "Improved Representation Learning for Predicting Commonsense Ontologies" }
id: 16121 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " In the past decade, the discovery of active pharmaceutical substances with\nhigh therapeutic value but poor aqueous solubility has increased, thus making\nit challenging to formulate these compounds as oral dosage forms. The\nbioavailability of these drugs can be increased by formulating these drugs as\nan amorphous drug delivery system. Use of porous media like mesoporous silica\nhas been investigated as a potential means to increase the solubility of poorly\nsoluble drugs and to stabilize the amorphous drug delivery system. These\nmaterials have nanosized capillaries and the large surface area which enable\nthe materials to accommodate high drug loading and promote the controlled and\nfast release. Therefore, mesoporous silica has been used as a carrier in the\nsolid dispersion to form an amorphous solid dispersion (ASD). Mesoporous silica\nis also being used as an adsorbent in a conventional solid dispersion, which\nhas many useful aspects. This review focuses on the use of mesoporous silica in\nASD as potential means to increase the dissolution rate and to provide or\nincrease the stability of the ASD. First, an overview of mesoporous silica and\nthe classification is discussed. Subsequently, methods of drug incorporation,\nthe stability of dispersion and, much more are discussed.\n", "title": "Mesoporous Silica as a Carrier for Amorphous Solid Dispersion" }
id: 16122 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " We define and study a probability monad on the category of complete metric\nspaces and short maps. It assigns to each space the space of Radon probability\nmeasures on it with finite first moment, equipped with the\nKantorovich-Wasserstein distance. This monad is analogous to the Giry monad on\nthe category of Polish spaces, and it extends a construction due to van Breugel\nfor compact and for 1-bounded complete metric spaces.\nWe prove that this Kantorovich monad arises from a colimit construction on\nfinite powers, which formalizes the intuition that probability measures are\nlimits of finite samples. The proof relies on a criterion for when an ordinary\nleft Kan extension of lax monoidal functors is a monoidal Kan extension. The\ncolimit characterization allows the development of integration theory and the\ntreatment of measures on spaces of measures, without measure theory.\nWe also show that the category of algebras of the Kantorovich monad is\nequivalent to the category of closed convex subsets of Banach spaces with short\naffine maps as morphisms.\n", "title": "A Probability Monad as the Colimit of Finite Powers" }
id: 16123 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Design of robotic systems that safely and efficiently operate in uncertain\noperational conditions, such as rehabilitation and physical assistance robots,\nremains an important challenge in the field. Current methods for the design of\nenergy efficient series elastic actuators use an optimization formulation that\ntypically assumes known operational conditions. This approach could lead to\nactuators that cannot perform in uncertain environments because elongation,\nspeed, or torque requirements may be beyond actuator specifications when the\noperation deviates from its nominal conditions. Addressing this gap, we propose\na convex optimization formulation to design the stiffness of series elastic\nactuators to minimize energy consumption and satisfy actuator constraints\ndespite uncertainty due to manufacturing of the spring, unmodeled dynamics,\nefficiency of the transmission, and the kinematics and kinetics of the load. In\nour formulation, we express energy consumption as a scalar convex-quadratic\nfunction of compliance. In the unconstrained case, this quadratic equation\nprovides an analytical solution to the optimal value of stiffness that\nminimizes energy consumption for arbitrary periodic reference trajectories. As\nactuator constraints, we consider peak motor torque, peak motor velocity,\nlimitations due to the speed-torque relationship of DC motors, and peak\nelongation of the spring. As a simulation case study, we apply our formulation\nto the robust design of a series elastic actuator for a powered prosthetic\nankle. Our simulation results indicate that a small trade-off between energy\nefficiency and robustness is justified to design actuators that can operate\nwith uncertainty.\n", "title": "Robust Optimal Design of Energy Efficient Series Elastic Actuators: Application to a Powered Prosthetic Ankle" }
id: 16124 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " The aim of this paper is to show both analytically and numerically the\nexistence of a subwavelength phononic bandgap in bubble phononic crystals. The\nkey is an original formula for the quasi-periodic Minnaert resonance\nfrequencies of an arbitrarily shaped bubble. The main findings in this paper\nare illustrated with a variety of numerical experiments.\n", "title": "Subwavelength phononic bandgap opening in bubbly media" }
id: 16125 | status: Default | multi_label: true | annotation: null | other fields: null

{ "abstract": " Scalable quantum photonic systems require efficient single photon sources\ncoupled to integrated photonic devices. Solid-state quantum emitters can\ngenerate single photons with high efficiency, while silicon photonic circuits\ncan manipulate them in an integrated device structure. Combining these two\nmaterial platforms could, therefore, significantly increase the complexity of\nintegrated quantum photonic devices. Here, we demonstrate hybrid integration of\nsolid-state quantum emitters to a silicon photonic device. We develop a\npick-and-place technique that can position epitaxially grown InAs/InP quantum\ndots emitting at telecom wavelengths on a silicon photonic chip\ndeterministically with nanoscale precision. We employ an adiabatic tapering\napproach to transfer the emission from the quantum dots to the waveguide with\nhigh efficiency. We also incorporate an on-chip silicon-photonic beamsplitter\nto perform a Hanbury-Brown and Twiss measurement. Our approach could enable\nintegration of pre-characterized III-V quantum photonic devices into\nlarge-scale photonic structures to enable complex devices composed of many\nemitters and photons.\n", "title": "Hybrid integration of solid-state quantum emitters on a silicon photonic chip" }
null
null
null
null
true
null
16126
null
Default
null
null
null
{ "abstract": " Tasks such as search and recommendation have become increasingly important\nfor E-commerce to deal with the information overload problem. To meet the\ndiverse needs of different users, personalization plays an important role. In\nmany large portals such as Taobao and Amazon, there are a bunch of different\ntypes of search and recommendation tasks operating simultaneously for\npersonalization. However, most of current techniques address each task separately.\nThis is suboptimal as no information about users shared across different tasks.\nIn this work, we propose to learn universal user representations across\nmultiple tasks for more effective personalization. In particular, user\nbehavior sequences (e.g., click, bookmark or purchase of products) are modeled\nby LSTM and attention mechanism by integrating all the corresponding content,\nbehavior and temporal information. User representations are shared and learned\nin an end-to-end setting across multiple tasks. Benefiting from better\ninformation utilization of multiple tasks, the user representations are more\neffective to reflect their interests and are more general to be transferred to new\ntasks. We refer this work as Deep User Perception Network (DUPN) and conduct an\nextensive set of offline and online experiments. Across all tested five different\ntasks, our DUPN consistently achieves better results by giving more effective\nuser representations. Moreover, we deploy DUPN in large scale operational tasks\nin Taobao. Detailed implementations, e.g., incremental model updating, are\nalso provided to address the practical issues for the real world applications.\n", "title": "Perceive Your Users in Depth: Learning Universal User Representations from Multiple E-commerce Tasks" }
null
null
null
null
true
null
16127
null
Default
null
null
null
{ "abstract": " We investigate the ground-state properties and the collective modes of a\ntwo-dimensional two-component Rydberg-dressed Fermi liquid in the\ndipole-blockade regime. We find instability of the homogeneous system toward\nphase separated and density ordered phases, using the Hartree-Fock and\nrandom-phase approximations, respectively. The spectral weight of collective\ndensity oscillations in the homogenous phase also signals the emergence of\ndensity-wave instability. We examine the effect of exchange-hole on the\ndensity-wave instability and on the collective mode dispersion using the\nHubbard local-field factor.\n", "title": "Phase-diagram and dynamics of Rydberg-dressed fermions in two-dimensions" }
null
null
null
null
true
null
16128
null
Default
null
null
null
{ "abstract": " Waves can be used to probe and image an unknown medium. Passive imaging uses\nambient noise sources to illuminate the medium. This paper considers passive\nimaging with moving sensors. The motivation is to generate large synthetic\napertures, which should result in enhanced resolution. However Doppler effects\nand lack of reciprocity significantly affect the imaging process. This paper\ndiscusses the consequences in terms of resolution and it shows how to design\nappropriate imaging functions depending on the sensor trajectory and velocity.\n", "title": "Ambient noise correlation-based imaging with moving sensors" }
null
null
null
null
true
null
16129
null
Default
null
null
null
{ "abstract": " We investigate the Anderson localization in non-Hermitian\nAubry-André-Harper (AAH) models with imaginary potentials added to lattice\nsites to represent the physical gain and loss during the interacting processes\nbetween the system and environment. By checking the mean inverse participation\nratio (MIPR) of the system, we find that different configurations of physical\ngain and loss have very different impacts on the localization phase transition\nin the system. In the case with balanced physical gain and loss added in an\nalternate way to the lattice sites, the critical region (in the case with\np-wave superconducting pairing) and the critical value (both in the situations\nwith and without p-wave pairing) for the Anderson localization phase transition\nwill be significantly reduced, which implies an enhancement of the localization\nprocess. However, if the system is divided into two parts with one of them\ncoupled to physical gain and the other coupled to the corresponding physical\nloss, the transition process will be impacted only in a very mild way. Besides,\nwe also discuss the situations with imbalanced physical gain and loss and find\nthat the existence of random imaginary potentials in the system will also\naffect the localization process while constant imaginary potentials will not.\n", "title": "Anderson localization in the Non-Hermitian Aubry-André-Harper model with physical gain and loss" }
null
null
null
null
true
null
16130
null
Default
null
null
null
{ "abstract": " Using a 10D lift of non-perturbative volume stabilization in type IIB string\ntheory we study the limitations for obtaining de Sitter vacua. Based on this we\nfind that the simplest KKLT vacua with a single Kahler modulus stabilized by a\ngaugino condensate cannot be uplifted to de Sitter. Rather, the uplift flattens\nout due to stronger backreaction on the volume modulus than has previously been\nanticipated, resulting in vacua which are meta-stable and SUSY breaking, but\nthat are always AdS. However, we also show that setups such as racetrack\nstabilization can avoid this issue. In these models it is possible to obtain\nsupersymmetric AdS vacua with a cosmological constant that can be tuned to zero\nwhile retaining finite moduli stabilization. In this regime, it seems that de\nSitter uplifts are possible with negligible backreaction on the internal\nvolume. We exhibit this behavior also from the 10D perspective.\n", "title": "Towards de Sitter from 10D" }
null
null
null
null
true
null
16131
null
Default
null
null
null
{ "abstract": " Background: Pairwise and network meta-analyses using fixed effect and random\neffects models are commonly applied to synthesise evidence from randomised\ncontrolled trials. The models differ in their assumptions and the\ninterpretation of the results. The model choice depends on the objective of the\nanalysis and knowledge of the included studies. Fixed effect models are often\nused because there are too few studies with which to estimate the between-study\nstandard deviation from the data alone. Objectives: The aim is to propose a\nframework for eliciting an informative prior distribution for the between-study\nstandard deviation in a Bayesian random effects meta-analysis model to\ngenuinely represent heterogeneity when data are sparse. Methods: We developed\nan elicitation method using external information such as empirical evidence and\nexperts' beliefs on the 'range' of treatment effects in order to infer the\nprior distribution for the between-study standard deviation. We also developed\nthe method to be implemented in R. Results: The three-stage elicitation\napproach allows uncertainty to be represented by a genuine prior distribution\nto avoid making misleading inferences. It is flexible to what judgments an\nexpert can provide, and is applicable to all types of outcome measure for which\na treatment effect can be constructed on an additive scale. Conclusions: The\nchoice between using a fixed effect or random effects meta-analysis model\ndepends on the inferences required and not on the number of available studies.\nOur elicitation framework captures external evidence about heterogeneity and\novercomes the often implausible assumption that studies are estimating the same\ntreatment effect, thereby improving the quality of inferences in decision\nmaking.\n", "title": "Incorporating genuine prior information about between-study heterogeneity in random effects pairwise and network meta-analyses" }
null
null
null
null
true
null
16132
null
Default
null
null
null
{ "abstract": " A central question in statistical learning is to design algorithms that not\nonly perform well on training data, but also generalize to new and unseen data.\nIn this paper, we tackle this question by formulating a distributionally robust\nstochastic optimization (DRSO) problem, which seeks a solution that minimizes\nthe worst-case expected loss over a family of distributions that are close to\nthe empirical distribution in Wasserstein distances. We establish a connection\nbetween such Wasserstein DRSO and regularization. More precisely, we identify a\nbroad class of loss functions, for which the Wasserstein DRSO is asymptotically\nequivalent to a regularization problem with a gradient-norm penalty. Such\nrelation provides new interpretations for problems involving regularization,\nincluding a great number of statistical learning problems and discrete choice\nmodels (e.g. multinomial logit). The connection suggests a principled way to\nregularize high-dimensional, non-convex problems. This is demonstrated through\nthe training of Wasserstein generative adversarial networks in deep learning.\n", "title": "Wasserstein Distributional Robustness and Regularization in Statistical Learning" }
null
null
null
null
true
null
16133
null
Default
null
null
null
{ "abstract": " In machine learning ensemble methods have demonstrated high accuracy for the\nvariety of problems in different areas. Two notable ensemble methods widely\nused in practice are gradient boosting and random forests. In this paper we\npresent InfiniteBoost - a novel algorithm, which combines important properties\nof these two approaches. The algorithm constructs the ensemble of trees for\nwhich two properties hold: trees of the ensemble incorporate the mistakes done\nby others; at the same time the ensemble could contain the infinite number of\ntrees without the over-fitting effect. The proposed algorithm is evaluated on\nthe regression, classification, and ranking tasks using large scale, publicly\navailable datasets.\n", "title": "InfiniteBoost: building infinite ensembles with gradient descent" }
null
null
null
null
true
null
16134
null
Default
null
null
null
{ "abstract": " We demonstrate the full functionality of a circuit that generates single\nmicrowave photons on demand, with a wave packet that can be modulated with a\nnear-arbitrary shape. We achieve such a high tunability by coupling a\nsuperconducting qubit near the end of a semi-infinite transmission line. A dc\nsuperconducting quantum interference device shunts the line to ground and is\nemployed to modify the spatial dependence of the electromagnetic mode structure\nin the transmission line. This control allows us to couple and decouple the\nqubit from the line, shaping its emission rate on fast time scales. Our\ndecoupling scheme is applicable to all types of superconducting qubits and\nother solid-state systems and can be generalized to multiple qubits as well as\nto resonators.\n", "title": "On-demand microwave generator of shaped single photons" }
null
null
null
null
true
null
16135
null
Default
null
null
null
{ "abstract": " In this paper, we derive a family of fast and stable algorithms for\nmultiplying and inverting $n \\times n$ Pascal matrices that run in $O(n log^2\nn)$ time and are closely related to De Casteljau's algorithm for Bézier curve\nevaluation. These algorithms use a recursive factorization of the triangular\nPascal matrices and improve upon the cripplingly unstable $O(n log n)$ fast\nFourier transform-based algorithms which involve a Toeplitz matrix\nfactorization. We conduct numerical experiments which establish the speed and\nstability of our algorithm, as well as the poor performance of the Toeplitz\nfactorization algorithm. As an example, we show how our formulation relates to\nBézier curve evaluation.\n", "title": "Fast and Stable Pascal Matrix Algorithms" }
null
null
null
null
true
null
16136
null
Default
null
null
null
{ "abstract": " This paper develops variational continual learning (VCL), a simple but\ngeneral framework for continual learning that fuses online variational\ninference (VI) and recent advances in Monte Carlo VI for neural networks. The\nframework can successfully train both deep discriminative models and deep\ngenerative models in complex continual learning settings where existing tasks\nevolve over time and entirely new tasks emerge. Experimental results show that\nVCL outperforms state-of-the-art continual learning methods on a variety of\ntasks, avoiding catastrophic forgetting in a fully automatic way.\n", "title": "Variational Continual Learning" }
null
null
null
null
true
null
16137
null
Default
null
null
null
{ "abstract": " In the multiple testing problem with independent tests, the classical linear\nstep-up procedure controls the false discovery rate (FDR) at level\n$\\pi_0\\alpha$, where $\\pi_0$ is the proportion of true null hypotheses and\n$\\alpha$ is the target FDR level. Adaptive procedures can improve power by\nincorporating estimates of $\\pi_0$, which typically rely on a tuning parameter.\nFixed adaptive procedures set their tuning parameters before seeing the data\nand can be shown to control the FDR in finite samples. We develop theoretical\nresults for dynamic adaptive procedures whose tuning parameters are determined\nby the data. We show that, if the tuning parameter is chosen according to a\nleft-to-right stopping time rule, the corresponding dynamic adaptive procedure\ncontrols the FDR in finite samples. Examples include the recently proposed\nright-boundary procedure and the widely used lowest-slope procedure, among\nothers. Simulation results show that the right-boundary procedure is more\npowerful than other dynamic adaptive procedures under independence and mild\ndependence conditions.\n", "title": "Dynamic adaptive procedures that control the false discovery rate" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
16138
null
Validated
null
null
null
{ "abstract": " This paper investigates, from information theoretic grounds, a learning\nproblem based on the principle that any regularity in a given dataset can be\nexploited to extract compact features from data, i.e., using fewer bits than\nneeded to fully describe the data itself, in order to build meaningful\nrepresentations of a relevant content (multiple labels). We begin by\nintroducing the noisy lossy source coding paradigm with the log-loss fidelity\ncriterion which provides the fundamental tradeoffs between the\n\\emph{cross-entropy loss} (average risk) and the information rate of the\nfeatures (model complexity). Our approach allows an information theoretic\nformulation of the \\emph{multi-task learning} (MTL) problem which is a\nsupervised learning framework in which the prediction models for several\nrelated tasks are learned jointly from common representations to achieve better\ngeneralization performance. Then, we present an iterative algorithm for\ncomputing the optimal tradeoffs and its global convergence is proven provided\nthat some conditions hold. An important property of this algorithm is that it\nprovides a natural safeguard against overfitting, because it minimizes the\naverage risk taking into account a penalization induced by the model\ncomplexity. Remarkably, empirical results illustrate that there exists an\noptimal information rate minimizing the \\emph{excess risk} which depends on the\nnature and the amount of available training data. An application to\nhierarchical text categorization is also investigated, extending previous\nworks.\n", "title": "Compression-Based Regularization with an Application to Multi-Task Learning" }
null
null
null
null
true
null
16139
null
Default
null
null
null
{ "abstract": " The invariant is one of central topics in science, technology and\nengineering. The differential invariant is essential in understanding or\ndescribing some important phenomena or procedures in mathematics, physics,\nchemistry, biology or computer science etc. The derivation of differential\ninvariants is usually difficult or complicated. This paper reports a discovery\nthat under the affine transform, differential invariants have similar\nstructures with moment invariants up to a scalar function of transform\nparameters. If moment invariants are known, relative differential invariants\ncan be obtained by the substitution of moments by derivatives with the same\norder. Whereas moment invariants can be calculated by multiple integrals, this\nmethod provides a simple way to derive differential invariants without the need\nto resolve any equation system. Since the definition of moments on different\nmanifolds or in different dimension of spaces is well established, differential\ninvariants on or in them will also be well defined. Considering that moments\nhave a strong background in mathematics and physics, this technique offers a\nnew view angle to the inner structure of invariants. Projective differential\ninvariants can also be found in this way with a screening process.\n", "title": "Isomorphism between Differential and Moment Invariants under Affine Transform" }
null
null
null
null
true
null
16140
null
Default
null
null
null
{ "abstract": " Windowed orthogonal frequency-division multiplexing (OFDM) and wavelet OFDM\nhave been proposed as medium access techniques for broadband communications\nover the power line network by the standard IEEE 1901. Windowed OFDM has been\nextensively researched and employed in different fields of communication, while\nwavelet OFDM, which has been recently recommended for the first time in a\nstandard, has received less attention. This work is aimed to show that wavelet\nOFDM, which basically is an Extended Lapped Transform-based multicarrier\nmodulation (ELT-MCM), is a viable and attractive alternative for data\ntransmission in hostile scenarios, such as in-home PLC. To this end, we obtain\ntheoretical expressions for ELT-MCM of: 1) the useful signal power, 2) the\ninter-symbol interference (ISI) power, 3) the inter-carrier interference (ICI)\npower, and 4) the noise power at the receiver side. The system capacity and the\nachievable throughput are derived from these. This study includes several\ncomputer simulations that show that ELT-MCM is an efficient alternative to\nimprove data rates in PLC networks.\n", "title": "Throughput Analysis for Wavelet OFDM in Broadband Power Line Communications" }
null
null
null
null
true
null
16141
null
Default
null
null
null
{ "abstract": " Neural networks have recently had a lot of success for many tasks. However,\nneural network architectures that perform well are still typically designed\nmanually by experts in a cumbersome trial-and-error process. We propose a new\nmethod to automatically search for well-performing CNN architectures based on a\nsimple hill climbing procedure whose operators apply network morphisms,\nfollowed by short optimization runs by cosine annealing. Surprisingly, this\nsimple method yields competitive results, despite only requiring resources in\nthe same order of magnitude as training a single network. E.g., on CIFAR-10,\nour method designs and trains networks with an error rate below 6% in only 12\nhours on a single GPU; training for one day reduces this error further, to\nalmost 5%.\n", "title": "Simple And Efficient Architecture Search for Convolutional Neural Networks" }
null
null
null
null
true
null
16142
null
Default
null
null
null
{ "abstract": " We investigate that no-knowledge measurement-based feedback control is\nutilized to obtain the estimation precision of the detection efficiency. For\nthe feedback operators that concern us, no-knowledge measurement is the optimal\nway to estimate the detection efficiency. We show that the higher precision can\nbe achieved for the lower or larger detection efficiency. It is found that\nno-knowledge feedback can be used to cancel decoherence. No-knowledge feedback\nwith a high detection efficiency can perform well in estimating frequency and\ndetection efficiency parameters simultaneously. And simultaneous estimation is\nbetter than independent estimation given by the same probes.\n", "title": "Quantum estimation of detection efficiency with no-knowledge quantum feedback" }
null
null
null
null
true
null
16143
null
Default
null
null
null
{ "abstract": " Earthquakes at seismogenic plate boundaries are a response to the\ndifferential motions of tectonic blocks embedded within a geometrically complex\nnetwork of branching and coalescing faults. Elastic strain is accumulated at a\nslow strain rate of the order of $10^{-15}$ s$^{-1}$, and released\nintermittently at intervals $>100$ years, in the form of rapid (seconds to\nminutes) coseismic ruptures. The development of macroscopic models of\nquasi-static planar tectonic dynamics at these plate boundaries has remained\nchallenging due to uncertainty with regard to the spatial and kinematic\ncomplexity of fault system behaviors. In particular, the characteristic length\nscale of kinematically distinct tectonic structures is poorly constrained. Here\nwe analyze fluctuations in GPS recordings of interseismic velocities from the\nsouthern California plate boundary, identifying heavy-tailed scaling behavior.\nThis suggests that the plate boundary can be understood as a densely packed\ngranular medium near the jamming transition, with a characteristic length scale\nof $91 \\pm 20$ km. In this picture fault and block systems may rapidly\nrearrange the distribution of forces within them, driving a mixture of\ntransient and intermittent fault slip behaviors over tectonic time scales.\n", "title": "Intermittent Granular Dynamics at a Seismogenic Plate Boundary" }
null
null
[ "Physics" ]
null
true
null
16144
null
Validated
null
null
null
{ "abstract": " We study a continuous-time asset-allocation problem for a firm in the\ninsurance industry that backs up the liabilities raised by the insurance\ncontracts with the underwriting profits and the income resulting from investing\nin the financial market. Using the martingale approach and convex duality\ntechniques we characterize strategies that maximize expected utility from\nconsumption and final wealth under CRRA preferences. We present numerical\nresults for some distributions of claims/liabilities with policy limits.\n", "title": "Optimal continuous-time ALM for insurers: a martingale approach" }
null
null
null
null
true
null
16145
null
Default
null
null
null
{ "abstract": " The proliferation of fake news on social media has opened up new directions\nof research for timely identification and containment of fake news, and\nmitigation of its widespread impact on public opinion. While much of the\nearlier research was focused on identification of fake news based on its\ncontents or by exploiting users' engagements with the news on social media,\nthere has been a rising interest in proactive intervention strategies to\ncounter the spread of misinformation and its impact on society. In this survey,\nwe describe the modern-day problem of fake news and, in particular, highlight\nthe technical challenges associated with it. We discuss existing methods and\ntechniques applicable to both identification and mitigation, with a focus on\nthe significant advances in each method and their advantages and limitations.\nIn addition, research has often been limited by the quality of existing\ndatasets and their specific application contexts. To alleviate this problem, we\ncomprehensively compile and summarize characteristic features of available\ndatasets. Furthermore, we outline new directions of research to facilitate\nfuture development of effective and interdisciplinary solutions.\n", "title": "Combating Fake News: A Survey on Identification and Mitigation Techniques" }
null
null
null
null
true
null
16146
null
Default
null
null
null
{ "abstract": " There has recently been a growing interest in the development of statistical\nmethods to compare medical costs between treatment groups. When cumulative cost\nis the outcome of interest, right-censoring poses the challenge of informative\nmissingness due to heterogeneity in the rates of cost accumulation across\nsubjects. Existing approaches seeking to address the challenge of informative\ncost trajectories typically rely on inverse probability weighting and target a\nnet \"intent-to-treat\" effect. However, no approaches capable of handling\ntime-dependent treatment and confounding in this setting have been developed to\ndate. A method to estimate the joint causal effect of a treatment regime on\ncost would be of value to inform public policy when comparing interventions. In\nthis paper, we develop a nested g-computation approach to cost analysis in\norder to accommodate time-dependent treatment and repeated outcome measures. We\ndemonstrate that our procedure is reasonably robust to departures from its\ndistributional assumptions and can provide unique insights into fundamental\ndifferences in average cost across time-dependent treatment regimes.\n", "title": "A causal approach to analysis of censored medical costs in the presence of time-varying treatment" }
null
null
null
null
true
null
16147
null
Default
null
null
null
{ "abstract": " We seek to infer the parameters of an ergodic Markov process from samples\ntaken independently from the steady state. Our focus is on non-equilibrium\nprocesses, where the steady state is not described by the Boltzmann measure,\nbut is generally unknown and hard to compute, which prevents the application of\nestablished equilibrium inference methods. We propose a quantity we call\npropagator likelihood, which takes on the role of the likelihood in equilibrium\nprocesses. This propagator likelihood is based on fictitious transitions\nbetween those configurations of the system which occur in the samples. The\npropagator likelihood can be derived by minimising the relative entropy between\nthe empirical distribution and a distribution generated by propagating the\nempirical distribution forward in time. Maximising the propagator likelihood\nleads to an efficient reconstruction of the parameters of the underlying model\nin different systems, both with discrete configurations and with continuous\nconfigurations. We apply the method to non-equilibrium models from statistical\nphysics and theoretical biology, including the asymmetric simple exclusion\nprocess (ASEP), the kinetic Ising model, and replicator dynamics.\n", "title": "Inferring the parameters of a Markov process from snapshots of the steady state" }
null
null
null
null
true
null
16148
null
Default
null
null
null
{ "abstract": " TextCNN, the convolutional neural network for text, is a useful deep learning\nalgorithm for sentence classification tasks such as sentiment analysis and\nquestion classification. However, neural networks have long been known as black\nboxes because interpreting them is a challenging task. Researchers have\ndeveloped several tools to understand a CNN for image classification by deep\nvisualization, but research about deep TextCNNs is still insufficient. In this\npaper, we are trying to understand what a TextCNN learns on two classical NLP\ndatasets. Our work focuses on functions of different convolutional kernels and\ncorrelations between convolutional kernels.\n", "title": "What Does a TextCNN Learn?" }
null
null
[ "Statistics" ]
null
true
null
16149
null
Validated
null
null
null
{ "abstract": " As political polarization in the United States continues to rise, the\nquestion of whether polarized individuals can fruitfully cooperate becomes\npressing. Although diversity of individual perspectives typically leads to\nsuperior team performance on complex tasks, strong political perspectives have\nbeen associated with conflict, misinformation and a reluctance to engage with\npeople and perspectives beyond one's echo chamber. It is unclear whether\nself-selected teams of politically diverse individuals will create higher or\nlower quality outcomes. In this paper, we explore the effect of team political\ncomposition on performance through analysis of millions of edits to Wikipedia's\nPolitical, Social Issues, and Science articles. We measure editors' political\nalignments by their contributions to conservative versus liberal articles. A\nsurvey of editors validates that those who primarily edit liberal articles\nidentify more strongly with the Democratic party and those who edit\nconservative ones with the Republican party. Our analysis then reveals that\npolarized teams---those consisting of a balanced set of politically diverse\neditors---create articles of higher quality than politically homogeneous teams.\nThe effect appears most strongly in Wikipedia's Political articles, but is also\nobserved in Social Issues and even Science articles. Analysis of article \"talk\npages\" reveals that politically polarized teams engage in longer, more\nconstructive, competitive, and substantively focused but linguistically diverse\ndebates than political moderates. More intense use of Wikipedia policies by\npolitically diverse teams suggests institutional design principles to help\nunleash the power of politically polarized teams.\n", "title": "The Wisdom of Polarized Crowds" }
null
null
null
null
true
null
16150
null
Default
null
null
null
{ "abstract": " We present a probabilistic approach to generate a small, query-able summary\nof a dataset for interactive data exploration. Departing from traditional\nsummarization techniques, we use the Principle of Maximum Entropy to generate a\nprobabilistic representation of the data that can be used to give approximate\nquery answers. We develop the theoretical framework and formulation of our\nprobabilistic representation and show how to use it to answer queries. We then\npresent solving techniques and give three critical optimizations to improve\npreprocessing time and query accuracy. Lastly, we experimentally evaluate our\nwork using a 5 GB dataset of flights within the United States and a 210 GB\ndataset from an astronomy particle simulation. While our current work only\nsupports linear queries, we show that our technique can successfully answer\nqueries faster than sampling while introducing, on average, no more error than\nsampling and can better distinguish between rare and nonexistent values.\n", "title": "Probabilistic Database Summarization for Interactive Data Exploration" }
null
null
null
null
true
null
16151
null
Default
null
null
null
{ "abstract": " Recent advances in generative adversarial networks (GANs) have shown\npromising potentials in conditional image generation. However, how to generate\nhigh-resolution images remains an open problem. In this paper, we aim at\ngenerating high-resolution well-blended images given composited copy-and-paste\nones, i.e. realistic high-resolution image blending. To achieve this goal, we\npropose Gaussian-Poisson GAN (GP-GAN), a framework that combines the strengths\nof classical gradient-based approaches and GANs, which is the first work that\nexplores the capability of GANs in high-resolution image blending task to the\nbest of our knowledge. Particularly, we propose Gaussian-Poisson Equation to\nformulate the high-resolution image blending problem, which is a joint\noptimisation constrained by the gradient and colour information. Gradient\nfilters can obtain gradient information. For generating the colour information,\nwe propose Blending GAN to learn the mapping between the composited image and\nthe well-blended one. Compared to the alternative methods, our approach can\ndeliver high-resolution, realistic images with fewer bleedings and unpleasant\nartefacts. Experiments confirm that our approach achieves the state-of-the-art\nperformance on Transient Attributes dataset. A user study on Amazon Mechanical\nTurk finds that majority of workers are in favour of the proposed approach.\n", "title": "GP-GAN: Towards Realistic High-Resolution Image Blending" }
null
null
null
null
true
null
16152
null
Default
null
null
null
{ "abstract": " Feature extraction becomes increasingly important as data grows high\ndimensional. Autoencoder as a neural network based feature extraction method\nachieves great success in generating abstract features of high dimensional\ndata. However, it fails to consider the relationships of data samples which may\naffect experimental results of using original and new features. In this paper,\nwe propose a Relation Autoencoder model considering both data features and\ntheir relationships. We also extend it to work with other major autoencoder\nmodels including Sparse Autoencoder, Denoising Autoencoder and Variational\nAutoencoder. The proposed relational autoencoder models are evaluated on a set\nof benchmark datasets and the experimental results show that considering data\nrelationships can generate more robust features which achieve lower\nconstruction loss and then lower error rate in further classification compared\nto the other variants of autoencoders.\n", "title": "Relational Autoencoder for Feature Extraction" }
null
null
null
null
true
null
16153
null
Default
null
null
null
{ "abstract": " Sentiment classification and sarcasm detection are both important NLP tasks.\nWe show that these two tasks are correlated, and present a multi-task\nlearning-based framework using deep neural network that models this correlation\nto improve the performance of both tasks in a multi-task learning setting.\n", "title": "Sentiment and Sarcasm Classification with Multitask Learning" }
null
null
null
null
true
null
16154
null
Default
null
null
null
{ "abstract": " This review paper discusses how context has been used in neural machine\ntranslation (NMT) in the past two years (2017-2018). Starting with a brief\nretrospect on the rapid evolution of NMT models, the paper then reviews studies\nthat evaluate NMT output from various perspectives, with emphasis on those\nanalyzing limitations of the translation of contextual phenomena. In a\nsubsequent version, the paper will then present the main methods that were\nproposed to leverage context for improving translation quality, and\ndistinguishes methods that aim to improve the translation of specific phenomena\nfrom those that consider a wider unstructured context.\n", "title": "Context in Neural Machine Translation: A Review of Models and Evaluations" }
null
null
null
null
true
null
16155
null
Default
null
null
null
{ "abstract": " A pair of type-II Dirac cones in PdTe$_2$ was recently predicted by theories\nand confirmed in experiments, making PdTe$_2$ the first material that processes\nboth superconductivity and type-II Dirac fermions. In this work, we study the\nevolution of Dirac cones in PdTe$_2$ under hydrostatic pressure by the\nfirst-principles calculations. Our results show that the pair of type-II Dirac\npoints disappears at 6.1 GPa. Interestingly, a new pair of type-I Dirac points\nfrom the same two bands emerges at 4.7 GPa. Due to the distinctive band\nstructures compared with those of PtSe$_2$ and PtTe$_2$, the two types of Dirac\npoints can coexist in PdTe$_2$ under proper pressure (4.7-6.1 GPa). The\nemergence of type-I Dirac cones and the disappearance of type-II Dirac ones are\nattributed to the increase/decrease of the energy of the states at $\\Gamma$ and\n$A$ point, which have the anti-bonding/bonding characters of interlayer Te-Te\natoms. On the other hand, we find that the superconductivity of PdTe$_2$\nslightly decreases with pressure. The pressure-induced different types of Dirac\ncones combined with superconductivity may open a promising way to investigate\nthe complex interactions between Dirac fermions and superconducting\nquasi-particles.\n", "title": "Manipulation of type-I and type-II Dirac points in PdTe2 superconductor by external pressure" }
null
null
null
null
true
null
16156
null
Default
null
null
null
{ "abstract": " We propose a novel Metropolis-Hastings algorithm to sample uniformly from the\nspace of correlation matrices. Existing methods in the literature are based on\nelaborated representations of a correlation matrix, or on complex\nparametrizations of it. By contrast, our method is intuitive and simple, based\nthe classical Cholesky factorization of a positive definite matrix and Markov\nchain Monte Carlo theory. We perform a detailed convergence analysis of the\nresulting Markov chain, and show how it benefits from fast convergence, both\ntheoretically and empirically. Furthermore, in numerical experiments our\nalgorithm is shown to be significantly faster than the current alternative\napproaches, thanks to its simple yet principled approach.\n", "title": "A fast Metropolis-Hastings method for generating random correlation matrices" }
null
null
null
null
true
null
16157
null
Default
null
null
null
{ "abstract": " We present a weak lensing analysis of a sample of SDSS Compact Groups (CGs).\nUsing the measured radial density contrast profile, we derive the average\nmasses under the assumption of spherical symmetry, obtaining a velocity\ndispersion for the Singular Isothermal Spherical model, $\\sigma_V = 270 \\pm 40\n\\rm ~km~s^{-1}$, and for the NFW model, $R_{200}=0.53\\pm0.10\\,h_{70}^{-1}\\,\\rm\nMpc$. We test three different definitions of CGs centres to identify which best\ntraces the true dark matter halo centre, concluding that a luminosity weighted\ncentre is the most suitable choice. We also study the lensing signal dependence\non CGs physical radius, group surface brightness, and morphological mixing. We\nfind that groups with more concentrated galaxy members show steeper mass\nprofiles and larger velocity dispersions. We argue that both, a possible lower\nfraction of interloper and a true steeper profile, could be playing a role in\nthis effect. Straightforward velocity dispersion estimates from member\nspectroscopy yields $\\sigma_V \\approx 230 \\rm ~km~s^{-1}$ in agreement with our\nlensing results.\n", "title": "Compact Groups analysis using weak gravitational lensing" }
null
null
null
null
true
null
16158
null
Default
null
null
null
{ "abstract": " The complexity of knowledge production on complex systems is well-known, but\nthere still lacks knowledge framework that would both account for a certain\nstructure of knowledge production at an epistemological level and be directly\napplicable to the study and management of complex systems. We set a basis for\nsuch a framework, by first analyzing in detail a case study of the construction\nof a geographical theory of complex territorial systems, through mixed methods,\nnamely qualitative interview analysis and quantitative citation network\nanalysis. We can therethrough inductively build a framework that considers\nknowledge entreprises as perspectives, with co-evolving components within\ncomplementary knowledge domains. We finally discuss potential applications and\ndevelopments.\n", "title": "An Applied Knowledge Framework to Study Complex Systems" }
null
null
null
null
true
null
16159
null
Default
null
null
null
{ "abstract": " We classify certain subcategories in quotients of exact categories. In\nparticular, we classify the triangulated and thick subcategories of an\nalgebraic triangulated category, i.e. the stable category of a Frobenius\ncategory.\n", "title": "Classifying subcategories in quotients of exact categories" }
null
null
[ "Mathematics" ]
null
true
null
16160
null
Validated
null
null
null
{ "abstract": " Here we present a new approach to search for first order invariants (first\nintegrals) of rational second order ordinary differential equations. This\nmethod is an alternative to the Darbouxian and symmetry approaches. Our\nprocedure can succeed in many cases where these two approaches fail. We also\npresent here a Maple implementation of the theoretical results and methods,\nhereby introduced, in a computational package -- {\\it InSyDE}. The package is\ndesigned, apart from materializing the algorithms presented, to provide a set\nof tools to allow the user to analyse the intermediary steps of the process.\n", "title": "Dealing with Rational Second Order Ordinary Differential Equations where both Darboux and Lie Find It Difficult: The $S$-function Method" }
null
null
null
null
true
null
16161
null
Default
null
null
null
{ "abstract": " Statistical analyses of urban environments have been recently improved\nthrough publicly available high resolution data and mapping technologies that\nhave been adopted across industries. These technologies allow us to create\nmetrics to empirically investigate urban design principles of the past\nhalf-century. Philadelphia is an interesting case study for this work, with its\nrapid urban development and population increase in the last decade. We outline\na data analysis pipeline for exploring the association between safety and local\nneighborhood features such as population, economic health and the built\nenvironment. As a particular example of our analysis pipeline, we focus on\nquantitative measures of the built environment that serve as proxies for\nvibrancy: the amount of human activity in a local area. Historically, vibrancy\nhas been very challenging to measure empirically. Measures based on land use\nzoning are not an adequate description of local vibrancy and so we construct a\ndatabase and set of measures of business activity in each neighborhood. We\nemploy several matching analyses to explore the relationship between\nneighborhood vibrancy and safety, such as comparing high crime versus low crime\nlocations within the same neighborhood. As additional sources of urban data\nbecome available, our analysis pipeline can serve as the template for further\ninvestigations into the relationships between safety, economic factors and the\nbuilt environment at the local neighborhood level.\n", "title": "Urban Vibrancy and Safety in Philadelphia" }
null
null
null
null
true
null
16162
null
Default
null
null
null
{ "abstract": " In this paper a technique is suggested to integrate linear initial boundary\nvalue problems with exponential quadrature rules in such a way that the order\nin time is as high as possible. A thorough error analysis is given for both the\nclassical approach of integrating the problem firstly in space and then in time\nand of doing it in the reverse order in a suitable manner. Time-dependent\nboundary conditions are considered with both approaches and full discretization\nformulas are given to implement the methods once the quadrature nodes have been\nchosen for the time integration and a particular (although very general) scheme\nis selected for the space discretization. Numerical experiments are shown which\ncorroborate that, for example, with the suggested technique, order $2s$ is\nobtained when choosing the $s$ nodes of Gaussian quadrature rule.\n", "title": "Exponential quadrature rules without order reduction" }
null
null
null
null
true
null
16163
null
Default
null
null
null
{ "abstract": " In this manuscript, we generalize F-calculus to apply it on fractal Tartan\nspaces. The generalized standard F-calculus is used to obtain the integral and\nderivative of the functions on the fractal Tartan with different dimensions.\nThe generalized fractional derivatives have local properties that make it more\nuseful in modelling physical problems. The illustrative examples are used to\npresent the details.\n", "title": "New Derivatives for the Functions with the Fractal Tartan Support" }
null
null
[ "Mathematics" ]
null
true
null
16164
null
Validated
null
null
null
{ "abstract": " A longstanding goal of behavior-based robotics is to solve high-level\nnavigation tasks using end to end navigation behaviors that directly map\nsensors to actions. Navigation behaviors, such as reaching a goal or following\na path without collisions, can be learned from exploration and interaction with\nthe environment, but are constrained by the type and quality of a robot's\nsensors, dynamics, and actuators. Traditional motion planning handles varied\nrobot geometry and dynamics, but typically assumes high-quality observations.\nModern vision-based navigation typically considers imperfect or partial\nobservations, but simplifies the robot action space. With both approaches, the\ntransition from simulation to reality can be difficult. Here, we learn two end\nto end navigation behaviors that avoid moving obstacles: point to point and\npath following. These policies receive noisy lidar observations and output\nrobot linear and angular velocities. We train these policies in small, static\nenvironments with Shaped-DDPG, an adaptation of the Deep Deterministic Policy\nGradient (DDPG) reinforcement learning method which optimizes reward and\nnetwork architecture. Over 500 meters of on-robot experiments show , these\npolicies generalize to new environments and moving obstacles, are robust to\nsensor, actuator, and localization noise, and can serve as robust building\nblocks for larger navigation tasks. The path following and point and point\npolicies are 83% and 56% more successful than the baseline, respectively.\n", "title": "Learning Navigation Behaviors End to End" }
null
null
null
null
true
null
16165
null
Default
null
null
null
{ "abstract": " This paper introduces a new member of the family of Variational Autoencoders\n(VAE) that constrains the rate of information transferred by the latent layer.\nThe latent layer is interpreted as a communication channel, the information\nrate of which is bound by imposing a pre-set signal-to-noise ratio. The new\nconstraint subsumes the mutual information between the input and latent\nvariables, combining naturally with the likelihood objective of the observed\ndata as used in a conventional VAE. The resulting Bounded-Information-Rate\nVariational Autoencoder (BIR-VAE) provides a meaningful latent representation\nwith an information resolution that can be specified directly in bits by the\nsystem designer. The rate constraint can be used to prevent overtraining, and\nthe method naturally facilitates quantisation of the latent variables at the\nset rate. Our experiments confirm that the BIR-VAE has a meaningful latent\nrepresentation and that its performance is at least as good as state-of-the-art\ncompeting algorithms, but with lower computational complexity.\n", "title": "Bounded Information Rate Variational Autoencoders" }
null
null
null
null
true
null
16166
null
Default
null
null
null
{ "abstract": " Results in Wasan geometry of tangents circles can still be considered in a\nsingular case by the division by 0.\n", "title": "Wasan geometry with the division by 0" }
null
null
null
null
true
null
16167
null
Default
null
null
null
{ "abstract": " The beta family owes its privileged status within unit interval distributions\nto several relevant features such as, for example, easyness of interpretation\nand versatility in modeling different types of data. However, its flexibility\nat the unit interval endpoints is poor enough to prevent from properly modeling\nthe portions of data having values next to zero and one. Such a drawback can be\novercome by resorting to the class of the non-central beta distributions.\nIndeed, the latter allows the density to take on arbitrary positive and finite\nlimits which have a really simple form. That said, new insights into such class\nare provided in this paper. In particular, new representations and moments\nexpressions are derived. Moreover, its potential with respect to alternative\nmodels is highlighted through applications to real data.\n", "title": "New insights into non-central beta distributions" }
null
null
null
null
true
null
16168
null
Default
null
null
null
{ "abstract": " We investigate the mean curvature flows in a class of warped product\nmanifolds with closed hypersurfaces fibering over $\\mathbb{R}$. In particular,\nwe prove that under natural conditions on the warping function and Ricci\ncurvature bound for the ambient space, there exists a large class of closed\ninitial hypersurfaces, as geodesic graphs over the totally geodesic\nhypersurface $\\Sigma$, such that the mean curvature flow starting from $S_0$\nexists for all time and converges to $\\Sigma$.\n", "title": "Mean Curvature Flows of Closed Hypersurfaces in Warped Product Manifolds" }
null
null
[ "Mathematics" ]
null
true
null
16169
null
Validated
null
null
null
{ "abstract": " Finite-difference methods are widely used in solving partial differential\nequations. In a large problem set, approximations can take days or weeks to\nevaluate, yet the bulk of computation may occur within a single loop nest. The\nmodelling process for researchers is not straightforward either, requiring\nmodels with differential equations to be translated into stencil kernels, then\noptimised separately. One tool that seeks to speed up and eliminate mistakes\nfrom this tedious procedure is Devito, used to efficiently employ\nfinite-difference methods.\nIn this work, we implement time-tiling, a loop nest optimisation, in Devito\nyielding a decrease in runtime of up to 45%, and at least 20% across stencils\nfrom the acoustic wave equation family, widely used in Devito's target domain\nof seismic imaging. We present an estimator for arithmetic intensity under\ntime-tiling and a model to predict runtime improvements in stencil\ncomputations. We also consider generalisation of time-tiling to imperfect loop\nnests, a less widely studied problem.\n", "title": "Optimising finite-difference methods for PDEs through parameterised time-tiling in Devito" }
null
null
null
null
true
null
16170
null
Default
null
null
null
{ "abstract": " In this work we propose an ontology to support automated negotiation in\nmultiagent systems. The ontology can be connected with some domain-specific\nontologies to facilitate the negotiation in different domains, such as\nIntelligent Transportation Systems (ITS), e-commerce, etc. The specific\nnegotiation rules for each type of negotiation strategy can also be defined as\npart of the ontology, reducing the amount of knowledge hardcoded in the agents\nand ensuring the interoperability. The expressiveness of the ontology was\nproved in a multiagent architecture for the automatic traffic light setting\napplication on ITS.\n", "title": "An Ontology to support automated negotiation" }
null
null
null
null
true
null
16171
null
Default
null
null
null
{ "abstract": " We present a technique for efficiently synthesizing images of atmospheric\nclouds using a combination of Monte Carlo integration and neural networks. The\nintricacies of Lorenz-Mie scattering and the high albedo of cloud-forming\naerosols make rendering of clouds---e.g. the characteristic silverlining and\nthe \"whiteness\" of the inner body---challenging for methods based solely on\nMonte Carlo integration or diffusion theory. We approach the problem\ndifferently. Instead of simulating all light transport during rendering, we\npre-learn the spatial and directional distribution of radiant flux from tens of\ncloud exemplars. To render a new scene, we sample visible points of the cloud\nand, for each, extract a hierarchical 3D descriptor of the cloud geometry with\nrespect to the shading location and the light source. The descriptor is input\nto a deep neural network that predicts the radiance function for each shading\nconfiguration. We make the key observation that progressively feeding the\nhierarchical descriptor into the network enhances the network's ability to\nlearn faster and predict with high accuracy while using few coefficients. We\nalso employ a block design with residual connections to further improve\nperformance. A GPU implementation of our method synthesizes images of clouds\nthat are nearly indistinguishable from the reference solution within seconds\ninteractively. Our method thus represents a viable solution for applications\nsuch as cloud design and, thanks to its temporal stability, also for\nhigh-quality production of animated content.\n", "title": "Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks" }
null
null
null
null
true
null
16172
null
Default
null
null
null
{ "abstract": " We study the size and the complexity of computing finite state automata (FSA)\nrepresenting and approximating the downward and the upward closure of Petri net\nlanguages with coverability as the acceptance condition. We show how to\nconstruct an FSA recognizing the upward closure of a Petri net language in\ndoubly-exponential time, and therefore the size is at most doubly exponential.\nFor downward closures, we prove that the size of the minimal automata can be\nnon-primitive recursive. In the case of BPP nets, a well-known subclass of\nPetri nets, we show that an FSA accepting the downward/upward closure can be\nconstructed in exponential time. Furthermore, we consider the problem of\nchecking whether a simple regular language is included in the downward/upward\nclosure of a Petri net/BPP net language. We show that this problem is\nEXPSPACE-complete (resp. NP-complete) in the case of Petri nets (resp. BPP\nnets). Finally, we show that it is decidable whether a Petri net language is\nupward/downward closed. To this end, we prove that one can decide whether a\ngiven regular language is a subset of a Petri net coverability language.\n", "title": "On the Upward/Downward Closures of Petri Nets" }
null
null
null
null
true
null
16173
null
Default
null
null
null
{ "abstract": " Artificial Intelligence federates numerous scientific fields in the aim of\ndeveloping machines able to assist human operators performing complex\ntreatments -- most of which demand high cognitive skills (e.g. learning or\ndecision processes). Central to this quest is to give machines the ability to\nestimate the likeness or similarity between things in the way human beings\nestimate the similarity between stimuli.\nIn this context, this book focuses on semantic measures: approaches designed\nfor comparing semantic entities such as units of language, e.g. words,\nsentences, or concepts and instances defined into knowledge bases. The aim of\nthese measures is to assess the similarity or relatedness of such semantic\nentities by taking into account their semantics, i.e. their meaning --\nintuitively, the words tea and coffee, which both refer to stimulating\nbeverage, will be estimated to be more semantically similar than the words\ntoffee (confection) and coffee, despite that the last pair has a higher\nsyntactic similarity. The two state-of-the-art approaches for estimating and\nquantifying semantic similarities/relatedness of semantic entities are\npresented in detail: the first one relies on corpora analysis and is based on\nNatural Language Processing techniques and semantic models while the second is\nbased on more or less formal, computer-readable and workable forms of knowledge\nsuch as semantic networks, thesaurus or ontologies. (...) Beyond a simple\ninventory and categorization of existing measures, the aim of this monograph is\nto convey novices as well as researchers of these domains towards a better\nunderstanding of semantic similarity estimation and more generally semantic\nmeasures.\n", "title": "Semantic Similarity from Natural Language and Ontology Analysis" }
null
null
null
null
true
null
16174
null
Default
null
null
null
{ "abstract": " Elementary net systems (ENS) are the most fundamental class of Petri nets.\nTheir synthesis problem has important applications in the design of digital\nhardware and commercial processes. Given a labeled transition system (TS) $A$,\nfeasibility is the NP-complete decision problem whether $A$ can be equivalently\nsynthesized into an ENS. It is well known that $A$ is feasible if and only if\nit has the event state separation property (ESSP) and the state separation\nproperty (SSP). Recently, these properties have also been studied individually\nfor their practical implications. A fast ESSP algorithm, for instance, would\nallow applications to at least validate the language equivalence of $A$ and a\nsynthesized ENS. Being able to efficiently decide SSP, on the other hand, could\nserve as a quick-fail preprocessing mechanism for synthesis. Although a few\ntractable subclasses have been found, this paper destroys much of the hope that\nmany practically meaningful input restrictions make feasibility or at least one\nof ESSP and SSP efficient. We show that all three problems remain NP-complete\neven if the input is restricted to linear TSs where every event occurs at most\nthree times or if the input is restricted to TSs where each event occurs at\nmost twice and each state has at most two successor and two predecessor states.\n", "title": "The Hardness of Synthesizing Elementary Net Systems from Highly Restricted Inputs" }
null
null
[ "Computer Science" ]
null
true
null
16175
null
Validated
null
null
null
{ "abstract": " The three exceptional lattices, $E_6$, $E_7$, and $E_8$, have attracted much\nattention due to their anomalously dense and symmetric structures which are of\ncritical importance in modern theoretical physics. Here, we study the\nelectronic band structure of a single spinless quantum particle hopping between\ntheir nearest-neighbor lattice points in the tight-binding limit. Using Markov\nchain Monte Carlo methods, we numerically sample their lattice Green's\nfunctions, densities of states, and random walk return probabilities. We find\nand tabulate a plethora of Van Hove singularities in the densities of states,\nincluding degenerate ones in $E_6$ and $E_7$. Finally, we use brute force\nenumeration to count the number of distinct closed walks of length up to eight,\nwhich gives the first eight moments of the densities of states.\n", "title": "Exceptional Lattice Green's Functions" }
null
null
[ "Physics" ]
null
true
null
16176
null
Validated
null
null
null
{ "abstract": " Analysis of conjugate natural convection with surface radiation in a\ntwo-dimensional enclosure is carried out in order to search the optimal\nlocation of the heat source with entropy generation minimization (EGM) approach\nand conventional heat transfer parameters. The air as an incompressible fluid\nand transparent media is considered the fluid filling the enclosure with the\nsteady and laminar regime. The enclosure internal surfaces are also gray,\nopaque and diffuse. The governing equations with stream function and vorticity\nformulation are solved using finite difference approach. Results include the\neffect of Rayleigh number and emissivity on the dimensionless average rate of\nentropy generation and its optimum location. The optimum location search with\nconventional heat transfer parameters including maximum temperature and Nusselt\nnumbers are also examined.\n", "title": "Optimal design with EGM approach in conjugate natural convection with surface radiation in a two-dimensional enclosure" }
null
null
null
null
true
null
16177
null
Default
null
null
null
{ "abstract": " Anosov representations of word hyperbolic groups into higher-rank semisimple\nLie groups are representations with finite kernel and discrete image that have\nstrong analogies with convex cocompact representations into rank-one Lie\ngroups. However, the most naive analogy fails: generically, Anosov\nrepresentations do not act properly and cocompactly on a convex set in the\nassociated Riemannian symmetric space. We study representations into projective\nindefinite orthogonal groups PO(p,q) by considering their action on the\nassociated pseudo-Riemannian hyperbolic space H^{p,q-1} in place of the\nRiemannian symmetric space. Following work of Barbot and Mérigot in anti-de\nSitter geometry, we find an intimate connection between Anosov representations\nand the natural notion of convex cocompactness in this setting.\n", "title": "Convex cocompactness in pseudo-Riemannian hyperbolic spaces" }
null
null
[ "Mathematics" ]
null
true
null
16178
null
Validated
null
null
null
{ "abstract": " We examine the role of memorization in deep learning, drawing connections to\ncapacity, generalization, and adversarial robustness. While deep networks are\ncapable of memorizing noise data, our results suggest that they tend to\nprioritize learning simple patterns first. In our experiments, we expose\nqualitative differences in gradient-based optimization of deep neural networks\n(DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned\nexplicit regularization (e.g., dropout) we can degrade DNN training performance\non noise datasets without compromising generalization on real data. Our\nanalysis suggests that the notions of effective capacity which are dataset\nindependent are unlikely to explain the generalization performance of deep\nnetworks when trained with gradient based methods because training data itself\nplays an important role in determining the degree of memorization.\n", "title": "A Closer Look at Memorization in Deep Networks" }
null
null
null
null
true
null
16179
null
Default
null
null
null
{ "abstract": " Recent advances in 3D fully convolutional networks (FCN) have made it\nfeasible to produce dense voxel-wise predictions of full volumetric images. In\nthis work, we show that a multi-class 3D FCN trained on manually labeled CT\nscans of seven abdominal structures (artery, vein, liver, spleen, stomach,\ngallbladder, and pancreas) can achieve competitive segmentation results, while\navoiding the need for handcrafting features or training organ-specific models.\nTo this end, we propose a two-stage, coarse-to-fine approach that trains an FCN\nmodel to roughly delineate the organs of interest in the first stage (seeing\n$\\sim$40% of the voxels within a simple, automatically generated binary mask of\nthe patient's body). We then use these predictions of the first-stage FCN to\ndefine a candidate region that will be used to train a second FCN. This step\nreduces the number of voxels the FCN has to classify to $\\sim$10% while\nmaintaining a recall high of $>$99%. This second-stage FCN can now focus on\nmore detailed segmentation of the organs. We respectively utilize training and\nvalidation sets consisting of 281 and 50 clinical CT images. Our hierarchical\napproach provides an improved Dice score of 7.5 percentage points per organ on\naverage in our validation set. We furthermore test our models on a completely\nunseen data collection acquired at a different hospital that includes 150 CT\nscans with three anatomical labels (liver, spleen, and pancreas). In such\nchallenging organs as the pancreas, our hierarchical approach improves the mean\nDice score from 68.5 to 82.2%, achieving the highest reported average score on\nthis dataset.\n", "title": "Hierarchical 3D fully convolutional networks for multi-organ segmentation" }
null
null
null
null
true
null
16180
null
Default
null
null
null
{ "abstract": " We study the phase diagram of a minority game where three classes of agents\nare present. Two types of agents play a risk-loving game that we model by the\nstandard Snowdrift Game. The behaviour of the third type of agents is coded by\n{\\em indifference} w.r.t. the game at all: their dynamics is designed to\naccount for risk-aversion as an innovative behavioral gambit. From this point\nof view, the choice of this solitary strategy is enhanced when innovation\nstarts, while is depressed when it becomes the majority option. This implies\nthat the payoff matrix of the game becomes dependent on the global awareness of\nthe agents measured by the relevance of the population of the indifferent\nplayers. The resulting dynamics is non-trivial with different kinds of phase\ntransition depending on a few model parameters. The phase diagram is studied on\nregular as well as complex networks.\n", "title": "An evolutionary game model for behavioral gambit of loyalists: Global awareness and risk-aversion" }
null
null
null
null
true
null
16181
null
Default
null
null
null
{ "abstract": " The complexity of Philip Wolfe's method for the minimum Euclidean-norm point\nproblem over a convex polytope has remained unknown since he proposed the\nmethod in 1974. The method is important because it is used as a subroutine for\none of the most practical algorithms for submodular function minimization. We\npresent the first example that Wolfe's method takes exponential time.\nAdditionally, we improve previous results to show that linear programming\nreduces in strongly-polynomial time to the minimum norm point problem over a\nsimplex.\n", "title": "The Minimum Euclidean-Norm Point on a Convex Polytope: Wolfe's Combinatorial Algorithm is Exponential" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
16182
null
Validated
null
null
null
{ "abstract": " In this work, by using strong gravitational lensing (SGL) observations along\nwith Type Ia Supernovae (Union2.1) and gamma ray burst data (GRBs), we propose\na new method to study a possible redshift evolution of $\\gamma(z)$, the mass\ndensity power-law index of strong gravitational lensing systems. In this\nanalysis, we assume the validity of cosmic distance duality relation and the\nflat universe. In order to explore the $\\gamma(z)$ behavior, three different\nparametrizations are considered, namely: (P1) $\\gamma(z_l)=\\gamma_0+\\gamma_1\nz_l$, (P2) $\\gamma(z_l)=\\gamma_0+\\gamma_1 z_l/(1+z_l)$ and (P3)\n$\\gamma(z_l)=\\gamma_0+\\gamma_1 \\ln(1+z_l)$, where $z_l$ corresponds to lens\nredshift. If $\\gamma_0=2$ and $\\gamma_1=0$ the singular isothermal sphere model\nis recovered. Our method is performed on SGL sub-samples defined by different\nlens redshifts and velocity dispersions. For the former case, the results are\nin full agreement with each other, while a 1$\\sigma$ tension between the\nsub-samples with low ($\\leq 250$ km/s) and high ($>250$ km/s) velocity\ndispersions was obtained on the ($\\gamma_0$-$\\gamma_1$) plane. By considering\nthe complete SGL sample, we obtain $\\gamma_0 \\approx 2$ and $ \\gamma_1 \\approx\n0$ within 1$\\sigma$ c.l. for all $\\gamma(z)$ parametrizations. However, we find\nthe following best fit values of $\\gamma_1$: $-0.085$, $-0.16$ and $-0.12$ for\nP1, P2 and P3 parametrizations, respectively, suggesting a mild evolution for\n$\\gamma(z)$. By repeating the analysis with Type Ia Supernovae from JLA\ncompilation, GRBs and SGL systems this mild evolution is reinforced.\n", "title": "Constraints on a possible evolution of mass density power-law index in strong gravitational lensing from cosmological data" }
null
null
null
null
true
null
16183
null
Default
null
null
null
{ "abstract": " Hydrogen peroxide (H2O2) is an important signaling molecule in cancer cells.\nHowever, the significant secretion of H2O2 by cancer cells have been rarely\nobserved. Cold atmospheric plasma (CAP) is a near room temperature ionized gas\ncomposed of neutral particles, charged particles, reactive species, and\nelectrons. Here, we first demonstrated that breast cancer cells and pancreatic\nadenocarcinoma cells generated micromolar level H2O2 during just 1 min of\ndirect CAP treatment on these cells. The cell-based H2O2 generation is affected\nby the medium volume, the cell confluence, as well as the discharge voltage.\nThe application of cold atmospheric plasma (CAP) in the cancer treatment has\nbeen intensively investigated over the past decade. Several cellular responses\nto the CAP treatment have been observed including the consumption of the\nCAP-originated reactive species, the rise of intracellular reactive oxygen\nspecies, the damage on DNA and mitochondria, as well as the activation of\napoptotic events. This is a new previously unknown cellular response to CAP,\nwhich provides a new prospective to understand the interaction between CAP and\ncells.\n", "title": "The Strong Cell-based Hydrogen Peroxide Generation Triggered by Cold Atmospheric Plasma" }
null
null
null
null
true
null
16184
null
Default
null
null
null
{ "abstract": " Many organisms repartition their proteome in a circadian fashion in response\nto the daily nutrient changes in their environment. A striking example is\nprovided by cyanobacteria, which perform photosynthesis during the day to fix\ncarbon. These organisms not only face the challenge of rewiring their proteome\nevery 12 hours, but also the necessity of storing the fixed carbon in the form\nof glycogen to fuel processes during the night. In this manuscript, we extend\nthe framework developed by Hwa and coworkers (Scott et al., Science 330, 1099\n(2010)) for quantifying the relatinship between growth and proteome composition\nto circadian metabolism. We then apply this framework to investigate the\ncircadian metabolism of the cyanobacterium Cyanothece, which not only fixes\ncarbon during the day, but also nitrogen during the night, storing it in the\npolymer cyanophycin. Our analysis reveals that the need to store carbon and\nnitrogen tends to generate an extreme growth strategy, in which the cells\npredominantly grow during the day, as observed experimentally. This strategy\nmaximizes the growth rate over 24 hours, and can be quantitatively understood\nby the bacterial growth laws. Our analysis also shows that the slow relaxation\nof the proteome, arising from the slow growth rate, puts a severe constraint on\nimplementing this optimal strategy. Yet, the capacity to estimate the time of\nthe day, enabled by the circadian clock, makes it possible to anticipate the\ndaily changes in the environment and mount a response ahead of time. This\nsignificantly enhances the growth rate by counteracting the detrimental effects\nof the slow proteome relaxation.\n", "title": "Theory of circadian metabolism" }
null
null
[ "Quantitative Biology" ]
null
true
null
16185
null
Validated
null
null
null
{ "abstract": " We present an investigation into the intrinsic magnetic properties of the\ncompounds YCo5 and GdCo5, members of the RETM5 class of permanent magnets (RE =\nrare earth, TM = transition metal). Focusing on Y and Gd provides direct\ninsight into both the TM magnetization and RE-TM interactions without the\ncomplication of strong crystal field effects. We synthesize single crystals of\nYCo5 and GdCo5 using the optical floating zone technique and measure the\nmagnetization from liquid helium temperatures up to 800 K. These measurements\nare interpreted through calculations based on a Green's function formulation of\ndensity-functional theory, treating the thermal disorder of the local magnetic\nmoments within the coherent potential approximation. The rise in the\nmagnetization of GdCo5 with temperature is shown to arise from a faster\ndisordering of the Gd magnetic moments compared to the antiferromagnetically\naligned Co sublattice. We use the calculations to analyze the different Curie\ntemperatures of the compounds and also compare the molecular (Weiss) fields at\nthe RE site with previously published neutron scattering experiments. To gain\nfurther insight into the RE-TM interactions, we perform substitutional doping\non the TM site, studying the compounds RECo4.5Ni0.5, RECo4Ni, and RECo4.5Fe0.5.\nBoth our calculations and experiments on powdered samples find an\nincreased/decreased magnetization with Fe/Ni doping, respectively. The\ncalculations further reveal a pronounced dependence on the location of the\ndopant atoms of both the Curie temperatures and the Weiss field at the RE site.\n", "title": "Rare-earth/transition-metal magnetic interactions in pristine and (Ni,Fe)-doped YCo5 and GdCo5" }
null
null
null
null
true
null
16186
null
Default
null
null
null
{ "abstract": " We present an explicit construction of the moduli spaces of rank 2 stable\nparabolic bundles of parabolic degree 0 over the Riemann sphere, corresponding\nto \"optimum\" open weight chambers of parabolic weights in the weight polytope.\nThe complexity of the different moduli space' weight chambers is understood in\nterms of the complexity of the actions of the corresponding groups of bundle\nautomorphisms on stable parabolic structures. For the given choices of\nparabolic weights, $\\mathscr{N}$ consists entirely of isomorphism classes of\nstrictly stable parabolic bundles whose underlying Birkhoff-Grothendieck\nsplitting coefficients are constant and minimal, is constructed as a quotient\nof a set of stable parabolic structures by a group of bundle automorphisms, and\nis a smooth, compact complex manifold biholomorphic to\n$\\left(\\mathbb{C}\\mathbb{P}^{1}\\right)^{n-3}$ for even degree, and\n$\\mathbb{C}\\mathbb{P}^{n-3}$ for odd degree. As an application of the\nconstruction of such explicit models, we provide an explicit characterization\nof the nilpotent cone locus on $T^{*}\\mathscr{N}$ for Hitchin's integrable\nsystem.\n", "title": "Optimum weight chamber examples of moduli spaces of stable parabolic bundles in genus 0" }
null
null
null
null
true
null
16187
null
Default
null
null
null
{ "abstract": " We report the development and validation of a data-driven real-time risk\nscore that provides timely assessments for the clinical acuity of ward patients\nbased on their temporal lab tests and vital signs, which allows for timely\nintensive care unit (ICU) admissions. Unlike the existing risk scoring\ntechnologies, the proposed score is individualized; it uses the electronic\nhealth record (EHR) data to cluster the patients based on their static\ncovariates into subcohorts of similar patients, and then learns a separate\ntemporal, non-stationary multi-task Gaussian Process (GP) model that captures\nthe physiology of every subcohort. Experiments conducted on data from a\nheterogeneous cohort of 6,094 patients admitted to the Ronald Reagan UCLA\nmedical center show that our risk score significantly outperforms the\nstate-of-the-art risk scoring technologies, such as the Rothman index and MEWS,\nin terms of timeliness, true positive rate (TPR), and positive predictive value\n(PPV). In particular, the proposed score increases the AUC with 20% and 38% as\ncompared to Rothman index and MEWS respectively, and can predict ICU admissions\n8 hours before clinicians at a PPV of 35% and a TPR of 50%. Moreover, we show\nthat the proposed risk score allows for better decisions on when to discharge\nclinically stable patients from the ward, thereby improving the efficiency of\nhospital resource utilization.\n", "title": "Individualized Risk Prognosis for Critical Care Patients: A Multi-task Gaussian Process Model" }
null
null
null
null
true
null
16188
null
Default
null
null
null
{ "abstract": " Global registration of multi-view robot data is a challenging task.\nAppearance-based global localization approaches often fail under drastic\nview-point changes, as representations have limited view-point invariance. This\nwork is based on the idea that human-made environments contain rich semantics\nwhich can be used to disambiguate global localization. Here, we present X-View,\na Multi-View Semantic Global Localization system. X-View leverages semantic\ngraph descriptor matching for global localization, enabling localization under\ndrastically different view-points. While the approach is general in terms of\nthe semantic input data, we present and evaluate an implementation on visual\ndata. We demonstrate the system in experiments on the publicly available\nSYNTHIA dataset, on a realistic urban dataset recorded with a simulator, and on\nreal-world StreetView data. Our findings show that X-View is able to globally\nlocalize aerial-to-ground, and ground-to-ground robot data of drastically\ndifferent view-points. Our approach achieves an accuracy of up to 85 % on\nglobal localizations in the multi-view case, while the benchmarked baseline\nappearance-based methods reach up to 75 %.\n", "title": "X-View: Graph-Based Semantic Multi-View Localization" }
null
null
null
null
true
null
16189
null
Default
null
null
null
{ "abstract": " In this paper we introduce variable exponent local Hardy spaces associated\nwith a non-negative self-adjoint operator L. We define them by using an area\nsquare integral involving the heat semigroup associated to L. A molecular\ncharacterization is established and as an aplication of the molecular\ncharacterization we prove that our local Hardy space coincides with the\n(global) variable exponent Hardy space associated to L, provided that 0 does\nnot belong to the spectrum of L. Also, we show that it coincides with the\nglobal variable exponent Hardy space associated to L+I.\n", "title": "Local Hardy spaces with variable exponents associated to non-negative self-adjoint operators satisfying Gaussian estimates" }
null
null
null
null
true
null
16190
null
Default
null
null
null
{ "abstract": " The robust PCA problem, wherein, given an input data matrix that is the\nsuperposition of a low-rank matrix and a sparse matrix, we aim to separate out\nthe low-rank and sparse components, is a well-studied problem in machine\nlearning. One natural question that arises is that, as in the inductive\nsetting, if features are provided as input as well, can we hope to do better?\nAnswering this in the affirmative, the main goal of this paper is to study the\nrobust PCA problem while incorporating feature information. In contrast to\nprevious works in which recovery guarantees are based on the convex relaxation\nof the problem, we propose a simple iterative algorithm based on\nhard-thresholding of appropriate residuals. Under weaker assumptions than\nprevious works, we prove the global convergence of our iterative procedure;\nmoreover, it admits a much faster convergence rate and lesser computational\ncomplexity per iteration. In practice, through systematic synthetic and real\ndata simulations, we confirm our theoretical findings regarding improvements\nobtained by using feature information.\n", "title": "Provable Inductive Robust PCA via Iterative Hard Thresholding" }
null
null
null
null
true
null
16191
null
Default
null
null
null
{ "abstract": " We study the Postnikov tower of the classifying space of a compact Lie group\nP(n,mn), which gives obstructions to lifting a topological Brauer class of\nperiod $n$ to a PU_{mn}-torsor, where the base space is a CW complex of\ndimension 8. Combined with the study of a twisted version of Atiyah-Hirzebruch\nspectral sequence, this solves the topological period-index problem for CW\ncomplexes of dimension 8.\n", "title": "The Topological Period-Index Problem over 8-Complexes" }
null
null
null
null
true
null
16192
null
Default
null
null
null
{ "abstract": " Using a dataset of over 1.9 million messages posted on Twitter by about\n25,000 ISIS members, we explore how ISIS makes use of social media to spread\nits propaganda and to recruit militants from the Arab world and across the\nglobe. By distinguishing between violence-driven, theological, and sectarian\ncontent, we trace the connection between online rhetoric and key events on the\nground. To the best of our knowledge, ours is one of the first studies to focus\non Arabic content, while most literature focuses on English content. Our\nfindings yield new important insights about how social media is used by radical\nmilitant groups to target the Arab-speaking world, and reveal important\npatterns in their propaganda efforts.\n", "title": "The Rise of Jihadist Propaganda on Social Networks" }
null
null
null
null
true
null
16193
null
Default
null
null
null
{ "abstract": " The Muon g-2 Experiment plans to use the Fermilab Recycler Ring for forming\nthe proton bunches that hit its production target. The proposed scheme uses one\nRF system, 80 kV of 2.5 MHz RF. In order to avoid bunch rotations in a\nmismatched bucket, the 2.5 MHz is ramped adiabatically from 3 to 80 kV in 90\nms. In this study, the interaction of the primary proton beam with the\nproduction target for the Muon g-2 Experiment is numerically examined.\n", "title": "Simulated performance of the production target for the Muon g-2 Experiment" }
null
null
null
null
true
null
16194
null
Default
null
null
null
{ "abstract": " Collecting training data from the physical world is usually time-consuming\nand even dangerous for fragile robots, and thus, recent advances in robot\nlearning advocate the use of simulators as the training platform.\nUnfortunately, the reality gap between synthetic and real visual data prohibits\ndirect migration of the models trained in virtual worlds to the real world.\nThis paper proposes a modular architecture for tackling the virtual-to-real\nproblem. The proposed architecture separates the learning model into a\nperception module and a control policy module, and uses semantic image\nsegmentation as the meta representation for relating these two modules. The\nperception module translates the perceived RGB image to semantic image\nsegmentation. The control policy module is implemented as a deep reinforcement\nlearning agent, which performs actions based on the translated image\nsegmentation. Our architecture is evaluated in an obstacle avoidance task and a\ntarget following task. Experimental results show that our architecture\nsignificantly outperforms all of the baseline methods in both virtual and real\nenvironments, and demonstrates a faster learning curve than them. We also\npresent a detailed analysis for a variety of variant configurations, and\nvalidate the transferability of our modular architecture.\n", "title": "Virtual-to-Real: Learning to Control in Visual Semantic Segmentation" }
null
null
null
null
true
null
16195
null
Default
null
null
null
{ "abstract": " Alternating minimization, or Fienup methods, have a long history in phase\nretrieval. We provide new insights related to the empirical and theoretical\nanalysis of these algorithms when used with Fourier measurements and combined\nwith convex priors. In particular, we show that Fienup methods can be viewed as\nperforming alternating minimization on a regularized nonconvex least-squares\nproblem with respect to amplitude measurements. We then prove that under mild\nadditional structural assumptions on the prior (semi-algebraicity), the\nsequence of signal estimates has a smooth convergent behaviour towards a\ncritical point of the nonconvex regularized least-squares objective. Finally,\nwe propose an extension to Fienup techniques, based on a projected gradient\ndescent interpretation and acceleration using inertial terms. We demonstrate\nexperimentally that this modification combined with an $\\ell_1$ prior\nconstitutes a competitive approach for sparse phase retrieval.\n", "title": "On Fienup Methods for Regularized Phase Retrieval" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
16196
null
Validated
null
null
null
{ "abstract": " In this work, we develop an importance sampling estimator by coupling the\nreduced-order model and the generative model in a problem setting of\nuncertainty quantification. The target is to estimate the probability that the\nquantity of interest (QoI) in a complex system is beyond a given threshold. To\navoid the prohibitive cost of sampling a large scale system, the reduced-order\nmodel is usually considered for a trade-off between efficiency and accuracy.\nHowever, the Monte Carlo estimator given by the reduced-order model is biased\ndue to the error from dimension reduction. To correct the bias, we still need\nto sample the fine model. An effective technique to reduce the variance\nreduction is importance sampling, where we employ the generative model to\nestimate the distribution of the data from the reduced-order model and use it\nfor the change of measure in the importance sampling estimator. To compensate\nthe approximation errors of the reduced-order model, more data that induce a\nslightly smaller QoI than the threshold need to be included into the training\nset. Although the amount of these data can be controlled by a posterior error\nestimate, redundant data, which may outnumber the effective data, will be kept\ndue to the epistemic uncertainty. To deal with this issue, we introduce a\nweighted empirical distribution to process the data from the reduced-order\nmodel. The generative model is then trained by minimizing the cross entropy\nbetween it and the weighted empirical distribution. We also introduce a penalty\nterm into the objective function to deal with the overfitting for more\nrobustness. Numerical results are presented to demonstrate the effectiveness of\nthe proposed methodology.\n", "title": "Coupling the reduced-order model and the generative model for an importance sampling estimator" }
null
null
null
null
true
null
16197
null
Default
null
null
null
{ "abstract": " The existence or absence of non-analytic cusps in the Loschmidt-echo return\nrate is traditionally employed to distinguish between a regular dynamical phase\n(regular cusps) and a trivial phase (no cusps) in quantum spin chains after a\nglobal quench. However, numerical evidence in a recent study [J. C. Halimeh and\nV. Zauner-Stauber, arXiv:1610.02019] suggests that instead of the trivial phase\na distinct anomalous dynamical phase characterized by a novel type of\nnon-analytic cusps occurs in the one-dimensional transverse-field Ising model\nwhen interactions are sufficiently long-range. Using an analytic semiclassical\napproach and exact diagonalization, we show that this anomalous phase also\narises in the fully-connected case of infinite-range interactions, and we\ndiscuss its defining signature. Our results show that the transition from the\nregular to the anomalous dynamical phase coincides with Z2-symmetry breaking in\nthe infinite-time limit, thereby showing a connection between two different\nconcepts of dynamical criticality. Our work further expands the dynamical phase\ndiagram of long-range interacting quantum spin chains, and can be tested\nexperimentally in ion-trap setups and ultracold atoms in optical cavities,\nwhere interactions are inherently long-range.\n", "title": "Anomalous dynamical phase in quantum spin chains with long-range interactions" }
null
null
null
null
true
null
16198
null
Default
null
null
null
{ "abstract": " We explore ways of creating cold keV-scale dark matter by means of decays and\nscatterings. The main observation is that certain thermal freeze-in processes\ncan lead to a cold dark matter distribution in regions with small available\nphase space. In this way the free-streaming length of keV particles can be\nsuppressed without decoupling them too much from the Standard Model. In all\ncases, dark matter needs to be produced together with a heavy particle that\ncarries away most of the initial momentum. For decays, this simply requires an\noff-diagonal DM coupling to two heavy particles; for scatterings, the coupling\nof soft DM to two heavy particles needs to be diagonal, in particular in spin\nspace. Decays can thus lead to cold light DM of any spin, while scatterings\nonly work for bosons with specific couplings. We explore a number of simple\nmodels and also comment on the connection to the tentative 3.5 keV line.\n", "title": "Cold keV dark matter from decays and scatterings" }
null
null
null
null
true
null
16199
null
Default
null
null
null
{ "abstract": " Developing algorithms for solving high-dimensional partial differential\nequations (PDEs) has been an exceedingly difficult task for a long time, due to\nthe notoriously difficult problem known as the \"curse of dimensionality\". This\npaper introduces a deep learning-based approach that can handle general\nhigh-dimensional parabolic PDEs. To this end, the PDEs are reformulated using\nbackward stochastic differential equations and the gradient of the unknown\nsolution is approximated by neural networks, very much in the spirit of deep\nreinforcement learning with the gradient acting as the policy function.\nNumerical results on examples including the nonlinear Black-Scholes equation,\nthe Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that\nthe proposed algorithm is quite effective in high dimensions, in terms of both\naccuracy and cost. This opens up new possibilities in economics, finance,\noperational research, and physics, by considering all participating agents,\nassets, resources, or particles together at the same time, instead of making ad\nhoc assumptions on their inter-relationships.\n", "title": "Solving high-dimensional partial differential equations using deep learning" }
null
null
null
null
true
null
16200
null
Default
null
null