Dataset schema (field name: dtype):

- text: null
- inputs: dict
- prediction: null
- prediction_agent: null
- annotation: list
- annotation_agent: null
- multi_label: bool (1 class)
- explanation: null
- id: string (lengths 1-5)
- metadata: null
- status: string (2 classes)
- event_timestamp: null
- metrics: null
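The header above is the flattened column schema of an annotation-record dump: each record occupies exactly one value per field, in schema order. A minimal sketch of how such a flat value stream regroups into records — `rows_to_records` and the sample values below are illustrative helpers, not part of any dataset tooling, and the `inputs` abstract is omitted for brevity:

```python
# Field order from the schema header; one flattened record occupies
# exactly one value per field, in this order.
FIELDS = [
    "text", "inputs", "prediction", "prediction_agent", "annotation",
    "annotation_agent", "multi_label", "explanation", "id", "metadata",
    "status", "event_timestamp", "metrics",
]

def rows_to_records(values):
    """Group a flat list of values into dicts of len(FIELDS) fields each."""
    n = len(FIELDS)
    return [dict(zip(FIELDS, values[i:i + n])) for i in range(0, len(values), n)]

# Values of the first record (id 3301) as they appear in the dump;
# the abstract inside "inputs" is omitted here for brevity.
flat = [
    None,
    {"title": "Robust 3D Distributed Formation Control with Application to Quadrotors"},
    None, None, None, None, True, None, 3301, None, "Default", None, None,
]
record = rows_to_records(flat)[0]
print(record["id"], record["status"], record["multi_label"])  # → 3301 Default True
```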
{ "abstract": " We present a distributed control strategy for a team of quadrotors to autonomously achieve a desired 3D formation. Our approach is based on local relative position measurements and does not require global position information or inter-vehicle communication. We assume that quadrotors have a common sense of direction, which is chosen as the direction of gravitational force measured by their onboard IMU sensors. However, this assumption is not crucial, and our approach is robust to inaccuracies and effects of acceleration on gravitational measurements. In particular, convergence to the desired formation is unaffected if each quadrotor has a velocity vector that projects positively onto the desired velocity vector provided by the formation control strategy. We demonstrate the validity of the proposed approach in an experimental setup and show that a team of quadrotors achieves a desired 3D formation.\n", "title": "Robust 3D Distributed Formation Control with Application to Quadrotors" }
id: 3301 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Our contribution is to widen the scope of extreme value analysis applied to\ndiscrete-valued data. Extreme values of a random variable $X$ are commonly\nmodeled using the generalized Pareto distribution, a method that often gives\ngood results in practice. When $X$ is discrete, we propose two other methods\nusing a discrete generalized Pareto and a generalized Zipf distribution\nrespectively. Both are theoretically motivated and we show that they perform\nwell in estimating rare events in several simulated and real data cases such as\nword frequency, tornado outbreaks and multiple births.\n", "title": "Discrete Extremes" }
id: 3302 — status: Validated — multi_label: true — annotation: ["Mathematics", "Statistics"] — all other fields: null

{ "abstract": " We describe a mechanism by which artificial neural networks can learn rapid\nadaptation - the ability to adapt on the fly, with little data, to new tasks -\nthat we call conditionally shifted neurons. We apply this mechanism in the\nframework of metalearning, where the aim is to replicate some of the\nflexibility of human learning in machines. Conditionally shifted neurons modify\ntheir activation values with task-specific shifts retrieved from a memory\nmodule, which is populated rapidly based on limited task experience. On\nmetalearning benchmarks from the vision and language domains, models augmented\nwith conditionally shifted neurons achieve state-of-the-art results.\n", "title": "Rapid Adaptation with Conditionally Shifted Neurons" }
id: 3303 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We propose a novel approach for the generation of polyphonic music based on LSTMs. We generate music in two steps. First, a chord LSTM predicts a chord progression based on a chord embedding. A second LSTM then generates polyphonic music from the predicted chord progression. The generated music sounds pleasing and harmonic, with only a few dissonant notes. It has a clear long-term structure that is similar to what a musician would play during a jam session. We show that our approach is sensible from a music theory perspective by evaluating the learned chord embeddings. Surprisingly, our simple model managed to extract the circle of fifths, an important tool in music theory, from the dataset.\n", "title": "JamBot: Music Theory Aware Chord Based Generation of Polyphonic Music with LSTMs" }
id: 3304 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Following a new microlensing constraint on primordial black holes (PBHs) with\n$\\sim10^{20}$--$10^{28}\\,\\mathrm{g}$~[1], we revisit the idea of PBH as all\nDark Matter (DM). We have shown that the updated observational constraints\nsuggest the viable mass function for PBHs as all DM to have a peak at $\\simeq\n10^{20}\\,\\mathrm{g}$ with a small width $\\sigma \\lesssim 0.1$, by imposing\nobservational constraints on an extended mass function in a proper way. We have\nalso provided an inflation model that successfully generates PBHs as all DM\nfulfilling this requirement.\n", "title": "Inflationary Primordial Black Holes as All Dark Matter" }
id: 3305 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " The concept of the condition number $\kappa(A) = \|A\|\|A^{-1}\|$, where $A$ is an $n \times n$ real or complex matrix and the norm used is the spectral norm, is well known. Although it is very common to think of $\kappa(A)$ as \"the\" condition number of $A$, the truth is that condition numbers are associated with problems, not just instances of problems. Our goal is to clarify this difference. We will introduce the general concept of a condition number and apply it to the particular case of real or complex matrices. After this, we will introduce the classic condition number $\kappa(A)$ of a matrix and show some known results.\n", "title": "Condition number and matrices" }
id: 3306 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " In this paper, we study further properties and applications of weighted\nhomology and persistent homology. We introduce the Mayer-Vietoris sequence and\ngeneralized Bockstein spectral sequence for weighted homology. For\napplications, we show an algorithm to construct a filtration of weighted\nsimplicial complexes from a weighted network. We also prove a theorem that\nallows us to calculate the mod $p^2$ weighted persistent homology given some\ninformation on the mod $p$ weighted persistent homology.\n", "title": "Computational Tools in Weighted Persistent Homology" }
id: 3307 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Honeycomb structures of group IV elements can host massless Dirac fermions\nwith non-trivial Berry phases. Their potential for electronic applications has\nattracted great interest and spurred a broad search for new Dirac materials\nespecially in monolayer structures. We present a detailed investigation of the\n\\beta 12 boron sheet, which is a borophene structure that can form\nspontaneously on a Ag(111) surface. Our tight-binding analysis revealed that\nthe lattice of the \\beta 12-sheet could be decomposed into two triangular\nsublattices in a way similar to that for a honeycomb lattice, thereby hosting\nDirac cones. Furthermore, each Dirac cone could be split by introducing\nperiodic perturbations representing overlayer-substrate interactions. These\nunusual electronic structures were confirmed by angle-resolved photoemission\nspectroscopy and validated by first-principles calculations. Our results\nsuggest monolayer boron as a new platform for realizing novel high-speed\nlow-dissipation devices.\n", "title": "Dirac fermions in borophene" }
id: 3308 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " The motion of electrons and nuclei in photochemical events often involves conical intersections, degeneracies between electronic states. They serve as funnels for nuclear relaxation - on the femtosecond scale - in processes where the electrons and nuclei couple nonadiabatically. Accurate ab initio quantum chemical models are essential for interpreting experimental measurements of such phenomena. In this paper we resolve a long-standing problem in coupled cluster theory, presenting the first formulation of the theory that correctly describes conical intersections between excited electronic states of the same symmetry. This new development demonstrates that highly accurate coupled cluster theory can be applied to describe dynamics on excited electronic states involving conical intersections.\n", "title": "Resolving the notorious case of conical intersections for coupled cluster dynamics" }
id: 3309 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " With the advent of public access to small gate-based quantum processors, it\nbecomes necessary to develop a benchmarking methodology such that independent\nresearchers can validate the operation of these processors. We explore the\nusefulness of a number of simple quantum circuits as benchmarks for gate-based\nquantum computing devices and show that circuits performing identity operations\nare very simple, scalable and sensitive to gate errors and are therefore very\nwell suited for this task. We illustrate the procedure by presenting benchmark\nresults for the IBM Quantum Experience, a cloud-based platform for gate-based\nquantum computing.\n", "title": "Benchmarking gate-based quantum computers" }
id: 3310 — status: Validated — multi_label: true — annotation: ["Physics"] — all other fields: null

{ "abstract": " In our previous works, a relationship between Hermite's two approximation\nproblems and Schlesinger transformations of linear differential equations has\nbeen clarified. In this paper, we study tau-functions associated with holonomic\ndeformations of linear differential equations by using Hermite's two\napproximation problems. As a result, we present a determinant formula for the\nratio of tau-functions (tau-quotient).\n", "title": "Determinant structure for tau-function of holonomic deformation of linear differential equations" }
id: 3311 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " In this paper we develop a linear transfer Perron-Frobenius operator-based approach for the optimal stabilization of stochastic nonlinear systems. One of the main highlights of the proposed transfer operator-based approach is that both the theory and the computational framework developed for the optimal stabilization of deterministic dynamical systems in [1] carry over to the stochastic case with little change. The optimal stabilization problem is formulated as an infinite-dimensional linear program. Set-oriented numerical methods are proposed for the finite-dimensional approximation of the transfer operator and the controller. Simulation results are presented to verify the developed framework.\n", "title": "Transfer Operator Based Approach for Optimal Stabilization of Stochastic System" }
id: 3312 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Machine learning algorithms based on deep neural networks have achieved remarkable results and are being extensively used in different domains. However, these algorithms require access to raw data, which is often privacy sensitive. To address this issue, we develop new techniques to provide solutions for running deep neural networks over encrypted data. In this paper, we develop new techniques to adopt deep neural networks within the practical limitations of current homomorphic encryption schemes. More specifically, we focus on classification with the well-known convolutional neural networks (CNNs). First, we design methods for approximation of the activation functions commonly used in CNNs (i.e. ReLU, Sigmoid, and Tanh) with low-degree polynomials, which is essential for efficient homomorphic encryption schemes. Then, we train convolutional neural networks with the approximation polynomials instead of the original activation functions and analyze the performance of the models. Finally, we implement convolutional neural networks over encrypted data and measure the performance of the models. Our experimental results validate the soundness of our approach with several convolutional neural networks with varying numbers of layers and structures. When applied to the MNIST optical character recognition task, our approach achieves 99.52\% accuracy, which significantly outperforms the state-of-the-art solutions and is very close to the accuracy of the best non-private version, 99.77\%. Also, it can make close to 164000 predictions per hour. We also applied our approach to CIFAR-10, which is much more complex compared to MNIST, and were able to achieve 91.5\% accuracy with approximation polynomials used as activation functions. These results show that CryptoDL provides efficient, accurate and scalable privacy-preserving predictions.\n", "title": "CryptoDL: Deep Neural Networks over Encrypted Data" }
id: 3313 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We analyze large-scale data sets about collaborations from two different\ndomains: economics, specifically 22.000 R&D alliances between 14.500 firms, and\nscience, specifically 300.000 co-authorship relations between 95.000\nscientists. Considering the different domains of the data sets, we address two\nquestions: (a) to what extent do the collaboration networks reconstructed from\nthe data share common structural features, and (b) can their structure be\nreproduced by the same agent-based model. In our data-driven modeling approach\nwe use aggregated network data to calibrate the probabilities at which agents\nestablish collaborations with either newcomers or established agents. The model\nis then validated by its ability to reproduce network features not used for\ncalibration, including distributions of degrees, path lengths, local clustering\ncoefficients and sizes of disconnected components. Emphasis is put on comparing\ndomains, but also sub-domains (economic sectors, scientific specializations).\nInterpreting the link probabilities as strategies for link formation, we find\nthat in R&D collaborations newcomers prefer links with established agents,\nwhile in co-authorship relations newcomers prefer links with other newcomers.\nOur results shed new light on the long-standing question about the role of\nendogenous and exogenous factors (i.e., different information available to the\ninitiator of a collaboration) in network formation.\n", "title": "Data-driven modeling of collaboration networks: A cross-domain analysis" }
id: 3314 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We study quantum tunnelling in Dante's Inferno model of large field\ninflation. Such a tunnelling process, which will terminate inflation, becomes\nproblematic if the tunnelling rate is rapid compared to the Hubble time scale\nat the time of inflation. Consequently, we constrain the parameter space of\nDante's Inferno model by demanding a suppressed tunnelling rate during\ninflation. The constraints are derived and explicit numerical bounds are\nprovided for representative examples. Our considerations are at the level of an\neffective field theory; hence, the presented constraints have to hold\nregardless of any UV completion.\n", "title": "Tunnelling in Dante's Inferno" }
id: 3315 — status: Validated — multi_label: true — annotation: ["Physics"] — all other fields: null

{ "abstract": " Highly automated robot ecologies (HARE), or societies of independent\nautonomous robots or agents, are rapidly becoming an important part of much of\nthe world's critical infrastructure. As with human societies, regulation,\nwherein a governing body designs rules and processes for the society, plays an\nimportant role in ensuring that HARE meet societal objectives. However, to\ndate, a careful study of interactions between a regulator and HARE is lacking.\nIn this paper, we report on three user studies which give insights into how to\ndesign systems that allow people, acting as the regulatory authority, to\neffectively interact with HARE. As in the study of political systems in which\ngovernments regulate human societies, our studies analyze how interactions\nbetween HARE and regulators are impacted by regulatory power and individual\n(robot or agent) autonomy. Our results show that regulator power, decision\nsupport, and adaptive autonomy can each diminish the social welfare of HARE,\nand hint at how these seemingly desirable mechanisms can be designed so that\nthey become part of successful HARE.\n", "title": "Regulating Highly Automated Robot Ecologies: Insights from Three User Studies" }
id: 3316 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " The multilinear normal distribution is a widely used tool in tensor analysis\nof magnetic resonance imaging (MRI). Diffusion tensor MRI provides a\nstatistical estimate of a symmetric 2nd-order diffusion tensor, for each voxel\nwithin an imaging volume. In this article, tensor elliptical (TE) distribution\nis introduced as an extension to the multilinear normal (MLN) distribution.\nSome properties including the characteristic function and distribution of\naffine transformations are given. An integral representation connecting\ndensities of TE and MLN distributions is exhibited that is used in deriving the\nexpectation of any measurable function of a TE variate.\n", "title": "Some theoretical results on tensor elliptical distribution" }
id: 3317 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Data driven activism attempts to collect, analyze and visualize data to\nfoster social change. However, during media censorship it is often impossible\nto collect such data. Here we demonstrate that data from personal stories can\nalso help us to gain insights about protests and activism which can work as a\nvoice for the activists. We analyze protest story data by extracting location\nnetwork from the stories and perform emotion mining to get insight about the\nprotest.\n", "title": "Applying Text Mining to Protest Stories as Voice against Media Censorship" }
id: 3318 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Deep convolutional neural networks have achieved great success in various applications. However, training an effective DNN model for a specific task is rather challenging because it requires prior knowledge or experience to design the network architecture, a repeated trial-and-error process to tune the parameters, and a large set of labeled data to train the model. In this paper, we propose to overcome these challenges by actively adapting a pre-trained model to a new task with fewer labeled examples. Specifically, the pre-trained model is iteratively fine-tuned based on the most useful examples. The examples are actively selected based on a novel criterion, which jointly estimates the potential contribution of an instance to optimizing the feature representation as well as improving the classification model for the target task. On one hand, the pre-trained model brings plentiful information from its original task, avoiding redesign of the network architecture or training from scratch; and on the other hand, the labeling cost can be significantly reduced by active label querying. Experiments on multiple datasets and different pre-trained models demonstrate that the proposed approach can achieve cost-effective training of DNNs.\n", "title": "Cost-Effective Training of Deep CNNs with Active Model Adaptation" }
id: 3319 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Nanographitic structures (NGSs) with a multitude of morphological features are grown on SiO2/Si substrates by electron cyclotron resonance - plasma enhanced chemical vapor deposition (ECR-PECVD). CH4 is used as the source gas with Ar and H2 as dilutants. Field emission scanning electron microscopy, high resolution transmission electron microscopy (HRTEM) and Raman spectroscopy are used to study the structural and morphological features of the grown films. Herein, we demonstrate how the morphology can be tuned from planar to vertical structures using a single control parameter, namely the dilution of CH4 with Ar and/or H2. Our results show that competitive growth and etching processes dictate the morphology of the NGSs. While Ar-rich composition favors vertically oriented graphene nanosheets, H2-rich composition aids the growth of planar films. Raman analysis reveals that dilution of CH4 with either Ar or H2, or both in combination, helps to improve the structural quality of the films. Line shape analysis of the Raman 2D band shows a nearly symmetric Lorentzian profile, which confirms the turbostratic nature of the grown NGSs. Further, this aspect is elucidated by HRTEM studies through observation of elliptical diffraction patterns. Based on these experiments, a comprehensive understanding is obtained of the growth and structural properties of NGSs grown over a wide range of feedstock compositions.\n", "title": "Flipping growth orientation of nanographitic structures by plasma enhanced chemical vapor deposition" }
id: 3320 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We present the first theoretical evidence of zero magnetic field topological\n(anomalous) thermal Hall effect due to Weyl magnons. Here, we consider Weyl\nmagnons in stacked noncoplanar frustrated kagomé antiferromagnets recently\nproposed by Owerre, [arXiv:1708.04240]. The Weyl magnons in this system result\nfrom macroscopically broken time-reversal symmetry by the scalar spin chirality\nof noncoplanar chiral spin textures. Most importantly, they come from the\nlowest excitation, therefore they can be easily observed experimentally at low\ntemperatures due to the population effect. Similar to electronic Weyl nodes\nclose to the Fermi energy, Weyl magnon nodes in the lowest excitation are the\nmost important. Indeed, we show that the topological (anomalous) thermal Hall\neffect in this system arises from nonvanishing Berry curvature due to Weyl\nmagnon nodes in the lowest excitation, and it depends on their distribution\n(distance) in momentum space. The present result paves the way to directly\nprobe low excitation Weyl magnons and macroscopically broken time-reversal\nsymmetry in three-dimensional frustrated magnets with the anomalous thermal\nHall effect.\n", "title": "Topological thermal Hall effect due to Weyl magnons" }
id: 3321 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We present a simple electromechanical finite difference model to study the response of a piezoelectric polyvinylidene fluoride (PVDF) transducer to optoacoustic (OA) pressure waves in the acoustic nearfield prior to thermal relaxation of the OA source volume. The assumption of nearfield conditions, i.e. the absence of acoustic diffraction, allows us to treat the problem using a one-dimensional numerical approach. Therein, the computational domain is modeled as an inhomogeneous elastic medium, characterized by its local wave velocities and densities, allowing us to explore the effect of stepwise impedance changes on the stress wave propagation. The transducer is modeled as a thin piezoelectric sensing layer and the electromechanical coupling is accomplished by means of the respective linear constitutive equations. Considering a low-pass characteristic of the full experimental setup, we obtain the resulting transducer signal. Complementing transducer signals measured in a controlled laboratory experiment with numerical simulations that result from a model of the experimental setup, we find that, bearing in mind the apparent limitations of the one-dimensional approach, the simulated transducer signals can be used very well to predict and interpret the experimental findings.\n", "title": "Numerical prediction of the piezoelectric transducer response in the acoustic nearfield using a one-dimensional electromechanical finite difference approach" }
id: 3322 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Frustrated Lewis pairs (FLPs) are known for their ability to capture CO2. Although many FLPs have been reported experimentally and several theoretical studies have been carried out to address the reaction mechanism, the individual roles of the Lewis acids and bases of an FLP in the capture of CO2 are still unclear. In this study, we employed density functional theory (DFT) based metadynamics simulations to investigate the complete path for the capture of CO2 by the tBu3P/B(C6F5)3 pair, and to understand the role of the Lewis acid and base. Interestingly, we have found that the Lewis acids play a more important role than the Lewis bases. Specifically, the Lewis acids are crucial for the catalytic properties and are responsible for both kinetic and thermodynamic control. The Lewis bases, however, have less impact on the catalytic performance and are mainly responsible for the formation of FLP systems. Based on these findings, we propose a rule of thumb for the future synthesis of FLP-based catalysts for the utilization of CO2.\n", "title": "A free energy landscape of the capture of CO2 by frustrated Lewis pairs" }
id: 3323 — status: Validated — multi_label: true — annotation: ["Physics"] — all other fields: null

{ "abstract": " Cataloging is challenging in crowded fields because sources are extremely covariant with their neighbors and blending makes even the number of sources ambiguous. We present the first optical probabilistic catalog, cataloging a crowded (~0.1 sources per pixel brighter than 22nd magnitude in F606W) Sloan Digital Sky Survey r band image from M2. Probabilistic cataloging returns an ensemble of catalogs inferred from the image and thus can capture source-source covariance and deblending ambiguities. By comparing to a traditional catalog of the same image and a Hubble Space Telescope catalog of the same region, we show that our catalog ensemble better recovers sources from the image. It goes more than a magnitude deeper than the traditional catalog while having a lower false discovery rate brighter than 20th magnitude. We also present an algorithm for reducing this catalog ensemble to a condensed catalog that is similar to a traditional catalog, except that it explicitly marginalizes over source-source covariances and nuisance parameters. We show that this condensed catalog has a similar completeness and false discovery rate to the catalog ensemble. Future telescopes will be more sensitive, and thus more of their images will be crowded. Probabilistic cataloging performs better than existing software in crowded fields and so should be considered when creating photometric pipelines in the Large Synoptic Survey Telescope era.\n", "title": "Improved Point Source Detection in Crowded Fields using Probabilistic Cataloging" }
id: 3324 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " We analyze the local convergence of proximal splitting algorithms to solve\noptimization problems that are convex besides a rank constraint. For this, we\nshow conditions under which the proximal operator of a function involving the\nrank constraint is locally identical to the proximal operator of its convex\nenvelope, hence implying local convergence. The conditions imply that the\nnon-convex algorithms locally converge to a solution whenever a convex\nrelaxation involving the convex envelope can be expected to solve the\nnon-convex problem.\n", "title": "Local Convergence of Proximal Splitting Methods for Rank Constrained Problems" }
id: 3325 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Causal relationships among variables are commonly represented via directed\nacyclic graphs. There are many methods in the literature to quantify the\nstrength of arrows in a causal acyclic graph. These methods, however, have\nundesirable properties when the causal system represented by a directed acyclic\ngraph is degenerate. In this paper, we characterize a degenerate causal system\nusing multiplicity of Markov boundaries, and show that in this case, it is\nimpossible to quantify causal effects in a reasonable fashion. We then propose\nalgorithms to identify such degenerate scenarios from observed data.\nPerformance of our algorithms is investigated through synthetic data analysis.\n", "title": "On the boundary between qualitative and quantitative measures of causal effects" }
id: 3326 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " The holy grail of deep learning is to come up with an automatic method to\ndesign optimal architectures for different applications. In other words, how\ncan we effectively dimension and organize neurons along the network layers\nbased on the computational resources, input size, and amount of training data?\nWe outline promising research directions based on polyhedral theory and\nmixed-integer representability that may offer an analytical approach to this\nquestion, in contrast to the empirical techniques often employed.\n", "title": "How Could Polyhedral Theory Harness Deep Learning?" }
id: 3327 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " It is shown that for a given ordered node-labelled tree of size $n$ and with\n$s$ many different node labels, one can construct in linear time a top dag of\nheight $O(\\log n)$ and size $O(n / \\log_\\sigma n) \\cap O(d \\cdot \\log n)$,\nwhere $\\sigma = \\max\\{ 2, s\\}$ and $d$ is the size of the minimal dag. The size\nbound $O(n / \\log_\\sigma n)$ is optimal and improves on previous bounds.\n", "title": "Optimal top dag compression" }
id: 3328 — status: Validated — multi_label: true — annotation: ["Computer Science"] — all other fields: null

{ "abstract": " Delay is omnipresent in modern control systems; it can prompt oscillations, cause deterioration of control performance, and invalidate both stability and safety properties. This implies that safety or stability certificates obtained on idealized, delay-free models of systems prone to delayed coupling may be erratic, and, further, that the executable code generated from these models may be incorrect. However, automated methods for system verification and code generation that address models of system dynamics reflecting delays have not yet received enough attention in the computer science community. In our previous work, on one hand, we investigated the verification of delay dynamical and hybrid systems; on the other hand, we also addressed how to synthesize SystemC code from a verified hybrid system modelled by Hybrid CSP (HCSP) without delay. In this paper, we give a first attempt to synthesize SystemC code from a verified delay hybrid system modelled by Delay HCSP (dHCSP), which is an extension of HCSP obtained by replacing ordinary differential equations (ODEs) with delay differential equations (DDEs). We implement a tool to support the automatic translation from dHCSP to SystemC.\n", "title": "Synthesizing SystemC Code from Delay Hybrid CSP" }
id: 3329 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " This paper discusses the time series trend and variability of cyclone frequencies over the Bay of Bengal, particularly in order to conclude whether there is any significant difference in the pattern before and after the disastrous 2004 Indian Ocean tsunami, based on the observed annual cyclone frequency data obtained by the India Meteorological Department over the years 1891-2015. Three different categories of cyclones - depression (<34 knots), cyclonic storm (34-47 knots) and severe cyclonic storm (>47 knots) - have been analyzed separately using a non-homogeneous Poisson process approach. The estimated intensity functions of the Poisson processes along with their first two derivatives are discussed, and all three categories show a decreasing trend of the intensity functions after the tsunami. Using an exact change-point analysis, we show that the drops in the mean intensity functions are significant for all three categories. To the authors' knowledge, no study so far has discussed the relation between cyclones and tsunamis. The Bay of Bengal is surrounded by some of the most densely populated areas of the world, and any significant change in tropical cyclone patterns has a large impact in various ways, for example on disaster management planning; our study is immensely important from that perspective.\n", "title": "Analysis of Annual Cyclone Frequencies over Bay of Bengal: Effect of 2004 Indian Ocean Tsunami" }
id: 3330 — status: Default — multi_label: true — annotation: null — all other fields: null

{ "abstract": " Dietzfelbinger and Weidling [DW07] proposed a natural variation of cuckoo\nhashing where each of $cn$ objects is assigned $k = 2$ intervals of size $\\ell$\nin a linear (or cyclic) hash table of size $n$ and both start points are chosen\nindependently and uniformly at random. Each object must be placed into a table\ncell within its intervals, but each cell can only hold one object. Experiments\nsuggested that this scheme outperforms the variant with blocks in which\nintervals are aligned at multiples of $\\ell$. In particular, the load threshold\nis higher, i.e. the load $c$ that can be achieved with high probability. For\ninstance, Lehman and Panigrahy [LP09] empirically observed the threshold for\n$\\ell = 2$ to be around $96.5\\%$ as compared to roughly $89.7\\%$ using blocks.\nThey managed to pin down the asymptotics of the thresholds for large $\\ell$,\nbut the precise values resisted rigorous analysis.\nWe establish a method to determine these load thresholds for all $\\ell \\geq\n2$, and, in fact, for general $k \\geq 2$. For instance, for $k = \\ell = 2$ we\nget $\\approx 96.4995\\%$. The key tool we employ is an insightful and general\ntheorem due to Leconte, Lelarge, and Massoulié [LLM13], which adapts methods\nfrom statistical physics to the world of hypergraph orientability. In effect,\nthe orientability thresholds for our graph families are determined by belief\npropagation equations for certain graph limits. As a side note we provide\nexperimental evidence suggesting that placements can be constructed in linear\ntime with loads close to the threshold using an adapted version of an algorithm\nby Khosla [Kho13].\n", "title": "Load Thresholds for Cuckoo Hashing with Overlapping Blocks" }
null
null
null
null
true
null
3331
null
Default
null
null
null
{ "abstract": " Consider the problem of modeling memory for discrete-state random walks using\nhigher-order Markov chains. This Letter introduces a general Bayesian framework\nunder the principle of minimizing prediction error to select, from data, the\nnumber of prior states of recent history upon which a trajectory is\nstatistically dependent. In this framework, I provide closed-form expressions\nfor several alternative model selection criteria that approximate model\nprediction error for future data. Using simulations, I evaluate the statistical\npower of these criteria. These methods, when applied to data from the\n2016--2017 NBA season, demonstrate evidence of statistical dependencies in\nLeBron James' free throw shooting. In particular, a model depending on the\nprevious shot (single-step Markovian) is approximately as predictive as a model\nwith independent outcomes. A hybrid jagged model of two parameters, where James\nshoots a higher percentage after a missed free throw than otherwise, is more\npredictive than either model.\n", "title": "Evaluating the hot hand phenomenon using predictive memory selection for multistep Markov Chains: LeBron James' error correcting free throws" }
null
null
null
null
true
null
3332
null
Default
null
null
null
{ "abstract": " We show that a closed almost Kähler 4-manifold of globally constant\nholomorphic sectional curvature $k\\geq 0$ with respect to the canonical\nHermitian connection is automatically Kähler. The same result holds for $k<0$\nif we require in addition that the Ricci curvature is J-invariant. The proofs\nare based on the observation that such manifolds are self-dual, so that\nChern-Weil theory implies useful integral formulas, which are then combined\nwith results from Seiberg--Witten theory.\n", "title": "Closed almost-Kähler 4-manifolds of constant non-negative Hermitian holomorphic sectional curvature are Kähler" }
null
null
null
null
true
null
3333
null
Default
null
null
null
{ "abstract": " In this study, we consider preliminary test and shrinkage estimation\nstrategies for quantile regression models. In the classical Least Squares\nEstimation (LSE) method, the relationship between the explanatory and explained\nvariables in the coordinate plane is estimated with a mean regression line. In\norder to use LSE, three main assumptions on the error terms of the regression\nmodel, which form a white noise process and are also known as the Gauss-Markov\nassumptions, must be met: (1) the error terms have zero mean, (2) the variance\nof the error terms is constant and (3) the covariance between the errors is\nzero, i.e., there is no autocorrelation. However, data in many areas, including\neconometrics, survival analysis and ecology, often do not satisfy these\nassumptions. First introduced by Koenker, quantile regression has been used to\ncomplement this deficiency of classical regression analysis and to improve the\nleast squares estimation. The aim of this study is to improve the performance of\nquantile regression estimators by using pre-test and shrinkage strategies. A\nMonte Carlo simulation study, including a comparison with quantile $L_1$--type\nestimators such as Lasso, Ridge and Elastic Net, is designed to evaluate the\nperformances of the estimators. Two real data examples are given for\nillustrative purposes. Finally, we obtain the asymptotic results of the suggested\nestimators.\n", "title": "Pretest and Stein-Type Estimations in Quantile Regression Model" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
3334
null
Validated
null
null
null
{ "abstract": " Conventional fracture data collection methods are usually implemented on\nplanar surfaces or assuming they are planar; these methods may introduce\nsampling errors on uneven outcrop surfaces. Consequently, data collected on\nlimited types of outcrop surfaces (mainly bedding surfaces) may not be a\nsufficient representation of fracture network characteristics in outcrops.\nRecent development of techniques that obtain DOMs from outcrops and extract the\nfull extent of individual fractures offers the opportunity to address the\nproblem of performing the conventional sampling methods on uneven outcrop\nsurfaces. In this study, we propose a new method that performs outcrop fracture\ncharacterization on suppositional planes cutting through DOMs. The\nsuppositional plane is the best fit plane of the outcrop surface, and the\nfracture trace map is extracted on the suppositional plane so that the fracture\nnetwork can be further characterized. The amount of sampling error introduced\nby the conventional methods and avoided by the new method on 16 uneven outcrop\nsurfaces with different roughnesses is estimated. The results show that the\nconventional sampling methods do not apply to outcrops other than bedding\nsurfaces or outcrops whose roughness > 0.04 m, and that the proposed method can\ngreatly extend the types of outcrop surfaces for outcrop fracture\ncharacterization with the suppositional plane cutting through DOMs.\n", "title": "Outcrop fracture characterization on suppositional planes cutting through digital outcrop models (DOMs)" }
null
null
[ "Computer Science", "Physics" ]
null
true
null
3335
null
Validated
null
null
null
{ "abstract": " Influenza remains a significant burden on health systems. Effective responses\nrely on the timely understanding of the magnitude and the evolution of an\noutbreak. For monitoring purposes, data on severe cases of influenza in England\nare reported weekly to Public Health England. These data are both readily\navailable and have the potential to provide valuable information to estimate\nand predict the key transmission features of seasonal and pandemic influenza.\nWe propose an epidemic model that links the underlying unobserved influenza\ntransmission process to data on severe influenza cases. Within a Bayesian\nframework, we infer retrospectively the parameters of the epidemic model for\neach seasonal outbreak from 2012 to 2015, including: the effective reproduction\nnumber; the initial susceptibility; the probability of admission to intensive\ncare given infection; and the effect of school closure on transmission. The\nmodel is also implemented in real time to assess whether early forecasting of\nthe number of admission to intensive care is possible. Our model of admissions\ndata allows reconstruction of the underlying transmission dynamics revealing:\nincreased transmission during the season 2013/14 and a noticeable effect of\nChristmas school holiday on disease spread during season 2012/13 and 2014/15.\nWhen information on the initial immunity of the population is available,\nforecasts of the number of admissions to intensive care can be substantially\nimproved. Readily available severe case data can be effectively used to\nestimate epidemiological characteristics and to predict the evolution of an\nepidemic, crucially allowing real-time monitoring of the transmission and\nseverity of the outbreak.\n", "title": "Exploiting routinely collected severe case data to monitor and predict influenza outbreaks" }
null
null
null
null
true
null
3336
null
Default
null
null
null
{ "abstract": " This paper shows how to recover stochastic volatility models (SVMs) from\nmarket models for the VIX futures term structure. Market models have more\nflexibility for fitting of curves than do SVMs, and therefore they are\nbetter-suited for pricing VIX futures and derivatives. But the VIX itself is a\nderivative of the S&P500 (SPX) and it is common practice to price SPX\nderivatives using an SVM. Hence, a consistent model for both SPX and VIX\nderivatives would be one where the SVM is obtained by inverting the market\nmodel. This paper's main result is a method for the recovery of a stochastic\nvolatility function as the output of an inverse problem, with the inputs given\nby a VIX futures market model. Analysis will show that some conditions need to\nbe met in order for there to not be any inter-model arbitrage or mis-priced\nderivatives. Given these conditions the inverse problem can be solved. Several\nmodels are analyzed and explored numerically to gain a better understanding of\nthe theory and its limitations.\n", "title": "Consistent Inter-Model Specification for Time-Homogeneous SPX Stochastic Volatility and VIX Market Models" }
null
null
null
null
true
null
3337
null
Default
null
null
null
{ "abstract": " Mobile edge computing (MEC) is expected to be an effective solution to\ndeliver 360-degree virtual reality (VR) videos over wireless networks. In\ncontrast to previous computation-constrained MEC framework, which reduces the\ncomputation-resource consumption at the mobile VR device by increasing the\ncommunication-resource consumption, we develop a communications-constrained MEC\nframework to reduce communication-resource consumption by increasing the\ncomputation-resource consumption and exploiting the caching resources at the\nmobile VR device in this paper. Specifically, according to the task\nmodularization, the MEC server can only deliver the components which have not\nbeen stored in the VR device, and then the VR device uses the received\ncomponents and the corresponding cached components to construct the task,\nresulting in low communication-resource consumption but high delay. The MEC\nserver can also compute the task by itself to reduce the delay, however, it\nconsumes more communication-resource due to the delivery of entire task.\nTherefore, we then propose a task scheduling strategy to decide which\ncomputation model should the MEC server operates, in order to minimize the\ncommunication-resource consumption under the delay constraint. Finally, we\ndiscuss the tradeoffs between communications, computing, and caching in the\nproposed system.\n", "title": "Optimal Task Scheduling in Communication-Constrained Mobile Edge Computing Systems for Wireless Virtual Reality" }
null
null
null
null
true
null
3338
null
Default
null
null
null
{ "abstract": " Given a regular cardinal $\\kappa$ such that $\\kappa^{<\\kappa}=\\kappa$, we\nstudy a class of toposes with enough points, the $\\kappa$-separable toposes.\nThese are equivalent to sheaf toposes over a site with $\\kappa$-small limits\nthat has at most $\\kappa$ many objects and morphisms, the (basis for the)\ntopology being generated by at most $\\kappa$ many covering families, and that\nsatisfy a further exactness property $T$. We prove that these toposes have\nenough $\\kappa$-points, that is, points whose inverse image preserve all\n$\\kappa$-small limits. This generalizes the separable toposes of Makkai and\nReyes, that are a particular case when $\\kappa=\\omega$, when property $T$ is\ntrivially satisfied. This result is essentially a completeness theorem for a\ncertain infinitary logic that we call $\\kappa$-geometric, where conjunctions of\nless than $\\kappa$ formulas and existential quantification on less than\n$\\kappa$ many variables is allowed. We prove that $\\kappa$-geometric theories\nhave a $\\kappa$-classifying topos having property $T$, the universal property\nbeing that models of the theory in a Grothendieck topos with property $T$\ncorrespond to $\\kappa$-geometric morphisms (geometric morphisms the inverse\nimage of which preserves all $\\kappa$-small limits) into that topos. Moreover,\nwe prove that $\\kappa$-separable toposes occur as the $\\kappa$-classifying\ntoposes of $\\kappa$-geometric theories of at most $\\kappa$ many axioms in\ncanonical form, and that every such $\\kappa$-classifying topos is\n$\\kappa$-separable.\nFinally, we consider the case when $\\kappa$ is weakly\ncompact and study the $\\kappa$-classifying topos of a $\\kappa$-coherent theory\n(with at most $\\kappa$ many axioms), that is, a theory where only disjunction\nof less than $\\kappa$ formulas are allowed, obtaining a version of Deligne's\ntheorem for $\\kappa$-coherent toposes.\n", "title": "Infinitary generalizations of Deligne's completeness theorem" }
null
null
null
null
true
null
3339
null
Default
null
null
null
{ "abstract": " We describe a new training methodology for generative adversarial networks.\nThe key idea is to grow both the generator and discriminator progressively:\nstarting from a low resolution, we add new layers that model increasingly fine\ndetails as training progresses. This both speeds the training up and greatly\nstabilizes it, allowing us to produce images of unprecedented quality, e.g.,\nCelebA images at 1024^2. We also propose a simple way to increase the variation\nin generated images, and achieve a record inception score of 8.80 in\nunsupervised CIFAR10. Additionally, we describe several implementation details\nthat are important for discouraging unhealthy competition between the generator\nand discriminator. Finally, we suggest a new metric for evaluating GAN results,\nboth in terms of image quality and variation. As an additional contribution, we\nconstruct a higher-quality version of the CelebA dataset.\n", "title": "Progressive Growing of GANs for Improved Quality, Stability, and Variation" }
null
null
null
null
true
null
3340
null
Default
null
null
null
{ "abstract": " In all supersymmetric theories, gravitinos, with mass suppressed by the\nPlanck scale, are an obvious candidate for dark matter; but if gravitinos ever\nreached thermal equilibrium, such dark matter is apparently either too abundant\nor too hot, and is excluded. However, in theories with an axion, a saxion\ncondensate is generated during an early era of cosmological history and its\nlate decay dilutes dark matter. We show that such dilution allows previously\nthermalized gravitinos to account for the observed dark matter over very wide\nranges of gravitino mass, keV < $m_{3/2}$ < TeV, axion decay constant, $10^9$\nGeV < $f_a$ < $10^{16}$ GeV, and saxion mass, 10 MeV < $m_s$ < 100 TeV.\nConstraints on this parameter space are studied from BBN, supersymmetry\nbreaking, gravitino and axino production from freeze-in and saxion decay, and\nfrom axion production from both misalignment and parametric resonance\nmechanisms. Large allowed regions of $(m_{3/2}, f_a, m_s)$ remain, but differ\nfor DFSZ and KSVZ theories. Superpartner production at colliders may lead to\nevents with displaced vertices and kinks, and may contain saxions decaying to\n$(WW,ZZ,hh), gg, \\gamma \\gamma$ or a pair of Standard Model fermions. Freeze-in\nmay lead to a sub-dominant warm component of gravitino dark matter, and saxion\ndecay to axions may lead to dark radiation.\n", "title": "Saxion Cosmology for Thermalized Gravitino Dark Matter" }
null
null
[ "Physics" ]
null
true
null
3341
null
Validated
null
null
null
{ "abstract": " Automatic abusive language detection is a difficult but important task for\nonline social media. Our research explores a two-step approach of performing\nclassification on abusive language and then classifying into specific types and\ncompares it with a one-step approach of doing one multi-class classification for\ndetecting sexist and racist language. With a public English Twitter corpus of\n20 thousand tweets annotated for sexism and racism, our approach shows a\npromising performance of 0.827 F-measure by using HybridCNN in one step and\n0.824 F-measure by using logistic regression in two steps.\n", "title": "One-step and Two-step Classification for Abusive Language Detection on Twitter" }
null
null
null
null
true
null
3342
null
Default
null
null
null
{ "abstract": " We show how the massive data compression algorithm MOPED can be used to\nreduce, by orders of magnitude, the number of simulated datasets that are\nrequired to estimate the covariance matrix required for the analysis of\ngaussian-distributed data. This is relevant when the covariance matrix cannot\nbe calculated directly. The compression is especially valuable when the\ncovariance matrix varies with the model parameters. In this case, it may be\nprohibitively expensive to run enough simulations to estimate the full\ncovariance matrix throughout the parameter space. This compression may be\nparticularly valuable for the next-generation of weak lensing surveys, such as\nproposed for Euclid and LSST, for which the number of summary data (such as\nband power or shear correlation estimates) is very large, $\\sim 10^4$, due to\nthe large number of tomographic redshift bins that the data will be divided\ninto. In the pessimistic case where the covariance matrix is estimated\nseparately for all points in an MCMC analysis, this may require an unfeasible\n$10^9$ simulations. We show here that MOPED can reduce this number by a factor\nof 1000, or a factor of $\\sim 10^6$ if some regularity in the covariance matrix\nis assumed, reducing the number of simulations required to a manageable $10^3$,\nmaking an otherwise intractable analysis feasible.\n", "title": "Massive data compression for parameter-dependent covariance matrices" }
null
null
null
null
true
null
3343
null
Default
null
null
null
{ "abstract": " Electronic medical records (EMR) contain longitudinal information about\npatients that can be used to analyze outcomes. Typically, studies on EMR data\nhave worked with established variables that have already been acknowledged to\nbe associated with certain outcomes. However, EMR data may also contain\nhitherto unrecognized factors for risk association and prediction of outcomes\nfor a disease. In this paper, we present a scalable data-driven framework to\nanalyze EMR data corpus in a disease agnostic way that systematically uncovers\nimportant factors influencing outcomes in patients, as supported by data and\nwithout expert guidance. We validate the importance of such factors by using\nthe framework to predict for the relevant outcomes. Specifically, we analyze\nEMR data covering approximately 47 million unique patients to characterize\nrenal failure (RF) among type 2 diabetic (T2DM) patients. We propose a\nspecialized L1 regularized Cox Proportional Hazards (CoxPH) survival model to\nidentify the important factors from those available from patient encounter\nhistory. To validate the identified factors, we use a specialized generalized\nlinear model (GLM) to predict the probability of renal failure for individual\npatients within a specified time window. Our experiments indicate that the\nfactors identified via our data-driven method overlap with the patient\ncharacteristics recognized by experts. Our approach allows for scalable,\nrepeatable and efficient utilization of data available in EMRs, confirms prior\nmedical knowledge and can generate new hypothesis without expert supervision.\n", "title": "A Novel Data-Driven Framework for Risk Characterization and Prediction from Electronic Medical Records: A Case Study of Renal Failure" }
null
null
null
null
true
null
3344
null
Default
null
null
null
{ "abstract": " We examine knotted solutions, the simplest of which is the \"Hopfion\", from\nthe point of view of relations between electromagnetism and ideal fluid\ndynamics. A map between fluid dynamics and electromagnetism works for initial\nconditions or for linear perturbations, allowing us to find new knotted fluid\nsolutions. Knotted solutions are also found to be solutions of nonlinear\ngeneralizations of electromagnetism, and of quantum-corrected actions for\nelectromagnetism coupled to other modes. For null configurations,\nelectromagnetism can be described as a null pressureless fluid, for which we\ncan find solutions from the knotted solutions of electromagnetism. We also map\nthem to solutions of Euler's equations, obtained from a type of nonrelativistic\nreduction of the relativistic fluid equations.\n", "title": "Knotted solutions for linear and nonlinear theories: electromagnetism and fluid dynamics" }
null
null
null
null
true
null
3345
null
Default
null
null
null
{ "abstract": " We present a method that automatically evaluates emotional response from\nspontaneous facial activity recorded by a depth camera. The automatic\nevaluation of emotional response, or affect, is a fascinating challenge with\nmany applications, including human-computer interaction, media tagging and\nhuman affect prediction. Our approach in addressing this problem is based on\nthe inferred activity of facial muscles over time, as captured by a depth\ncamera recording an individual's facial activity. Our contribution is two-fold:\nFirst, we constructed a database of publicly available short video clips, which\nelicit a strong emotional response in a consistent manner across different\nindividuals. Each video was tagged by its characteristic emotional response\nalong 4 scales: \\emph{Valence, Arousal, Likability} and \\emph{Rewatch} (the\ndesire to watch again). The second contribution is a two-step prediction\nmethod, based on learning, which was trained and tested using this database of\ntagged video clips. Our method was able to successfully predict the\naforementioned 4-dimensional representation of affect, as well as to identify\nthe period of strongest emotional response in the viewing recordings, in a\nmanner that is blind to the video clip being watched, revealing a significantly\nhigh agreement between the recordings of independent viewers.\n", "title": "Implicit Media Tagging and Affect Prediction from video of spontaneous facial expressions, recorded with depth camera" }
null
null
null
null
true
null
3346
null
Default
null
null
null
{ "abstract": " Legal professionals worldwide are currently trying to get up-to-pace with the\nexplosive growth in legal document availability through digital means. This\ndrives a need for high efficiency Legal Information Retrieval (IR) and Question\nAnswering (QA) methods. The IR task in particular has a set of unique\nchallenges that invite the use of semantic motivated NLP techniques. In this\nwork, a two-stage method for Legal Information Retrieval is proposed, combining\nlexical statistics and distributional sentence representations in the context\nof Competition on Legal Information Extraction/Entailment (COLIEE). The\ncombination is done with the use of disambiguation rules, applied over the\nrankings obtained through n-gram statistics. After the ranking is done, its\nresults are evaluated for ambiguity, and disambiguation is done if a result is\ndecided to be unreliable for a given query. Competition and experimental\nresults indicate small gains in overall retrieval performance using the\nproposed approach. Additionally, an analysis of error and improvement cases is\npresented for a better understanding of the contributions.\n", "title": "Improving Legal Information Retrieval by Distributional Composition with Term Order Probabilities" }
null
null
null
null
true
null
3347
null
Default
null
null
null
{ "abstract": " Accurate modeling of light scattering from nanometer scale defects on Silicon\nwafers is critical for enabling increasingly shrinking semiconductor technology\nnodes of the future. Yet, such modeling of defect scattering remains unsolved\nsince existing modeling techniques fail to account for complex defect and wafer\ngeometries. Here, we present results of laser beam scattering from spherical\nand ellipsoidal particles located on the surface of a silicon wafer. A\ncommercially available electromagnetic field solver (HFSS) was deployed on a\nmultiprocessor cluster to obtain results with previously unknown accuracy, down\nto a light scattering intensity of -170 dB. We compute three dimensional\nscattering patterns of silicon nanospheres located on a semiconductor wafer for\nboth perpendicular and parallel polarization and show the effect of sphere size\non scattering. We further compute scattering patterns of nanometer scale\nellipsoidal particles having different orientation angles and unveil the\neffects of ellipsoidal orientation on scattering.\n", "title": "Modeling Study of Laser Beam Scattering by Defects on Semiconductor Wafers" }
null
null
null
null
true
null
3348
null
Default
null
null
null
{ "abstract": " In 2009, Joselli et al. introduced the Neighborhood Grid data structure for\nfast computation of neighborhood estimates in point clouds. Even though the\ndata structure has been used in several applications and shown to be\npractically relevant, it is theoretically not yet well understood. The purpose\nof this paper is to present a polynomial-time algorithm to build the data\nstructure. Furthermore, it is investigated whether the presented algorithm is\noptimal. This investigation leads to several combinatorial questions for which\npartial results are given.\n", "title": "Combinatorial and Asymptotical Results on the Neighborhood Grid" }
null
null
null
null
true
null
3349
null
Default
null
null
null
{ "abstract": " We investigate the structure of join tensors, which may be regarded as the\nmultivariable extension of lattice-theoretic join matrices. Explicit formulae\nfor a polyadic decomposition (i.e., a linear combination of rank-1 tensors) and\na tensor-train decomposition of join tensors are derived on general join\nsemilattices. We discuss conditions under which the obtained decompositions are\noptimal in rank, and examine numerically the storage complexity of the obtained\ndecompositions for a class of LCM tensors as a special case of join tensors. In\naddition, we investigate numerically the sharpness of a theoretical upper bound\non the tensor eigenvalues of LCM tensors.\n", "title": "On the structure of join tensors with applications to tensor eigenvalue problems" }
null
null
[ "Mathematics" ]
null
true
null
3350
null
Validated
null
null
null
{ "abstract": " Complex computer simulators are increasingly used across fields of science as\ngenerative models tying parameters of an underlying theory to experimental\nobservations. Inference in this setup is often difficult, as simulators rarely\nadmit a tractable density or likelihood function. We introduce Adversarial\nVariational Optimization (AVO), a likelihood-free inference algorithm for\nfitting a non-differentiable generative model incorporating ideas from\ngenerative adversarial networks, variational optimization and empirical Bayes.\nWe adapt the training procedure of generative adversarial networks by replacing\nthe differentiable generative network with a domain-specific simulator. We\nsolve the resulting non-differentiable minimax problem by minimizing\nvariational upper bounds of the two adversarial objectives. Effectively, the\nprocedure results in learning a proposal distribution over simulator\nparameters, such that the JS divergence between the marginal distribution of\nthe synthetic data and the empirical distribution of observed data is\nminimized. We evaluate and compare the method with simulators producing both\ndiscrete and continuous data.\n", "title": "Adversarial Variational Optimization of Non-Differentiable Simulators" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3351
null
Validated
null
null
null
{ "abstract": " We show that the equations of reinforcement learning and light transport\nsimulation are related integral equations. Based on this correspondence, a\nscheme to learn importance while sampling path space is derived. The new\napproach is demonstrated in a consistent light transport simulation algorithm\nthat uses reinforcement learning to progressively learn where light comes from.\nAs using this information for importance sampling includes information about\nvisibility, too, the number of light transport paths with zero contribution is\ndramatically reduced, resulting in much less noisy images within a fixed time\nbudget.\n", "title": "Learning Light Transport the Reinforced Way" }
null
null
[ "Computer Science" ]
null
true
null
3352
null
Validated
null
null
null
{ "abstract": " We consider the sparse high-dimensional linear regression model\n$Y=Xb+\\epsilon$ where $b$ is a sparse vector. For the Bayesian approach to this\nproblem, many authors have considered the behavior of the posterior\ndistribution when, in truth, $Y=X\\beta+\\epsilon$ for some given $\\beta$. There\nhave been numerous results about the rate at which the posterior distribution\nconcentrates around $\\beta$, but few results about the shape of that posterior\ndistribution. We propose a prior distribution for $b$ such that the marginal\nposterior distribution of an individual coordinate $b_i$ is asymptotically\nnormal centered around an asymptotically efficient estimator, under the truth.\nSuch a result gives Bayesian credible intervals that match with the confidence\nintervals obtained from an asymptotically efficient estimator for $b_i$. We\nalso discuss ways of obtaining such asymptotically efficient estimators on\nindividual coordinates. We compare the two-step procedure proposed by Zhang and\nZhang (2014) and a one-step modified penalization method.\n", "title": "Posterior Asymptotic Normality for an Individual Coordinate in High-dimensional Linear Regression" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
3353
null
Validated
null
null
null
{ "abstract": " Several fundamental problems that arise in optimization and computer science\ncan be cast as follows: Given vectors $v_1,\\ldots,v_m \\in \\mathbb{R}^d$ and a\nconstraint family ${\\cal B}\\subseteq 2^{[m]}$, find a set $S \\in \\cal{B}$ that\nmaximizes the squared volume of the simplex spanned by the vectors in $S$. A\nmotivating example is the data-summarization problem in machine learning where\none is given a collection of vectors that represent data such as documents or\nimages. The volume of a set of vectors is used as a measure of their diversity,\nand partition or matroid constraints over $[m]$ are imposed in order to ensure\nresource or fairness constraints. Recently, Nikolov and Singh presented a\nconvex program and showed how it can be used to estimate the value of the most\ndiverse set when ${\\cal B}$ corresponds to a partition matroid. This result was\nrecently extended to regular matroids in works of Straszak and Vishnoi, and\nAnari and Oveis Gharan. The question of whether these estimation algorithms can\nbe converted into the more useful approximation algorithms -- that also output\na set -- remained open.\nThe main contribution of this paper is to give the first approximation\nalgorithms for both partition and regular matroids. We present novel\nformulations for the subdeterminant maximization problem for these matroids;\nthis reduces them to the problem of finding a point that maximizes the absolute\nvalue of a nonconvex function over a Cartesian product of probability\nsimplices. The technical core of our results is a new anti-concentration\ninequality for dependent random variables that allows us to relate the optimal\nvalue of these nonconvex functions to their value at a random point.\nUnlike prior work on the constrained subdeterminant maximization problem, our proofs\ndo not rely on real-stability or convexity and could be of independent interest\nboth in algorithms and complexity.\n", "title": "Subdeterminant Maximization via Nonconvex Relaxations and Anti-concentration" }
null
null
[ "Computer Science", "Mathematics", "Statistics" ]
null
true
null
3354
null
Validated
null
null
null
{ "abstract": " This paper considers channel estimation and uplink achievable rate of the\ncoarsely quantized massive multiple-input multiple-output (MIMO) system with\nradio frequency (RF) impairments. We utilize the additive quantization noise model\n(AQNM) and the extended error vector magnitude (EEVM) model to analyze the impacts\nof low-resolution analog-to-digital converters (ADCs) and RF impairments,\nrespectively. We show that hardware impairments cause a nonzero floor on the\nchannel estimation error, which is contrary to the conventional case with ideal\nhardware. The maximal-ratio combining (MRC) technique is then used at the\nreceiver, and an approximate tractable expression for the uplink achievable\nrate is derived. The simulation results illustrate the appreciable tradeoffs\nbetween the ADCs' resolution and the RF impairments. The proposed studies\nsupport the feasibility of equipping economical coarse ADCs and economical\nimperfect RF components in practical massive MIMO systems.\n", "title": "On the Uplink Achievable Rate of Massive MIMO System With Low-Resolution ADC and RF Impairments" }
null
null
null
null
true
null
3355
null
Default
null
null
null
{ "abstract": " We investigate the relationship between several enumeration complexity\nclasses and focus in particular on problems having enumeration algorithms with\nincremental and polynomial delay (IncP and DelayP respectively). We show that,\nfor some algorithms, we can turn an average delay into a worst case delay\nwithout increasing the space complexity, suggesting that IncP_1 = DelayP even\nwith polynomially bounded space. We use the Exponential Time Hypothesis to\nexhibit a strict hierarchy inside IncP which gives the first separation of\nDelayP and IncP. Finally we relate the uniform generation of solutions to\nprobabilistic enumeration algorithms with polynomial delay and polynomial\nspace.\n", "title": "On The Complexity of Enumeration" }
null
null
null
null
true
null
3356
null
Default
null
null
null
{ "abstract": " In this note we investigate the $p$-degree function of elliptic curves over\nthe field $\\mathbb{Q}_p$ of $p$-adic numbers. The $p$-degree measures the least\ncomplexity of a non-zero $p$-torsion point on an elliptic curve. We prove some\nproperties of this function and compute it explicitly in some special cases.\n", "title": "On $p$-degree of elliptic curves" }
null
null
null
null
true
null
3357
null
Default
null
null
null
{ "abstract": " In \\cite{y1} Yin generalized the definition of $W$-graph ideal $E_J$ in\nweighted Coxeter groups and introduced the weighted Kazhdan-Lusztig polynomials\n$ \\left \\{ P_{x,y} \\mid x,y\\in E_J\\right \\}$, where $J$ is a subset of simple\ngenerators $S$. In this paper, we study the combinatorial formulas for those\npolynomials, which extend the results of Deodhar \\cite{v3} and Tagawa\n\\cite{h1}.\n", "title": "Combinatorial formulas for Kazhdan-Lusztig polynomials with respect to W-graph ideals" }
null
null
null
null
true
null
3358
null
Default
null
null
null
{ "abstract": " Mazur, Rubin, and Stein have recently formulated a series of conjectures\nabout statistical properties of modular symbols in order to understand central\nvalues of twists of elliptic curve $L$-functions. Two of these conjectures\nrelate to the asymptotic growth of the first and second moments of the modular\nsymbols. We prove these on average by using analytic properties of Eisenstein\nseries twisted by modular symbols. Another of their conjectures predicts the\nGaussian distribution of normalized modular symbols ordered according to the\nsize of the denominator of the cusps. We prove this conjecture in a refined\nversion that also allows restrictions on the location of the cusps.\n", "title": "Arithmetic statistics of modular symbols" }
null
null
null
null
true
null
3359
null
Default
null
null
null
{ "abstract": " There has been an increasing demand for formal methods in the design process\nof safety-critical synthetic genetic circuits. Probabilistic model checking\ntechniques have demonstrated significant potential in analyzing the intrinsic\nprobabilistic behaviors of complex genetic circuit designs. However, its\ninability to scale limits its applicability in practice. This chapter addresses\nthe scalability problem by presenting a state-space approximation method to\nremove unlikely states resulting in a reduced, finite state representation of\nthe infinite-state continuous-time Markov chain that is amenable to\nprobabilistic model checking. The proposed method is evaluated on a design of a\ngenetic toggle switch. Comparisons with another state-of-art tool demonstrates\nboth accuracy and efficiency of the presented method.\n", "title": "Approximation Techniques for Stochastic Analysis of Biological Systems" }
null
null
null
null
true
null
3360
null
Default
null
null
null
{ "abstract": " We study the effects of local perturbations on the dynamics of disordered\nfermionic systems in order to characterize time-irreversibility. We focus on\nthree different systems, the non-interacting Anderson and Aubry-André-Harper\n(AAH-) models, and the interacting spinless disordered t-V chain. First, we\nconsider the effect on the full many-body wave-functions by measuring the\nLoschmidt echo (LE). We show that in the extended/ergodic phase the LE decays\nexponentially fast with time, while in the localized phase the decay is\nalgebraic. We demonstrate that the exponent of the decay of the LE in the\nlocalized phase diverges proportionally to the single-particle localization\nlength as we approach the metal-insulator transition in the AAH model. Second,\nwe probe different phases of disordered systems by studying the time\nexpectation value of local observables evolved with two Hamiltonians that\ndiffer by a spatially local perturbation. Remarkably, we find that many-body\nlocalized systems could lose memory of the initial state in the long-time\nlimit, in contrast to the non-interacting localized phase where some memory is\nalways preserved.\n", "title": "Characterizing time-irreversibility in disordered fermionic systems by the effect of local perturbations" }
null
null
null
null
true
null
3361
null
Default
null
null
null
{ "abstract": " We study the indices of the geodesic central configurations on $\\H^2$. We\nthen show that central configurations are bounded away from the singularity\nset. With Morse's inequality, we get a lower bound for the number of central\nconfigurations on $\\H^2$.\n", "title": "A Lower Bound for the Number of Central Configurations on H^2" }
null
null
null
null
true
null
3362
null
Default
null
null
null
{ "abstract": " The controversy regarding the precise nature of the high-temperature phase of\n1T-TiSe2 lasts for decades. It has intensified in recent times when new\nevidence for the excitonic origin of the low-temperature charge-density wave\nstate started to unveil. Here we address the problem of the high-temperature\nphase through precise measurements and detailed analysis of the optical\nresponse of 1T-TiSe2 single crystals. The separate responses of electron and\nhole subsystems are identified and followed in temperature. We show that\nneither semiconductor nor semimetal pictures can be applied in their generic\nforms as the scattering for both types of carriers is in the vicinity of the\nIoffe-Regel limit with decay rates being comparable to or larger than the\noffsets of band extrema. The nonmetallic temperature dependence of transport\nproperties comes from the anomalous temperature dependence of scattering rates.\nNear the transition temperature the heavy electrons and the light holes\ncontribute equally to the conductivity. This surprising coincidence is regarded\nas the consequence of dominant intervalley scattering that precedes the\ntransition. The low-frequency peak in the optical spectra is identified and\nattributed to the critical softening of the L-point collective mode.\n", "title": "Scattering dominated high-temperature phase of 1T-TiSe2: an optical conductivity study" }
null
null
null
null
true
null
3363
null
Default
null
null
null
{ "abstract": " We report Raman sideband cooling of a single sodium atom to its\nthree-dimensional motional ground state in an optical tweezer. Despite a large\nLamb-Dicke parameter, high initial temperature, and large differential light\nshifts between the excited state and the ground state, we achieve a ground\nstate population of $93.5(7)$% after $53$ ms of cooling. Our technique includes\naddressing high-order sidebands, where several motional quanta are removed by a\nsingle laser pulse, and fast modulation of the optical tweezer intensity. We\ndemonstrate that Raman sideband cooling to the 3D motional ground state is\npossible, even without tight confinement and low initial temperature.\n", "title": "Motional Ground State Cooling Outside the Lamb-Dicke Regime" }
null
null
null
null
true
null
3364
null
Default
null
null
null
{ "abstract": " Meaningful climate predictions must be accompanied by their corresponding\nrange of uncertainty. Quantifying the uncertainties is non-trivial, and\ndifferent methods have been suggested and used in the past. Here, we propose a\nmethod that does not rely on any assumptions regarding the distribution of the\nensemble member predictions. The method is tested using the CMIP5 1981-2010\ndecadal predictions and is shown to perform better than two other methods\nconsidered here. The improved estimate of the uncertainties is of great\nimportance for both practical use and for better assessing the significance of\nthe effects seen in theoretical studies.\n", "title": "Quantifying the uncertainties in an ensemble of decadal climate predictions" }
null
null
null
null
true
null
3365
null
Default
null
null
null
{ "abstract": " Situational awareness in vehicular networks could be substantially improved\nutilizing reliable trajectory prediction methods. More precise situational\nawareness, in turn, results in notably better performance of critical safety\napplications, such as Forward Collision Warning (FCW), as well as comfort\napplications like Cooperative Adaptive Cruise Control (CACC). Therefore,\nvehicle trajectory prediction problem needs to be deeply investigated in order\nto come up with an end to end framework with enough precision required by the\nsafety applications' controllers. This problem has been tackled in the\nliterature using different methods. However, machine learning, which is a\npromising and emerging field with remarkable potential for time series\nprediction, has not been explored enough for this purpose. In this paper, a\ntwo-layer neural network-based system is developed which predicts the future\nvalues of vehicle parameters, such as velocity, acceleration, and yaw rate, in\nthe first layer and then predicts the two-dimensional, i.e. longitudinal and\nlateral, trajectory points based on the first layer's outputs. The performance\nof the proposed framework has been evaluated in realistic cut-in scenarios from\nSafety Pilot Model Deployment (SPMD) dataset and the results show a noticeable\nimprovement in the prediction accuracy in comparison with the kinematics model\nwhich is the dominant employed model by the automotive industry. Both ideal and\nnonideal communication circumstances have been investigated for our system\nevaluation. For non-ideal case, an estimation step is included in the framework\nbefore the parameter prediction block to handle the drawbacks of packet drops\nor sensor failures and reconstruct the time series of vehicle parameters at a\ndesirable frequency.\n", "title": "A Learning-Based Framework for Two-Dimensional Vehicle Maneuver Prediction over V2V Networks" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3366
null
Validated
null
null
null
{ "abstract": " Spatial-temporal prediction is a fundamental problem for constructing smart\ncity, which is useful for tasks such as traffic control, taxi dispatching, and\nenvironmental policy making. Due to data collection mechanism, it is common to\nsee data collection with unbalanced spatial distributions. For example, some\ncities may release taxi data for multiple years while others only release a few\ndays of data; some regions may have constant water quality data monitored by\nsensors whereas some regions only have a small collection of water samples. In\nthis paper, we tackle the problem of spatial-temporal prediction for the cities\nwith only a short period of data collection. We aim to utilize the long-period\ndata from other cities via transfer learning. Different from previous studies\nthat transfer knowledge from one single source city to a target city, we are\nthe first to leverage information from multiple cities to increase the\nstability of transfer. Specifically, our proposed model is designed as a\nspatial-temporal network with a meta-learning paradigm. The meta-learning\nparadigm learns a well-generalized initialization of the spatial-temporal\nnetwork, which can be effectively adapted to target cities. In addition, a\npattern-based spatial-temporal memory is designed to distill long-term temporal\ninformation (i.e., periodicity). We conduct extensive experiments on two tasks:\ntraffic (taxi and bike) prediction and water quality prediction. The\nexperiments demonstrate the effectiveness of our proposed model over several\ncompetitive baseline models.\n", "title": "Learning from Multiple Cities: A Meta-Learning Approach for Spatial-Temporal Prediction" }
null
null
null
null
true
null
3367
null
Default
null
null
null
{ "abstract": " We prove Runge-type theorems and universality results for locally univalent\nholomorphic and meromorphic functions. Refining a result of M. Heins, we also\nshow that there is a universal bounded locally univalent function on the unit\ndisk. These results are used to prove that on any hyperbolic simply connected\nplane domain there exist universal conformal metrics with prescribed constant\ncurvature.\n", "title": "Universal locally univalent functions and universal conformal metrics with constant curvature" }
null
null
null
null
true
null
3368
null
Default
null
null
null
{ "abstract": " In recent times, the use of separable convolutions in deep convolutional\nneural network architectures has been explored. Several researchers, most\nnotably (Chollet, 2016) and (Ghosh, 2017) have used separable convolutions in\ntheir deep architectures and have demonstrated state of the art or close to\nstate of the art performance. However, the underlying mechanism of action of\nseparable convolutions are still not fully understood. Although their\nmathematical definition is well understood as a depthwise convolution followed\nby a pointwise convolution, deeper interpretations such as the extreme\nInception hypothesis (Chollet, 2016) have failed to provide a thorough\nexplanation of their efficacy. In this paper, we propose a hybrid\ninterpretation that we believe is a better model for explaining the efficacy of\nseparable convolutions.\n", "title": "Towards a New Interpretation of Separable Convolutions" }
null
null
null
null
true
null
3369
null
Default
null
null
null
{ "abstract": " We present recent improvements in the simulation of regolith sampling\nprocesses in microgravity using the numerical particle method smooth particle\nhydrodynamics (SPH). We use an elastic-plastic soil constitutive model for\nlarge deformation and failure flows for dynamical behaviour of regolith. In the\ncontext of projected small body (asteroid or small moons) sample return\nmissions, we investigate the efficiency and feasibility of a particular\nmaterial sampling method: Brushes sweep material from the asteroid's surface\ninto a collecting tray. We analyze the influence of different material\nparameters of regolith such as cohesion and angle of internal friction on the\nsampling rate. Furthermore, we study the sampling process in two environments\nby varying the surface gravity (Earth's and Phobos') and we apply different\nrotation rates for the brushes. We find good agreement of our sampling\nsimulations on Earth with experiments and provide estimations for the influence\nof the material properties on the collecting rate.\n", "title": "Numerical Simulations of Regolith Sampling Processes" }
null
null
null
null
true
null
3370
null
Default
null
null
null
{ "abstract": " In recent years, social bots have been using increasingly more sophisticated,\nchallenging detection strategies. While many approaches and features have been\nproposed, social bots evade detection and interact much like humans making it\ndifficult to distinguish real human accounts from bot accounts. For detection\nsystems, various features under the broader categories of account profile,\ntweet content, network and temporal pattern have been utilised. The use of\ntweet content features is limited to analysis of basic terms such as URLs,\nhashtags, named entities and sentiment. Given a set of tweet contents with no\nobvious pattern can we distinguish contents produced by social bots from that\nof humans? We aim to answer this question by analysing the lexical richness of\ntweets produced by the respective accounts using large collections of different\ndatasets. Our results show a clear margin between the two classes in lexical\ndiversity, lexical sophistication and distribution of emoticons. We found that\nthe proposed lexical features significantly improve the performance of\nclassifying both account types. These features are useful for training a\nstandard machine learning classifier for effective detection of social bot\naccounts. A new dataset is made freely available for further exploration.\n", "title": "Lexical analysis of automated accounts on Twitter" }
null
null
null
null
true
null
3371
null
Default
null
null
null
{ "abstract": " A set of Smoothed Particle Hydrodynamics simulations of the influence of\nphotoionising radiation and stellar winds on a series of 10$^{4}$M$_{\odot}$\nturbulent molecular clouds with initial virial ratios of 0.7, 1.1, 1.5, 1.9 and\n2.3 and initial mean densities of 136, 1135 and 9096\,cm$^{-3}$ are presented.\nReductions in star formation efficiency rates are found to be modest, in the\nrange $30\%-50\%$ and to not vary greatly across the parameter space. In no\ncase was star formation entirely terminated over the $\approx3$\,Myr duration\nof the simulations. The fractions of material unbound by feedback are in the\nrange $20-60\%$, clouds with the lowest escape velocities being the most\nstrongly affected. Leakage of ionised gas leads to the HII regions rapidly\nbecoming underpressured. The destructive effects of ionisation are thus largely\nnot due to thermally--driven expansion of the HII regions, but to momentum\ntransfer by photoevaporation of fresh material. Our simulations have similar\nglobal ionisation rates and we show that the effects of feedback upon them can\nbe adequately modelled as a steady injection of momentum, resembling a\nmomentum--conserving wind.\n", "title": "The effect of the virial state of molecular clouds on the influence of feedback from massive stars" }
null
null
null
null
true
null
3372
null
Default
null
null
null
{ "abstract": " A number of formal methods exist for capturing stimulus-response requirements\nin a declarative form. Someone yet needs to translate the resulting declarative\nstatements into imperative programs. The present article describes a method for\nspecification and verification of stimulus-response requirements in the form of\nimperative program routines with conditionals and assertions. A program prover\nthen checks a candidate program directly against the stated requirements. The\narticle illustrates the approach by applying it to an ASM model of the Landing\nGear System, a widely used realistic example proposed for evaluating\nspecification and verification techniques.\n", "title": "A contract-based method to specify stimulus-response requirements" }
null
null
null
null
true
null
3373
null
Default
null
null
null
{ "abstract": " We propose and analyze a new estimator of the covariance matrix that admits\nstrong theoretical guarantees under weak assumptions on the underlying\ndistribution, such as existence of moments of only low order. While estimation\nof covariance matrices corresponding to sub-Gaussian distributions is\nwell-understood, much less is known in the case of heavy-tailed data. As K.\nBalasubramanian and M. Yuan write, \"data from real-world experiments oftentimes\ntend to be corrupted with outliers and/or exhibit heavy tails. In such cases,\nit is not clear that those covariance matrix estimators .. remain optimal\" and\n\"..what are the other possible strategies to deal with heavy tailed\ndistributions warrant further studies.\" We make a step towards answering this\nquestion and prove tight deviation inequalities for the proposed estimator that\ndepend only on the parameters controlling the \"intrinsic dimension\" associated\nto the covariance matrix (as opposed to the dimension of the ambient space); in\nparticular, our results are applicable in the case of high-dimensional\nobservations.\n", "title": "Estimation of the covariance structure of heavy-tailed distributions" }
null
null
null
null
true
null
3374
null
Default
null
null
null
{ "abstract": " We propose a novel computational strategy for de novo design of molecules\nwith desired properties termed ReLeaSE (Reinforcement Learning for Structural\nEvolution). Based on deep and reinforcement learning approaches, ReLeaSE\nintegrates two deep neural networks - generative and predictive - that are\ntrained separately but employed jointly to generate novel targeted chemical\nlibraries. ReLeaSE employs simple representation of molecules by their SMILES\nstrings only. Generative models are trained with stack-augmented memory network\nto produce chemically feasible SMILES strings, and predictive models are\nderived to forecast the desired properties of the de novo generated compounds.\nIn the first phase of the method, generative and predictive models are trained\nseparately with a supervised learning algorithm. In the second phase, both\nmodels are trained jointly with the reinforcement learning approach to bias the\ngeneration of new chemical structures towards those with the desired physical\nand/or biological properties. In the proof-of-concept study, we have employed\nthe ReLeaSE method to design chemical libraries with a bias toward structural\ncomplexity or biased toward compounds with either maximal, minimal, or specific\nrange of physical properties such as melting point or hydrophobicity, as well\nas to develop novel putative inhibitors of JAK2. The approach proposed herein\ncan find a general use for generating targeted chemical libraries of novel\ncompounds optimized for either a single desired property or multiple\nproperties.\n", "title": "Deep Reinforcement Learning for De-Novo Drug Design" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3375
null
Validated
null
null
null
{ "abstract": " We consider the two-armed bandit problem as applied to data processing if\nthere are two alternative processing methods available with different a priori\nunknown efficiencies. One should determine the most effective method and\nprovide its predominant application. Gaussian two-armed bandit describes the\nbatch, and possibly parallel, processing when the same methods are applied to\nsufficiently large packets of data and accumulated incomes are used for the\ncontrol. If the number of packets is large enough then such control does not\ndeteriorate the control performance, i.e. does not increase the minimax risk.\nFor example, in case of 50 packets the minimax risk is about 2% larger than\nthat one corresponding to one-by-one optimal processing. However, this is\ncompletely true only for methods with close efficiencies because otherwise\nthere may be significant expected losses at the initial stage of control when\nboth actions are applied turn-by-turn. To avoid significant losses at the\ninitial stage of control one should take initial packets of data having smaller\nsizes.\n", "title": "Batch Data Processing and Gaussian Two-Armed Bandit" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
3376
null
Validated
null
null
null
{ "abstract": " Accurate estimates of the under-5 mortality rate (U5MR) in a developing world\ncontext are a key barometer of the health of a nation. This paper describes new\nmodels to analyze survey data on mortality in this context. We are interested\nin both spatial and temporal description, that is, wishing to estimate U5MR\nacross regions and years, and to investigate the association between the U5MR\nand spatially-varying covariate surfaces. We illustrate the methodology by\nproducing yearly estimates for subnational areas in Kenya over the period 1980\n- 2014 using data from demographic health surveys (DHS). We use a binomial\nlikelihood with fixed effects for the urban/rural stratification to account for\nthe complex survey design. We carry out smoothing using Bayesian hierarchical\nmodels with continuous spatial and temporally discrete components. A key\ncomponent of the model is an offset to adjust for bias due to the effects of\nHIV epidemics. Substantively, there has been a sharp decline in U5MR in the\nperiod 1980 - 2014, but large variability in estimated subnational rates\nremains. A priority for future research is understanding this variability.\nTemperature, precipitation and a measure of malaria infection prevalence were\ncandidates for inclusion in the covariate model.\n", "title": "Estimating Under Five Mortality in Space and Time in a Developing World Context" }
null
null
null
null
true
null
3377
null
Default
null
null
null
{ "abstract": " Strong external difference family (SEDF) and its generalizations GSEDF,\nBGSEDF in a finite abelian group $G$ are combinatorial designs raised by\nPaterson and Stinson [7] in 2016 and have applications in communication theory\nto construct optimal strong algebraic manipulation detection codes. In this\npaper we firstly present some general constructions of these combinatorial\ndesigns by using difference sets and partial difference sets in $G$. Then, as\napplications of the general constructions, we construct series of SEDF, GSEDF\nand BGSEDF in finite fields by using cyclotomic classes.\n", "title": "Cyclotomic Construction of Strong External Difference Families in Finite Fields" }
null
null
null
null
true
null
3378
null
Default
null
null
null
{ "abstract": " Using a Multiple Scattering Theory algorithm, we investigate numerically the\ntransmission of ultrasonic waves through a disordered locally resonant\nmetamaterial containing only monopolar resonators. By comparing the cases of a\nperfectly random medium with its pair correlated counterpart, we show that the\nintroduction of short range correlation can substantially impact the effective\nparameters of the sample. We report, notably, the opening of an acoustic\ntransparency window in the region of the hybridization band gap. Interestingly,\nthe transparency window is found to be associated with negative values of both\neffective compressibility and density. Despite this feature being unexpected\nfor a disordered medium of monopolar resonators, we show that it can be fully\ndescribed analytically and that it gives rise to negative refraction of waves.\n", "title": "Acoustic double negativity induced by position correlations within a disordered set of monopolar resonators" }
null
null
null
null
true
null
3379
null
Default
null
null
null
{ "abstract": " Typically, AI researchers and roboticists try to realize intelligent behavior\nin machines by tuning parameters of a predefined structure (body plan and/or\nneural network architecture) using evolutionary or learning algorithms. Another\nbut not unrelated longstanding property of these systems is their brittleness\nto slight aberrations, as highlighted by the growing deep learning literature\non adversarial examples. Here we show robustness can be achieved by evolving\nthe geometry of soft robots, their control systems, and how their material\nproperties develop in response to one particular interoceptive stimulus\n(engineering stress) during their lifetimes. By doing so we realized robots\nthat were equally fit but more robust to extreme material defects (such as\nmight occur during fabrication or by damage thereafter) than robots that did\nnot develop during their lifetimes, or developed in response to a different\ninteroceptive stimulus (pressure). This suggests that the interplay between\nchanges in the containing systems of agents (body plan and/or neural\narchitecture) at different temporal scales (evolutionary and developmental)\nalong different modalities (geometry, material properties, synaptic weights)\nand in response to different signals (interoceptive and external perception)\nall dictate those agents' abilities to evolve or learn capable and robust\nstrategies.\n", "title": "Interoceptive robustness through environment-mediated morphological development" }
null
null
null
null
true
null
3380
null
Default
null
null
null
{ "abstract": " We consider Schrödinger equation with a non-degenerate metric on the\nEuclidean space. We study local in time Strichartz estimates for the\nSchrödinger equation without loss of derivatives including the endpoint case.\nIn contrast to the Riemannian metric case, we need the additional assumptions\nfor the well-posedness of our Schrödinger equation and for proving Strichartz\nestimates without loss.\n", "title": "Strichartz estimates for non-degenerate Schrödinger equations" }
null
null
null
null
true
null
3381
null
Default
null
null
null
{ "abstract": " Let $F_n$ denote the $n^{th}$ term of the Fibonacci sequence. In this paper,\nwe investigate the Diophantine equation $F_1^p+2F_2^p+\\cdots+kF_{k}^p=F_{n}^q$\nin the positive integers $k$ and $n$, where $p$ and $q$ are given positive\nintegers. A complete solution is given if the exponents are included in the set\n$\\{1,2\\}$. Based on the specific cases we could solve, and a computer search\nwith $p,q,k\\le100$ we conjecture that beside the trivial solutions only\n$F_8=F_1+2F_2+3F_3+4F_4$, $F_4^2=F_1+2F_2+3F_3$, and\n$F_4^3=F_1^3+2F_2^3+3F_3^3$ satisfy the title equation.\n", "title": "On the Diophantine equation $\\sum_{j=1}^kjF_j^p=F_n^q$" }
null
null
null
null
true
null
3382
null
Default
null
null
null
{ "abstract": " Social dilemmas have been regarded as the essence of evolution game theory,\nin which the prisoner's dilemma game is the most famous metaphor for the\nproblem of cooperation. Recent findings revealed people's behavior violated the\nSure Thing Principle in such games. Classic probability methodologies have\ndifficulty explaining the underlying mechanisms of people's behavior. In this\npaper, a novel quantum-like Bayesian Network was proposed to accommodate the\nparadoxical phenomenon. The special network can take interference into\nconsideration, which is likely to be an efficient way to describe the\nunderlying mechanism. With the assistance of belief entropy, named as Deng\nentropy, the paper proposes Belief Distance to render the model practical.\nTested with empirical data, the proposed model is proved to be predictable and\neffective.\n", "title": "Uncertainty measurement with belief entropy on interference effect in Quantum-Like Bayesian Networks" }
null
null
null
null
true
null
3383
null
Default
null
null
null
{ "abstract": " We study the diffusion (or heat) equation on a finite 1-dimensional spatial\ndomain, but we replace one of the boundary conditions with a \"nonlocal\ncondition\", through which we specify a weighted average of the solution over\nthe spatial interval. We provide conditions on the regularity of both the data\nand weight for the problem to admit a unique solution, and also provide a\nsolution representation in terms of contour integrals. The solution and\nwell-posedness results rely upon an extension of the Fokas (or unified)\ntransform method to initial-nonlocal value problems for linear equations; the\nnecessary extensions are described in detail. Despite arising naturally from\nthe Fokas transform method, the uniqueness argument appears to be novel even\nfor initial-boundary value problems.\n", "title": "The diffusion equation with nonlocal data" }
null
null
null
null
true
null
3384
null
Default
null
null
null
{ "abstract": " In this work, we perform an empirical comparison among the CTC,\nRNN-Transducer, and attention-based Seq2Seq models for end-to-end speech\nrecognition. We show that, without any language model, Seq2Seq and\nRNN-Transducer models both outperform the best reported CTC models with a\nlanguage model, on the popular Hub5'00 benchmark. On our internal diverse\ndataset, these trends continue - RNN-Transducer models rescored with a language\nmodel after beam search outperform our best CTC models. These results simplify\nthe speech recognition pipeline so that decoding can now be expressed purely as\nneural network operations. We also study how the choice of encoder architecture\naffects the performance of the three models - when all encoder layers are\nforward only, and when encoders downsample the input representation\naggressively.\n", "title": "Exploring Neural Transducers for End-to-End Speech Recognition" }
null
null
null
null
true
null
3385
null
Default
null
null
null
{ "abstract": " By combining Shannon's cryptography model with an assumption to the lower\nbound of adversaries' uncertainty to the queried dataset, we develop a secure\nBayesian inference-based privacy model and then in some extent answer Dwork et\nal.'s question [1]: \"why Bayesian risk factors are the right measure for\nprivacy loss\".\nThis model ensures an adversary can only obtain little information of each\nindividual from the model's output if the adversary's uncertainty to the\nqueried dataset is larger than the lower bound. Importantly, the assumption to\nthe lower bound almost always holds, especially for big datasets. Furthermore,\nthis model is flexible enough to balance privacy and utility: by using four\nparameters to characterize the assumption, there are many approaches to balance\nprivacy and utility and to discuss the group privacy and the composition\nprivacy properties of this model.\n", "title": "Information Theory of Data Privacy" }
null
null
[ "Computer Science" ]
null
true
null
3386
null
Validated
null
null
null
{ "abstract": " Generative models, either by simple clustering algorithms or deep neural\nnetwork architecture, have been developed as a probabilistic estimation method\nfor dimension reduction or to model the underlying properties of data\nstructures. Although their apparent use has largely been limited to image\nrecognition and classification, generative machine learning algorithms can be a\npowerful tool for travel behaviour research. In this paper, we examine the\ngenerative machine learning approach for analyzing multiple discrete-continuous\n(MDC) travel behaviour data to understand the underlying heterogeneity and\ncorrelation, increasing the representational power of such travel behaviour\nmodels. We show that generative models are conceptually similar to choice\nselection behaviour process through information entropy and variational\nBayesian inference. Specifically, we consider a restricted Boltzmann machine\n(RBM) based algorithm with multiple discrete-continuous layer, formulated as a\nvariational Bayesian inference optimization problem. We systematically describe\nthe proposed machine learning algorithm and develop a process of analyzing\ntravel behaviour data from a generative learning perspective. We show parameter\nstability from model analysis and simulation tests on an open dataset with\nmultiple discrete-continuous dimensions and a size of 293,330 observations. For\ninterpretability, we derive analytical methods for conditional probabilities as\nwell as elasticities. Our results indicate that latent variables in generative\nmodels can accurately represent joint distribution consistently w.r.t multiple\ndiscrete-continuous variables. Lastly, we show that our model can generate\nstatistically similar data distributions for travel forecasting and prediction.\n", "title": "A combined entropy and utility based generative model for large scale multiple discrete-continuous travel behaviour data" }
null
null
null
null
true
null
3387
null
Default
null
null
null
{ "abstract": " We study a spectral initialization method that serves a key role in recent\nwork on estimating signals in nonconvex settings. Previous analysis of this\nmethod focuses on the phase retrieval problem and provides only performance\nbounds. In this paper, we consider arbitrary generalized linear sensing models\nand present a precise asymptotic characterization of the performance of the\nmethod in the high-dimensional limit. Our analysis also reveals a phase\ntransition phenomenon that depends on the ratio between the number of samples\nand the signal dimension. When the ratio is below a minimum threshold, the\nestimates given by the spectral method are no better than random guesses drawn\nfrom a uniform distribution on the hypersphere, thus carrying no information;\nabove a maximum threshold, the estimates become increasingly aligned with the\ntarget signal. The computational complexity of the method, as measured by the\nspectral gap, is also markedly different in the two phases. Worked examples and\nnumerical results are provided to illustrate and verify the analytical\npredictions. In particular, simulations show that our asymptotic formulas\nprovide accurate predictions for the actual performance of the spectral method\neven at moderate signal dimensions.\n", "title": "Phase Transitions of Spectral Initialization for High-Dimensional Nonconvex Estimation" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
3388
null
Validated
null
null
null
{ "abstract": " Generative Adversarial Networks (GANs) have become a popular method to learn\na probability model from data. In this paper, we aim to provide an\nunderstanding of some of the basic issues surrounding GANs including their\nformulation, generalization and stability on a simple benchmark where the data\nhas a high-dimensional Gaussian distribution. Even in this simple benchmark,\nthe GAN problem has not been well-understood as we observe that existing\nstate-of-the-art GAN architectures may fail to learn a proper generative\ndistribution owing to (1) stability issues (i.e., convergence to bad local\nsolutions or not converging at all), (2) approximation issues (i.e., having\nimproper global GAN optimizers caused by inappropriate GAN's loss functions),\nand (3) generalizability issues (i.e., requiring large number of samples for\ntraining). In this setup, we propose a GAN architecture which recovers the\nmaximum-likelihood solution and demonstrates fast generalization. Moreover, we\nanalyze global stability of different computational approaches for the proposed\nGAN optimization and highlight their pros and cons. Finally, we outline an\nextension of our model-based approach to design GANs in more complex setups\nthan the considered Gaussian benchmark.\n", "title": "Understanding GANs: the LQG Setting" }
null
null
null
null
true
null
3389
null
Default
null
null
null
{ "abstract": " {The numerical approximation of the solution of the Fokker--Planck equation\nis a challenging problem that has been extensively investigated starting from\nthe pioneering paper of Chang and Cooper in 1970. We revisit this problem at\nthe light of the approximation of the solution to the heat equation proposed by\nRosenau in 1992. Further, by means of the same idea, we address the problem of\na consistent approximation to higher-order linear diffusion equations.\n", "title": "A Rosenau-type approach to the approximation of the linear Fokker--Planck equation" }
null
null
null
null
true
null
3390
null
Default
null
null
null
{ "abstract": " In interactive information retrieval, researchers consider the user behavior\ntowards systems and search tasks in order to adapt search results by analyzing\ntheir past interactions. In this paper, we analyze the user behavior towards\nMarcia Bates' search stratagems such as 'footnote chasing' and 'citation\nsearch' in an academic search engine. We performed a preliminary analysis of\ntheir frequency and stage of use in the social sciences search engine sowiport.\nIn addition, we explored the impact of these stratagems on the whole search\nprocess performance. We can conclude that the appearance of these two search\nfeatures in real retrieval sessions lead to an improvement of the precision in\nterms of positive interactions with 16% when using footnote chasing and 17% for\nthe citation search stratagem.\n", "title": "Analysis of Footnote Chasing and Citation Searching in an Academic Search Engine" }
null
null
null
null
true
null
3391
null
Default
null
null
null
{ "abstract": " The excited states of polyatomic systems are rather complex, and often\nexhibit meta-stable dynamical behaviors. Static analysis of reaction pathway\noften fails to sufficiently characterize excited state motions due to their\nhighly non-equilibrium nature. Here, we proposed a time series guided\nclustering algorithm to generate most relevant meta-stable patterns directly\nfrom ab initio dynamic trajectories. Based on the knowledge of these\nmeta-stable patterns, we suggested an interpolation scheme with only a concrete\nand finite set of known patterns to accurately predict the ground and excited\nstate properties of the entire dynamics trajectories. As illustrated with the\nexample of sinapic acids, the estimation error for both ground and excited\nstate is very close, which indicates one could predict the ground and excited\nstate molecular properties with similar accuracy. These results may provide us\nsome insights to construct an excited state force field with compatible energy\nterms as traditional ones.\n", "title": "Direct Mapping Hidden Excited State Interaction Patterns from ab initio Dynamics and Its Implications on Force Field Development" }
null
null
null
null
true
null
3392
null
Default
null
null
null
{ "abstract": " This work offers a design of a video surveillance system based on a soft\nbiometric -- gait identification from MoCap data. The main focus is on two\nsubstantial issues of the video surveillance scenario: (1) the walkers do not\ncooperate in providing learning data to establish their identities and (2) the\ndata are often noisy or incomplete. We show that only a few examples of human\ngait cycles are required to learn a projection of raw MoCap data onto a\nlow-dimensional sub-space where the identities are well separable. Latent\nfeatures learned by Maximum Margin Criterion (MMC) method discriminate better\nthan any collection of geometric features. The MMC method is also highly robust\nto noisy data and works properly even with only a fraction of joints tracked.\nThe overall workflow of the design is directly applicable for a day-to-day\noperation based on the available MoCap technology and algorithms for gait\nanalysis. In the concept we introduce, a walker's identity is represented by a\ncluster of gait data collected at their incidents within the surveillance\nsystem: They are how they walk.\n", "title": "You Are How You Walk: Uncooperative MoCap Gait Identification for Video Surveillance with Incomplete and Noisy Data" }
null
null
[ "Computer Science" ]
null
true
null
3393
null
Validated
null
null
null
{ "abstract": " Recent work by Cohen \\emph{et al.} has achieved state-of-the-art results for\nlearning spherical images in a rotation invariant way by using ideas from group\nrepresentation theory and noncommutative harmonic analysis. In this paper we\npropose a generalization of this work that generally exhibits improved\nperformace, but from an implementation point of view is actually simpler. An\nunusual feature of the proposed architecture is that it uses the\nClebsch--Gordan transform as its only source of nonlinearity, thus avoiding\nrepeated forward and backward Fourier transforms. The underlying ideas of the\npaper generalize to constructing neural networks that are invariant to the\naction of other compact groups.\n", "title": "Clebsch-Gordan Nets: a Fully Fourier Space Spherical Convolutional Neural Network" }
null
null
[ "Statistics" ]
null
true
null
3394
null
Validated
null
null
null
{ "abstract": " Catastrophic events, though rare, do occur and when they occur, they have\ndevastating effects. It is, therefore, of utmost importance to understand the\ncomplexity of the underlying dynamics and signatures of catastrophic events,\nsuch as market crashes. For deeper understanding, we choose the US and Japanese\nmarkets from 1985 onward, and study the evolution of the cross-correlation\nstructures of stock return matrices and their eigenspectra over different short\ntime-intervals or \"epochs\". A slight non-linear distortion is applied to the\ncorrelation matrix computed for any epoch, leading to the emerging spectrum of\neigenvalues. The statistical properties of the emerging spectrum display: (i)\nthe shape of the emerging spectrum reflects the market instability, (ii) the\nsmallest eigenvalue may be able to statistically distinguish the nature of a\nmarket turbulence or crisis -- internal instability or external shock, and\n(iii) the time-lagged smallest eigenvalue has a statistically significant\ncorrelation with the mean market cross-correlation. The smallest eigenvalue\nseems to indicate that the financial market has become more turbulent in a\nsimilar way as the mean does. Yet we show features of the smallest eigenvalue\nof the emerging spectrum that distinguish different types of market\ninstabilities related to internal or external causes. Based on the paradigmatic\ncharacter of financial time series for other complex systems, the capacity of\nthe emerging spectrum to understand the nature of instability may be a new\nfeature, which can be broadly applied.\n", "title": "Characterization of catastrophic instabilities: Market crashes as paradigm" }
null
null
null
null
true
null
3395
null
Default
null
null
null
{ "abstract": " The quantum anomalous Hall (QAH) phase is a novel topological state of matter\ncharacterized by a nonzero quantized Hall conductivity without an external\nmagnetic field. The realizations of QAH effect, however, are experimentally\nchallengeable. Based on ab initio calculations, here we propose an intrinsic\nQAH phase in DCA Kagome lattice. The nontrivial topology in Kagome bands are\nconfirmed by the nonzero chern number, quantized Hall conductivity, and gapless\nchiral edge states of Mn-DCA lattice. A tight-binding (TB) model is further\nconstructed to clarify the origin of QAH effect. Furthermore, its Curie\ntemperature, estimated to be ~ 253 K using Monte-Carlo simulation, is\ncomparable with room temperature and higher than most of two-dimensional\nferromagnetic thin films. Our findings present a reliable material platform for\nthe observation of QAH effect in covalent-organic frameworks.\n", "title": "Discovery of Intrinsic Quantum Anomalous Hall Effect in Organic Mn-DCA Lattice" }
null
null
[ "Physics" ]
null
true
null
3396
null
Validated
null
null
null
{ "abstract": " The automatic analysis of ultrasound sequences can substantially improve the\nefficiency of clinical diagnosis. In this work we present our attempt to\nautomate the challenging task of measuring the vascular diameter of the fetal\nabdominal aorta from ultrasound images. We propose a neural network\narchitecture consisting of three blocks: a convolutional layer for the\nextraction of imaging features, a Convolution Gated Recurrent Unit (C-GRU) for\nenforcing the temporal coherence across video frames and exploiting the\ntemporal redundancy of a signal, and a regularized loss function, called\n\\textit{CyclicLoss}, to impose our prior knowledge about the periodicity of the\nobserved signal. We present experimental evidence suggesting that the proposed\narchitecture can reach an accuracy substantially superior to previously\nproposed methods, providing an average reduction of the mean squared error from\n$0.31 mm^2$ (state-of-art) to $0.09 mm^2$, and a relative error reduction from\n$8.1\\%$ to $5.3\\%$. The mean execution speed of the proposed approach of 289\nframes per second makes it suitable for real time clinical use.\n", "title": "Temporal Convolution Networks for Real-Time Abdominal Fetal Aorta Analysis with Ultrasound" }
null
null
null
null
true
null
3397
null
Default
null
null
null
{ "abstract": " Data analysis plays an important role in the development of intelligent\nenergy networks (IENs). This article reviews and discusses the application of\ndata analysis methods for energy big data. The installation of smart energy\nmeters has provided a huge volume of data at different time resolutions,\nsuggesting data analysis is required for clustering, demand forecasting, energy\ngeneration optimization, energy pricing, monitoring and diagnostics. The\ncurrently adopted data analysis technologies for IENs include pattern\nrecognition, machine learning, data mining, statistics methods, etc. However,\nexisting methods for data analysis cannot fully meet the requirements for\nprocessing the big data produced by the IENs and, therefore, more comprehensive\ndata analysis methods are needed to handle the increasing amount of data and to\nmine more valuable information.\n", "title": "The Role of Data Analysis in the Development of Intelligent Energy Networks" }
null
null
null
null
true
null
3398
null
Default
null
null
null
{ "abstract": " Dynamics of a system in general depends on its initial state and how the\nsystem is driven, but in many-body systems the memory is usually averaged out\nduring evolution. Here, interacting quantum systems without external\nrelaxations are shown to retain long-time memory effects in steady states. To\nidentify memory effects, we first show quasi-steady state currents form in\nfinite, isolated Bose and Fermi Hubbard models driven by interaction imbalance\nand they become steady-state currents in the thermodynamic limit. By comparing\nthe steady state currents from different initial states or ramping rates of the\nimbalance, long-time memory effects can be quantified. While the memory effects\nof initial states are more ubiquitous, the memory effects of switching\nprotocols are mostly visible in interaction-induced transport in lattices. Our\nsimulations suggest the systems enter a regime governed by a generalized Fick's\nlaw and memory effects lead to initial-state dependent diffusion coefficients.\nWe also identify conditions for enhancing memory effects and discuss possible\nexperimental implications.\n", "title": "Quantification of the memory effect of steady-state currents from interaction-induced transport in quantum systems" }
null
null
null
null
true
null
3399
null
Default
null
null
null
{ "abstract": " We study laser-driven isomerization reactions through an excited electronic\nstate using the recently developed Geometrical Optimization procedure [J. Phys.\nChem. Lett. 6, 1724 (2015)]. The goal is to analyze whether an initial wave\npacket in the ground state, with optimized amplitudes and phases, can be used\nto enhance the yield of the reaction at faster rates, exploring how the\ngeometrical restrictions induced by the symmetry of the system impose\nlimitations in the optimization procedure. As an example we model the\nisomerization in an oriented 2,2'-dimethyl biphenyl molecule with a simple\nquartic potential. Using long (picosecond) pulses we find that the\nisomerization can be achieved driven by a single pulse. The phase of the\ninitial superposition state does not affect the yield. However, using short\n(femtosecond) pulses, one always needs a pair of pulses to force the reaction.\nHigh yields can only be obtained by optimizing both the initial state, and the\nwave packet prepared in the excited state, implying the well known pump-dump\nmechanism.\n", "title": "Geometrical optimization approach to isomerization: Models and limitations" }
null
null
null
null
true
null
3400
null
Default
null
null