text (null) | inputs (dict) | prediction (null) | prediction_agent (null) | annotation (list) | annotation_agent (null) | multi_label (bool, 1 class) | explanation (null) | id (string, lengths 1-5) | metadata (null) | status (string, 2 classes) | event_timestamp (null) | metrics (null) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
null |
{
"abstract": " There is widespread confusion about the role of projectivity in\nlikelihood-based inference for random graph models. The confusion is rooted in\nclaims that projectivity, a form of marginalizability, may be necessary for\nlikelihood-based inference and consistency of maximum likelihood estimators. We\nshow that likelihood-based superpopulation inference is not affected by lack of\nprojectivity and that projectivity is not a necessary condition for consistency\nof maximum likelihood estimators.\n",
"title": "A note on the role of projectivity in likelihood-based inference for random graph models"
}
| null | null | null | null | true | null |
11701
| null |
Default
| null | null |
null |
{
"abstract": " In this work, we derive relations between generating functions of double\nstuffle relations and double shuffle relations to express the alternating\ndouble Euler sums $\\zeta\\left(\\overline{r}, s\\right)$, $\\zeta\\left(r,\n\\overline{s}\\right)$ and $\\zeta\\left(\\overline{r}, \\overline{s}\\right)$ with\n$r+s$ odd in terms of zeta values. We also give a direct proof of a\nhypergeometric identity which is a limiting case of a basic hypergeometric\nidentity of Andrews. Finally, we gave another proof for the formula of Zagier\non the multiple zeta values $\\zeta(2,\\ldots,2,3,2,\\ldots,2)$.\n",
"title": "Alternating Double Euler Sums, Hypergeometric Identities and a Theorem of Zagier"
}
| null | null | null | null | true | null |
11702
| null |
Default
| null | null |
null |
{
"abstract": " Considerable literature has been developed for various fundamental\ndistributed problems in the SINR (Signal-to-Interference-plus-Noise-Ratio)\nmodel for radio transmission. A setting typically studied is when all nodes\ntransmit a signal of the same strength, and each device only has access to\nknowledge about the total number of nodes in the network $n$, the range from\nwhich each node's label is taken $[1,\\dots,N]$, and the label of the device\nitself. In addition, an assumption is made that each node also knows its\ncoordinates in the Euclidean plane. In this paper, we create a technique which\nallows algorithm designers to remove that last assumption. The assumption about\nthe unavailability of the knowledge of the physical coordinates of the nodes\ntruly captures the `ad-hoc' nature of wireless networks.\nPrevious work in this area uses a flavor of a technique called dilution, in\nwhich nodes transmit in a (predetermined) round-robin fashion, and are able to\nreach all their neighbors. However, without knowing the physical coordinates,\nit's not possible to know the coordinates of their containing (pivotal) grid\nbox and seemingly not possible to use dilution (to coordinate their\ntransmissions). We propose a new technique to achieve dilution without using\nthe knowledge of physical coordinates. This technique exploits the\nunderstanding that the transmitting nodes lie in 2-D space, segmented by an\nappropriate pivotal grid, without explicitly referring to the actual physical\ncoordinates of these nodes. Using this technique, it is possible for every weak\ndevice to successfully transmit its message to all of its neighbors in\n$\\Theta(\\lg N)$ rounds, as long as the density of transmitting nodes in any\nphysical grid box is bounded by a known constant. This technique, we feel, is\nan important generic tool for devising practical protocols when physical\ncoordinates of the nodes are not known.\n",
"title": "Achieving Dilution without Knowledge of Coordinates in the SINR Model"
}
| null | null | null | null | true | null |
11703
| null |
Default
| null | null |
null |
{
"abstract": " The mixedness of a quantum state is usually seen as an adversary to\ntopological quantization of observables. For example, exact quantization of the\ncharge transported in a so-called Thouless adiabatic pump is lifted at any\nfinite temperature in symmetry-protected topological insulators. Here, we show\nthat certain directly observable many-body correlators preserve the integrity\nof topological invariants for mixed Gaussian quantum states in one dimension.\nOur approach relies on the expectation value of the many-body\nmomentum-translation operator, and leads to a physical observable --- the\n\"ensemble geometric phase\" (EGP) --- which represents a bona fide geometric\nphase for mixed quantum states, in the thermodynamic limit. In cyclic\nprotocols, the EGP provides a topologically quantized observable which detects\nencircled spectral singularities (\"purity-gap\" closing points) of density\nmatrices. While we identify the many-body nature of the EGP as a key\ningredient, we propose a conceptually simple, interferometric setup to directly\nmeasure the latter in experiments with mesoscopic ensembles of ultracold atoms.\n",
"title": "Probing the topology of density matrices"
}
| null | null | null | null | true | null |
11704
| null |
Default
| null | null |
null |
{
"abstract": " This paper introduces a new concept of stochastic dependence among many\nrandom variables which we call conditional neighborhood dependence (CND).\nSuppose that there are a set of random variables and a set of sigma algebras\nwhere both sets are indexed by the same set endowed with a neighborhood system.\nWhen the set of random variables satisfies CND, any two non-adjacent sets of\nrandom variables are conditionally independent given sigma algebras having\nindices in one of the two sets' neighborhood. Random variables with CND include\nthose with conditional dependency graphs and a class of Markov random fields\nwith a global Markov property. The CND property is useful for modeling\ncross-sectional dependence governed by a complex, large network. This paper\nprovides two main results. The first result is a stable central limit theorem\nfor a sum of random variables with CND. The second result is a Donsker-type\nresult of stable convergence of empirical processes indexed by a class of\nfunctions satisfying a certain bracketing entropy condition when the random\nvariables satisfy CND.\n",
"title": "Stable Limit Theorems for Empirical Processes under Conditional Neighborhood Dependence"
}
| null | null | null | null | true | null |
11705
| null |
Default
| null | null |
null |
{
"abstract": " We experimentally explore the topological Maxwell metal bands by mapping the\nmomentum space of condensed-matter models to the tunable parameter space of\nsuperconducting quantum circuits. An exotic band structure that is effectively\ndescribed by the spin-1 Maxwell equations is imaged. Three-fold degenerate\npoints dubbed Maxwell points are observed in the Maxwell metal bands. Moreover,\nwe engineer and observe the topological phase transition from the topological\nMaxwell metal to a trivial insulator, and report the first experiment to\nmeasure the Chern numbers that are higher than one.\n",
"title": "Topological Maxwell Metal Bands in a Superconducting Qutrit"
}
| null | null | null | null | true | null |
11706
| null |
Default
| null | null |
null |
{
"abstract": " We report a detailed study of the transport coefficients of\n$\\beta$-Bi$_4$I$_4$ quasi-one dimensional topological insulator. Electrical\nresistivity, thermoelectric power, thermal conductivity and Hall coefficient\nmeasurements are consistent with the possible appearance of a charge density\nwave order at low temperatures. Both electrons and holes contribute to the\nconduction in $\\beta$-Bi$_4$I$_4$ and the dominant type of charge carrier\nchanges with temperature as a consequence of temperature-dependent carrier\ndensities and mobilities. Measurements of resistivity and Seebeck coefficient\nunder hydrostatic pressure up to 2 GPa show a shift of the charge density wave\norder to higher temperatures suggesting a strongly one-dimensional character at\nambient pressure. Surprisingly, superconductivity is induced in\n$\\beta$-Bi$_4$I$_4$ above 10 GPa with of 4.0 K which is slightly decreasing\nupon increasing the pressure up to 20 GPa. Chemical characterisation of the\npressure-treated samples shows amorphization of $\\beta$-Bi$_4$I$_4$ under\npressure and rules out decomposition into Bi and BiI$_3$ at room-temperature\nconditions.\n",
"title": "Pressure effect and Superconductivity in $β$-Bi$_4$I$_4$ Topological Insulator"
}
| null | null | null | null | true | null |
11707
| null |
Default
| null | null |
null |
{
"abstract": " In real-world scenarios, it is appealing to learn a model carrying out\nstochastic operations internally, known as stochastic computation graphs\n(SCGs), rather than learning a deterministic mapping. However, standard\nbackpropagation is not applicable to SCGs. We attempt to address this issue\nfrom the angle of cost propagation, with local surrogate costs, called\nQ-functions, constructed and learned for each stochastic node in an SCG. Then,\nthe SCG can be trained based on these surrogate costs using standard\nbackpropagation. We propose the entire framework as a solution to generalize\nbackpropagation for SCGs, which resembles an actor-critic architecture but\nbased on a graph. For broad applicability, we study a variety of SCG structures\nfrom one cost to multiple costs. We utilize recent advances in reinforcement\nlearning (RL) and variational Bayes (VB), such as off-policy critic learning\nand unbiased-and-low-variance gradient estimation, and review them in the\ncontext of SCGs. The generalized backpropagation extends transported learning\nsignals beyond gradients between stochastic nodes while preserving the benefit\nof backpropagating gradients through deterministic nodes. Experimental\nsuggestions and concerns are listed to help design and test any specific model\nusing this framework.\n",
"title": "Backprop-Q: Generalized Backpropagation for Stochastic Computation Graphs"
}
| null | null | null | null | true | null |
11708
| null |
Default
| null | null |
null |
{
"abstract": " NEWAGE is a direction-sensitive dark-matter-search experiment that uses a\nmicro-patterned gaseous detector, or {\\mu}-PIC, as the readout. The main\nbackground sources are {\\alpha}-rays from radioactive contaminants in the\n{\\mu}-PIC. We have therefore developed a low-alpha-emitting {\\mu}-PICs and\nmeasured its performances. We measured the surface {\\alpha}-ray emission rate\nof the {\\mu}-PIC in the Kamioka mine using a surface {\\alpha}-ray counter based\non a micro TPC.\n",
"title": "Development of a low-alpha-emitting μ-PIC for NEWAGE direction-sensitive dark-matter search"
}
| null | null | null | null | true | null |
11709
| null |
Default
| null | null |
null |
{
"abstract": " The lack of open-source tools for hyperspectral data visualization and\nanalysiscreates a demand for new tools. In this paper we present the new\nPlanetServer,a set of tools comprising a web Geographic Information System\n(GIS) and arecently developed Python Application Programming Interface (API)\ncapableof visualizing and analyzing a wide variety of hyperspectral data from\ndifferentplanetary bodies. Current WebGIS open-source tools are evaluated in\norderto give an overview and contextualize how PlanetServer can help in this\nmat-ters. The web client is thoroughly described as well as the datasets\navailablein PlanetServer. Also, the Python API is described and exposed the\nreason ofits development. Two different examples of mineral characterization of\ndifferenthydrosilicates such as chlorites, prehnites and kaolinites in the Nili\nFossae areaon Mars are presented. As the obtained results show positive outcome\nin hyper-spectral analysis and visualization compared to previous literature,\nwe suggestusing the PlanetServer approach for such investigations.\n",
"title": "Online characterization of planetary surfaces: PlanetServer, an open-source analysis and visualization tool"
}
| null | null | null | null | true | null |
11710
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of searching for and tracking a collection of moving\ntargets using a robot with a limited Field-Of-View (FOV) sensor. The actual\nnumber of targets present in the environment is not known a priori. We propose\na search and tracking framework based on the concept of Bayesian Random Finite\nSets (RFSs). Specifically, we generalize the Gaussian Mixture Probability\nHypothesis Density (GM-PHD) filter which was previously applied for tracking\nproblems to allow for simultaneous search and tracking with a limited FOV\nsensor. The proposed framework can extract individual target tracks as well as\nestimate the number and the spatial density of targets. We also show how to use\nthe Gaussian Process (GP) regression to extract and predict non-linear target\ntrajectories in this framework. We demonstrate the efficacy of our techniques\nthrough representative simulations and a real data collected from an aerial\nrobot.\n",
"title": "GM-PHD Filter for Searching and Tracking an Unknown Number of Targets with a Mobile Sensor with Limited FOV"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11711
| null |
Validated
| null | null |
null |
{
"abstract": " We prove regularity estimates for entropy solutions to scalar conservation\nlaws with a force. Based on the kinetic form of a scalar conservation law, a\nnew decomposition of entropy solutions is introduced, by means of a\ndecomposition in the velocity variable, adapted to the non-degeneracy\nproperties of the flux function. This allows a finer control of the degeneracy\nbehavior of the flux. In addition, this decomposition allows to make use of the\nfact that the entropy dissipation measure has locally finite singular moments.\nBased on these observations, improved regularity estimates for entropy\nsolutions to (forced) scalar conservation laws are obtained.\n",
"title": "Regularity of solutions to scalar conservation laws with a force"
}
| null | null | null | null | true | null |
11712
| null |
Default
| null | null |
null |
{
"abstract": " We analyze the ground state localization properties of an array of identical\ninteracting spinless fermionic chains with quasi-random disorder, using\nnon-perturbative Renormalization Group methods. In the single or two chains\ncase localization persists while for a larger number of chains a different\nqualitative behavior is generically expected, unless the many body interaction\nis vanishing. This is due to number theoretical properties of the frequency,\nsimilar to the ones assumed in KAM theory, and cancellations due to Pauli\nprinciple which in the single or two chains case imply that all the effective\ninteractions are irrelevant; in contrast for a larger number of chains relevant\neffective interactions are present.\n",
"title": "Coupled identical localized fermionic chains with quasi-random disorder"
}
| null | null | null | null | true | null |
11713
| null |
Default
| null | null |
null |
{
"abstract": " We propose a calibrated filtered reduced order model (CF-ROM) framework for\nthe numerical simulation of general nonlinear PDEs that are amenable to reduced\norder modeling. The novel CF-ROM framework consists of two steps: (i) In the\nfirst step, we use explicit ROM spatial filtering of the nonlinear PDE to\nconstruct a filtered ROM. This filtered ROM is low-dimensional, but is not\nclosed (because of the nonlinearity in the given PDE). (ii) In the second step,\nwe use a calibration procedure to close the filtered ROM, i.e., to model the\ninteraction between the resolved and unresolved modes. To this end, we use a\nlinear or quadratic ansatz to model this interaction and close the filtered\nROM. To find the new coefficients in the closed filtered ROM, we solve an\noptimization problem that minimizes the difference between the full order model\ndata and our ansatz. Although we use a fluid dynamics setting to illustrate how\nto construct and use the CF-ROM framework, we emphasize that it is built on\ngeneral ideas of spatial filtering and optimization and is independent of\n(restrictive) phenomenological arguments. Thus, the CF-ROM framework can be\napplied to a wide variety of PDEs.\n",
"title": "Calibrated Filtered Reduced Order Modeling"
}
| null | null | null | null | true | null |
11714
| null |
Default
| null | null |
null |
{
"abstract": " The interaction of light with an atomic sample containing a large number of\nparticles gives rise to many collective (or cooperative) effects, such as\nmultiple scattering, superradiance and subradiance, even if the atomic density\nis low and the incident optical intensity weak (linear optics regime). Tracing\nover the degrees of freedom of the light field, the system can be well\ndescribed by an effective atomic Hamiltonian, which contains the light-mediated\ndipole-dipole interaction between atoms. This long-range interaction is at the\norigin of the various collective effects, or of collective excitation modes of\nthe system. Even though an analysis of the eigenvalues and eigenfunctions of\nthese collective modes does allow distinguishing superradiant modes, for\ninstance, from other collective modes, this is not sufficient to understand the\ndynamics of a driven system, as not all collective modes are significantly\npopulated. Here, we study how the excitation parameters, i.e. the driving\nfield, determines the population of the collective modes. We investigate in\nparticular the role of the laser detuning from the atomic transition, and\ndemonstrate a simple relation between the detuning and the steady-state\npopulation of the modes. This relation allows understanding several properties\nof cooperative scattering, such as why superradiance and subradiance become\nindependent of the detuning at large enough detuning without vanishing, and why\nsuperradiance, but not subradiance, is suppressed near resonance.\n",
"title": "Population of collective modes in light scattering by many atoms"
}
| null | null | null | null | true | null |
11715
| null |
Default
| null | null |
null |
{
"abstract": " Technological advancement in Wireless Sensor Networks (WSN) has made it\nbecome an invaluable component of a reliable environmental monitoring system;\nthey form the digital skin' through which to 'sense' and collect the context of\nthe surroundings and provides information on the process leading to complex\nevents such as drought. However, these environmental properties are measured by\nvarious heterogeneous sensors of different modalities in distributed locations\nmaking up the WSN, using different abstruse terms and vocabulary in most cases\nto denote the same observed property, causing data heterogeneity. Adding\nsemantics and understanding the relationships that exist between the observed\nproperties, and augmenting it with local indigenous knowledge is necessary for\nan accurate drought forecasting system. In this paper, we propose the framework\nfor the semantic representation of sensor data and integration with indigenous\nknowledge on drought using a middleware for an efficient drought forecasting\nsystem.\n",
"title": "A Framework for Accurate Drought Forecasting System Using Semantics-Based Data Integration Middleware"
}
| null | null | null | null | true | null |
11716
| null |
Default
| null | null |
null |
{
"abstract": " J. Willard Gibbs' Elementary Principles in Statistical Mechanics was the\ndefinitive work of one of America's greatest physicists. Gibbs' book on\nstatistical mechanics establishes the basic principles and fundamental results\nthat have flowered into the modern field of statistical mechanics. However, at\na number of points, Gibbs' teachings on statistical mechanics diverge from\npositions on the canonical ensemble found in more recent works, at points where\nseemingly there should be agreement. The objective of this paper is to note\nsome of these points, so that Gibbs' actual positions are not misrepresented to\nfuture generations of students.\n",
"title": "Readings and Misreadings of J. Willard Gibbs Elementary Principles in Statistical Mechanics"
}
| null | null |
[
"Physics"
] | null | true | null |
11717
| null |
Validated
| null | null |
null |
{
"abstract": " In this work, we focus on multilingual systems based on recurrent neural\nnetworks (RNNs), trained using the Connectionist Temporal Classification (CTC)\nloss function. Using a multilingual set of acoustic units poses difficulties.\nTo address this issue, we proposed Language Feature Vectors (LFVs) to train\nlanguage adaptive multilingual systems. Language adaptation, in contrast to\nspeaker adaptation, needs to be applied not only on the feature level, but also\nto deeper layers of the network. In this work, we therefore extended our\nprevious approach by introducing a novel technique which we call \"modulation\".\nBased on this method, we modulated the hidden layers of RNNs using LFVs. We\nevaluated this approach in both full and low resource conditions, as well as\nfor grapheme and phone based systems. Lower error rates throughout the\ndifferent conditions could be achieved by the use of the modulation.\n",
"title": "Multilingual Adaptation of RNN Based ASR Systems"
}
| null | null | null | null | true | null |
11718
| null |
Default
| null | null |
null |
{
"abstract": " SPIDERS (SPectroscopic IDentification of eROSITA Sources) is an SDSS-IV\nsurvey running in parallel to the eBOSS cosmology project. SPIDERS will obtain\noptical spectroscopy for large numbers of X-ray-selected AGN and galaxy cluster\nmembers detected in wide area eROSITA, XMM-Newton and ROSAT surveys. We\ndescribe the methods used to choose spectroscopic targets for two\nsub-programmes of SPIDERS: X-ray selected AGN candidates detected in the ROSAT\nAll Sky and the XMM-Newton Slew surveys. We have exploited a Bayesian\ncross-matching algorithm, guided by priors based on mid-IR colour-magnitude\ninformation from the WISE survey, to select the most probable optical\ncounterpart to each X-ray detection. We empirically demonstrate the high\nfidelity of our counterpart selection method using a reference sample of bright\nwell-localised X-ray sources collated from XMM-Newton, Chandra and Swift-XRT\nserendipitous catalogues, and also by examining blank-sky locations. We\ndescribe the down-selection steps which resulted in the final set of\nSPIDERS-AGN targets put forward for spectroscopy within the eBOSS/TDSS/SPIDERS\nsurvey, and present catalogues of these targets. We also present catalogues of\n~12000 ROSAT and ~1500 XMM-Newton Slew survey sources which have existing\noptical spectroscopy from SDSS-DR12, including the results of our visual\ninspections. On completion of the SPIDERS program, we expect to have collected\nhomogeneous spectroscopic redshift information over a footprint of ~7500\ndeg$^2$ for >85 percent of the ROSAT and XMM-Newton Slew survey sources having\noptical counterparts in the magnitude range 17<r<22.5, producing a large and\nhighly complete sample of bright X-ray-selected AGN suitable for statistical\nstudies of AGN evolution and clustering.\n",
"title": "SPIDERS: Selection of spectroscopic targets using AGN candidates detected in all-sky X-ray surveys"
}
| null | null |
[
"Physics"
] | null | true | null |
11719
| null |
Validated
| null | null |
null |
{
"abstract": " Task-specific word identification aims to choose the task-related words that\nbest describe a short text. Existing approaches require well-defined seed words\nor lexical dictionaries (e.g., WordNet), which are often unavailable for many\napplications such as social discrimination detection and fake review detection.\nHowever, we often have a set of labeled short texts where each short text has a\ntask-related class label, e.g., discriminatory or non-discriminatory, specified\nby users or learned by classification algorithms. In this paper, we focus on\nidentifying task-specific words and phrases from short texts by exploiting\ntheir class labels rather than using seed words or lexical dictionaries. We\nconsider the task-specific word and phrase identification as feature learning.\nWe train a convolutional neural network over a set of labeled texts and use\nscore vectors to localize the task-specific words and phrases. Experimental\nresults on sentiment word identification show that our approach significantly\noutperforms existing methods. We further conduct two case studies to show the\neffectiveness of our approach. One case study on a crawled tweets dataset\ndemonstrates that our approach can successfully capture the\ndiscrimination-related words/phrases. The other case study on fake review\ndetection shows that our approach can identify the fake-review words/phrases.\n",
"title": "Task-specific Word Identification from Short Texts Using a Convolutional Neural Network"
}
| null | null | null | null | true | null |
11720
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a simple sub-universal quantum computing model, which we call\nthe Hadamard-classical circuit with one-qubit (HC1Q) model. It consists of a\nclassical reversible circuit sandwiched by two layers of Hadamard gates, and\ntherefore it is in the second level of the Fourier hierarchy. We show that\noutput probability distributions of the HC1Q model cannot be classically\nefficiently sampled within a multiplicative error unless the polynomial-time\nhierarchy collapses to the second level. The proof technique is different from\nthose used for previous sub-universal models, such as IQP, Boson Sampling, and\nDQC1, and therefore the technique itself might be useful for finding other\nsub-universal models that are hard to classically simulate. We also study the\nclassical verification of quantum computing in the second level of the Fourier\nhierarchy. To this end, we define a promise problem, which we call the\nprobability distribution distinguishability with maximum norm (PDD-Max). It is\na promise problem to decide whether output probability distributions of two\nquantum circuits are far apart or close. We show that PDD-Max is BQP-complete,\nbut if the two circuits are restricted to some types in the second level of the\nFourier hierarchy, such as the HC1Q model or the IQP model, PDD-Max has a\nMerlin-Arthur system with quantum polynomial-time Merlin and classical\nprobabilistic polynomial-time Arthur.\n",
"title": "Merlin-Arthur with efficient quantum Merlin and quantum supremacy for the second level of the Fourier hierarchy"
}
| null | null | null | null | true | null |
11721
| null |
Default
| null | null |
null |
{
"abstract": " We present $\\psi'$MSSM, a model based on a $U(1)_{\\psi'}$ extension of the\nminimal supersymmetric standard model. The gauge symmetry $U(1)_{\\psi'}$, also\nknown as $U(1)_N$, is a linear combination of the $U(1)_\\chi$ and $U(1)_\\psi$\nsubgroups of $E_6$. The model predicts the existence of three sterile neutrinos\nwith masses $\\lesssim 0.1~{\\rm eV}$, if the $U(1)_{\\psi'}$ breaking scale is of\norder 10 TeV. Their contribution to the effective number of neutrinos at\nnucleosynthesis is $\\Delta N_{\\nu}\\simeq 0.29$. The model can provide a variety\nof possible cold dark matter candidates including the lightest sterile\nsneutrino. If the $U(1)_{\\psi'}$ breaking scale is increased to $10^3~{\\rm\nTeV}$, the sterile neutrinos, which are stable on account of a $Z_2$ symmetry,\nbecome viable warm dark matter candidates. The observed value of the standard\nmodel Higgs boson mass can be obtained with relatively light stop quarks thanks\nto the D-term contribution from $U(1)_{\\psi'}$. The model predicts diquark and\ndiphoton resonances which may be found at an updated LHC. The well-known $\\mu$\nproblem is resolved and the observed baryon asymmetry of the universe can be\ngenerated via leptogenesis. The breaking of $U(1)_{\\psi'}$ produces\nsuperconducting strings that may be present in our galaxy. A $U(1)$ R symmetry\nplays a key role in keeping the proton stable and providing the light sterile\nneutrinos.\n",
"title": "Light sterile neutrinos, dark matter, and new resonances in a $U(1)$ extension of the MSSM"
}
| null | null | null | null | true | null |
11722
| null |
Default
| null | null |
null |
{
"abstract": " Reliable extraction of cosmological information from clustering measurements\nof galaxy surveys requires estimation of the error covariance matrices of\nobservables. The accuracy of covariance matrices is limited by our ability to\ngenerate sufficiently large number of independent mock catalogs that can\ndescribe the physics of galaxy clustering across a wide range of scales.\nFurthermore, galaxy mock catalogs are required to study systematics in galaxy\nsurveys and to test analysis tools. In this investigation, we present a fast\nand accurate approach for generation of mock catalogs for the upcoming galaxy\nsurveys. Our method relies on low-resolution approximate gravity solvers to\nsimulate the large scale dark matter field, which we then populate with halos\naccording to a flexible nonlinear and stochastic bias model. In particular, we\nextend the \\textsc{patchy} code with an efficient particle mesh algorithm to\nsimulate the dark matter field (the \\textsc{FastPM} code), and with a robust\nMCMC method relying on the \\textsc{emcee} code for constraining the parameters\nof the bias model. Using the halos in the BigMultiDark high-resolution $N$-body\nsimulation as a reference catalog, we demonstrate that our technique can model\nthe bivariate probability distribution function (counts-in-cells), power\nspectrum, and bispectrum of halos in the reference catalog. Specifically, we\nshow that the new ingredients permit us to reach percentage accuracy in the\npower spectrum up to $k\\sim 0.4\\; \\,h\\,{\\rm Mpc}^{-1}$ (within 5\\% up to $k\\sim\n0.6\\; \\,h\\,{\\rm Mpc}^{-1}$) with accurate bispectra improving previous results\nbased on Lagrangian perturbation theory.\n",
"title": "Accurate halo-galaxy mocks from automatic bias estimation and particle mesh gravity solvers"
}
| null | null | null | null | true | null |
11723
| null |
Default
| null | null |
null |
{
"abstract": " Given a geometric path, the Time-Optimal Path Tracking problem consists in\nfinding the control strategy to traverse the path time-optimally while\nregulating tracking errors. A simple yet effective approach to this problem is\nto decompose the controller into two components: (i)~a path controller, which\nmodulates the parameterization of the desired path in an online manner,\nyielding a reference trajectory; and (ii)~a tracking controller, which takes\nthe reference trajectory and outputs joint torques for tracking. However, there\nis one major difficulty: the path controller might not find any feasible\nreference trajectory that can be tracked by the tracking controller because of\ntorque bounds. In turn, this results in degraded tracking performances. Here,\nwe propose a new path controller that is guaranteed to find feasible reference\ntrajectories by accounting for possible future perturbations. The main\ntechnical tool underlying the proposed controller is Reachability Analysis, a\nnew method for analyzing path parameterization problems. Simulations show that\nthe proposed controller outperforms existing methods.\n",
"title": "Time-Optimal Path Tracking via Reachability Analysis"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11724
| null |
Validated
| null | null |
null |
{
"abstract": " We consider deep classifying neural networks. We expose a structure in the\nderivative of the logits with respect to the parameters of the model, which is\nused to explain the existence of outliers in the spectrum of the Hessian.\nPrevious works decomposed the Hessian into two components, attributing the\noutliers to one of them, the so-called Covariance of gradients. We show this\nterm is not a Covariance but a second moment matrix, i.e., it is influenced by\nmeans of gradients. These means possess an additive two-way structure that is\nthe source of the outliers in the spectrum. This structure can be used to\napproximate the principal subspace of the Hessian using certain \"averaging\"\noperations, avoiding the need for high-dimensional eigenanalysis. We\ncorroborate this claim across different datasets, architectures and sample\nsizes.\n",
"title": "Measurements of Three-Level Hierarchical Structure in the Outliers in the Spectrum of Deepnet Hessians"
}
| null | null | null | null | true | null |
11725
| null |
Default
| null | null |
null |
{
"abstract": " We study statistical inference for small-noise-perturbed multiscale dynamical\nsystems under the assumption that we observe a single time series from the slow\nprocess only. We construct estimators for both averaging and homogenization\nregimes, based on an appropriate misspecified model motivated by a second-order\nstochastic Taylor expansion of the slow process with respect to a function of\nthe time-scale separation parameter. In the case of a fixed number of\nobservations, we establish consistency, asymptotic normality, and asymptotic\nstatistical efficiency of a minimum contrast estimator (MCE), the limiting\nvariance having been identified explicitly; we furthermore establish\nconsistency and asymptotic normality of a simplified minimum constrast\nestimator (SMCE), which is however not in general efficient. These results are\nthen extended to the case of high-frequency observations under a condition\nrestricting the rate at which the number of observations may grow vis-à-vis\nthe separation of scales. Numerical simulations illustrate the theoretical\nresults.\n",
"title": "Discrete-Time Statistical Inference for Multiscale Diffusions"
}
| null | null | null | null | true | null |
11726
| null |
Default
| null | null |
null |
{
"abstract": " Visualization of tabular data---for both presentation and exploration\npurposes---is a well-researched area. Although effective visual presentations\nof complex tables are supported by various plotting libraries, creating such\ntables is a tedious process and requires scripting skills. In contrast,\ninteractive table visualizations that are designed for exploration purposes\neither operate at the level of individual rows, where large parts of the table\nare accessible only via scrolling, or provide a high-level overview that often\nlacks context-preserving drill-down capabilities. In this work we present\nTaggle, a novel visualization technique for exploring and presenting large and\ncomplex tables that are composed of individual columns of categorical or\nnumerical data and homogeneous matrices. The key contribution of Taggle is the\nhierarchical aggregation of data subsets, for which the user can also choose\nsuitable visual representations.The aggregation strategy is complemented by the\nability to sort hierarchically such that groups of items can be flexibly\ndefined by combining categorical stratifications and by rich data selection and\nfiltering capabilities. We demonstrate the usefulness of Taggle for interactive\nanalysis and presentation of complex genomics data for the purpose of drug\ndiscovery.\n",
"title": "Taggle: Scalable Visualization of Tabular Data through Aggregation"
}
| null | null | null | null | true | null |
11727
| null |
Default
| null | null |
null |
{
"abstract": " Strong electron interactions can drive metallic systems toward a variety of\nwell-known symmetry-broken phases, but the instabilities of correlated metals\nwith strong spin-orbit coupling have only recently begun to be explored. We\nuncovered a multipolar nematic phase of matter in the metallic pyrochlore\nCd$_2$Re$_2$O$_7$ using spatially resolved second-harmonic optical anisotropy\nmeasurements. Like previously discovered electronic nematic phases, this\nmultipolar phase spontaneously breaks rotational symmetry while preserving\ntranslational invariance. However, it has the distinguishing property of being\nodd under spatial inversion, which is allowed only in the presence of\nspin-orbit coupling. By examining the critical behavior of the multipolar\nnematic order parameter, we show that it drives the thermal phase transition\nnear 200 kelvin in Cd$_2$Re$_2$O$_7$ and induces a parity-breaking lattice\ndistortion as a secondary order.\n",
"title": "A parity-breaking electronic nematic phase transition in the spin-orbit coupled metal Cd$_2$Re$_2$O$_7$"
}
| null | null | null | null | true | null |
11728
| null |
Default
| null | null |
null |
{
"abstract": " We use a variant of the technique in [Lac17a] to give sparse L^p(log(L))^4\nbounds for a class of model singular and maximal Radon transforms\n",
"title": "Sparse bounds for a prototypical singular Radon transform"
}
| null | null | null | null | true | null |
11729
| null |
Default
| null | null |
null |
{
"abstract": " Sterile neutrinos are natural extensions to the standard model of particle\nphysics in neutrino mass generation mechanisms. If they are relatively light,\nless than approximately 10 keV, they can alter cosmology significantly, from\nthe early Universe to the matter and radiation energy density today. Here, we\nreview the cosmological role such light sterile neutrinos can play from the\nearly Universe, including production of keV-scale sterile neutrinos as dark\nmatter candidates, and dynamics of light eV-scale sterile neutrinos during the\nweakly-coupled active neutrino era. We review proposed signatures of light\nsterile neutrinos in cosmic microwave background and large scale structure\ndata. We also discuss keV-scale sterile neutrino dark matter decay signatures\nin X-ray observations, including recent candidate $\\sim$3.5 keV X-ray line\ndetections consistent with the decay of a $\\sim$7 keV sterile neutrino dark\nmatter particle.\n",
"title": "Sterile neutrinos in cosmology"
}
| null | null | null | null | true | null |
11730
| null |
Default
| null | null |
null |
{
"abstract": " Probabilistic mixture models have been widely used for different machine\nlearning and pattern recognition tasks such as clustering, dimensionality\nreduction, and classification. In this paper, we focus on trying to solve the\nmost common challenges related to supervised learning algorithms by using\nmixture probability distribution functions. With this modeling strategy, we\nidentify sub-labels and generate synthetic data in order to reach better\nclassification accuracy. It means we focus on increasing the training data\nsynthetically to increase the classification accuracy.\n",
"title": "A Statistical Approach to Increase Classification Accuracy in Supervised Learning Algorithms"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
11731
| null |
Validated
| null | null |
null |
{
"abstract": " This paper presents the kinematic analysis of the 3-PPPS parallel robot with\nan equilateral mobile platform and a U-shape base. The proposed design and\nappropriate selection of parameters allow to formulate simpler direct and\ninverse kinematics for the manipulator under study. The parallel singularities\nassociated with the manipulator depend only on the orientation of the\nend-effector, and thus depend only on the orientation of the end effector. The\nquaternion parameters are used to represent the aspects, i.e. the singularity\nfree regions of the workspace. A cylindrical algebraic decomposition is used to\ncharacterize the workspace and joint space with a low number of cells. The\ndis-criminant variety is obtained to describe the boundaries of each cell. With\nthese simplifications, the 3-PPPS parallel robot with proposed design can be\nclaimed as the simplest 6 DOF robot, which further makes it useful for the\nindustrial applications.\n",
"title": "Kinematics and workspace analysis of a 3ppps parallel robot with u-shaped base"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11732
| null |
Validated
| null | null |
null |
{
"abstract": " Context: The gravitational lensing time delay method provides a one-step\ndetermination of the Hubble constant (H0) with an uncertainty level on par with\nthe cosmic distance ladder method. However, to further investigate the nature\nof the dark energy, a H0 estimate down to 1% level is greatly needed. This\nrequires dozens of strongly lensed quasars that are yet to be delivered by\nongoing and forthcoming all-sky surveys.\nAims: In this work we aim to determine the spectroscopic redshift of\nPSOJ0147, the first strongly lensed quasar candidate found in the Pan-STARRS\nsurvey. The main goal of our work is to derive an accurate redshift estimate of\nthe background quasar for cosmography.\nMethods: To obtain timely spectroscopically follow-up, we took advantage of\nthe fast-track service programme that is carried out by the Nordic Optical\nTelescope. Using a grism covering 3200 - 9600 A, we identified prominent\nemission line features, such as Ly-alpha, N V, O I, C II, Si IV, C IV, and [C\nIII] in the spectra of the background quasar of the PSOJ0147 lens system. This\nenables us to determine accurately the redshift of the background quasar.\nResults: The spectrum of the background quasar exhibits prominent absorption\nfeatures bluewards of the strong emission lines, such as Ly-alpha, N V, and C\nIV. These blue absorption lines indicate that the background source is a broad\nabsorption line (BAL) quasar. Unfortunately, the BAL features hamper an\naccurate determination of redshift using the above-mentioned strong emission\nlines. Nevertheless, we are able to determine a redshift of 2.341+/-0.001 from\nthree of the four lensed quasar images with the clean forbidden line [C III].\nIn addition, we also derive a maximum outflow velocity of ~ 9800 km/s with the\nbroad absorption features bluewards of the C IV emission line. This value of\nmaximum outflow velocity is in good agreement with other BAL quasars.\n",
"title": "Accurate spectroscopic redshift of the multiply lensed quasar PSOJ0147 from the Pan-STARRS survey"
}
| null | null |
[
"Physics"
] | null | true | null |
11733
| null |
Validated
| null | null |
null |
{
"abstract": " In a projective plane $\\Pi_{q}$ (not necessarily Desarguesian) of order $q$,\na point subset $\\mathcal{S}$ is saturating (or dense) if any point of\n$\\Pi_{q}\\setminus \\mathcal{S}$ is collinear with two points in $\\mathcal{S}$.\nModifying an approach of [31], we proved the following upper bound on the\nsmallest size $s(2,q)$ of a saturating set in $\\Pi_{q}$: \\begin{equation*}\ns(2,q)\\leq \\sqrt{(q+1)\\left(3\\ln q+\\ln\\ln q\n+\\ln\\frac{3}{4}\\right)}+\\sqrt{\\frac{q}{3\\ln q}}+3. \\end{equation*} The bound\nholds for all q, not necessarily large.\nBy using inductive constructions, upper bounds on the smallest size of a\nsaturating set in the projective space $\\mathrm{PG}(N,q)$ with even dimension\n$N$ are obtained.\nAll the results are also stated in terms of linear covering codes.\n",
"title": "Upper bounds on the smallest size of a saturating set in projective planes and spaces of even dimension"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
11734
| null |
Validated
| null | null |
null |
{
"abstract": " We present sketch-rnn, a recurrent neural network (RNN) able to construct\nstroke-based drawings of common objects. The model is trained on thousands of\ncrude human-drawn images representing hundreds of classes. We outline a\nframework for conditional and unconditional sketch generation, and describe new\nrobust training methods for generating coherent sketch drawings in a vector\nformat.\n",
"title": "A Neural Representation of Sketch Drawings"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
11735
| null |
Validated
| null | null |
null |
{
"abstract": " This paper presents privileged multi-label learning (PrML) to explore and\nexploit the relationship between labels in multi-label learning problems. We\nsuggest that for each individual label, it cannot only be implicitly connected\nwith other labels via the low-rank constraint over label predictors, but also\nits performance on examples can receive the explicit comments from other labels\ntogether acting as an \\emph{Oracle teacher}. We generate privileged label\nfeature for each example and its individual label, and then integrate it into\nthe framework of low-rank based multi-label learning. The proposed algorithm\ncan therefore comprehensively explore and exploit label relationships by\ninheriting all the merits of privileged information and low-rank constraints.\nWe show that PrML can be efficiently solved by dual coordinate descent\nalgorithm using iterative optimization strategy with cheap updates. Experiments\non benchmark datasets show that through privileged label features, the\nperformance can be significantly improved and PrML is superior to several\ncompeting methods in most cases.\n",
"title": "Privileged Multi-label Learning"
}
| null | null | null | null | true | null |
11736
| null |
Default
| null | null |
null |
{
"abstract": " We refine a result of the last two Authors of [8] on a Diophantine\napproximation problem with two primes and a $k$-th power of a prime which was\nonly proved to hold for $1<k<4/3$. We improve the $k$-range to $1<k\\le 3$ by\ncombining Harman's technique on the minor arc with a suitable estimate for the\n$L^4$-norm of the relevant exponential sum over primes $S_k$. In the common\nrange we also give a stronger bound for the approximation.\n",
"title": "A Diophantine approximation problem with two primes and one $k$-th power of a prime"
}
| null | null | null | null | true | null |
11737
| null |
Default
| null | null |
null |
{
"abstract": " Trace norm regularization is a widely used approach for learning low rank\nmatrices. A standard optimization strategy is based on formulating the problem\nas one of low rank matrix factorization which, however, leads to a non-convex\nproblem. In practice this approach works well, and it is often computationally\nfaster than standard convex solvers such as proximal gradient methods.\nNevertheless, it is not guaranteed to converge to a global optimum, and the\noptimization can be trapped at poor stationary points. In this paper we show\nthat it is possible to characterize all critical points of the non-convex\nproblem. This allows us to provide an efficient criterion to determine whether\na critical point is also a global minimizer. Our analysis suggests an iterative\nmeta-algorithm that dynamically expands the parameter space and allows the\noptimization to escape any non-global critical point, thereby converging to a\nglobal minimizer. The algorithm can be applied to problems such as matrix\ncompletion or multitask learning, and our analysis holds for any random\ninitialization of the factor matrices. Finally, we confirm the good performance\nof the algorithm on synthetic and real datasets.\n",
"title": "Reexamining Low Rank Matrix Factorization for Trace Norm Regularization"
}
| null | null | null | null | true | null |
11738
| null |
Default
| null | null |
null |
{
"abstract": " Reaction networks are mainly used to model the time-evolution of molecules of\ninteracting chemical species. Stochastic models are typically used when the\ncounts of the molecules are low, whereas deterministic models are used when the\ncounts are in high abundance. In 2011, the notion of `tiers' was introduced to\nstudy the long time behavior of deterministically modeled reaction networks\nthat are weakly reversible and have a single linkage class. This `tier' based\nargument was analytical in nature. Later, in 2014, the notion of a strongly\nendotactic network was introduced in order to generalize the previous results\nfrom weakly reversible networks with a single linkage class to this wider\nfamily of networks. The point of view of this later work was more geometric and\nalgebraic in nature. The notion of strongly endotactic networks was later used\nin 2018 to prove a large deviation principle for a class of stochastically\nmodeled reaction networks.\nWe provide an analytical characterization of strongly endotactic networks in\nterms of tier structures. By doing so, we shed light on the connection between\nthe two points of view, and also make available a new proof technique for the\nstudy of strongly endotactic networks. We show the power of this new technique\nin two distinct ways. First, we demonstrate how the main previous results\nrelated to strongly endotactic networks, both for the deterministic and\nstochastic modeling choices, can be quickly obtained from our characterization.\nSecond, we demonstrate how new results can be obtained by proving that a\nsub-class of strongly endotactic networks, when modeled stochastically, is\npositive recurrent. Finally, and similarly to recent independent work by Agazzi\nand Mattingly, we provide an example which closes a conjecture in the negative\nby showing that stochastically modeled strongly endotactic networks can be\ntransient (and even explosive).\n",
"title": "Tier structure of strongly endotactic reaction networks"
}
| null | null | null | null | true | null |
11739
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we introduce the notions of ${\\rm FP}_n$-injective and ${\\rm\nFP}_n$-flat complexes in terms of complexes of type ${\\rm FP}_n$. We show that\nsome characterizations analogous to that of injective, FP-injective and flat\ncomplexes exist for ${\\rm FP}_n$-injective and ${\\rm FP}_n$-flat complexes. We\nalso introduce and study ${\\rm FP}_n$-injective and ${\\rm FP}_n$-flat\ndimensions of modules and complexes, and give a relation between them in terms\nof Pontrjagin duality. The existence of pre-envelopes and covers in this\nsetting is discussed, and we prove that any complex has an ${\\rm FP}_n$-flat\ncover and an ${\\rm FP}_n$-flat pre-envelope, and in the case $n \\geq 2$ that\nany complex has an ${\\rm FP}_n$-injective cover and an ${\\rm FP}_n$-injective\npre-envelope. Finally, we construct model structures on the category of\ncomplexes from the classes of modules with bounded ${\\rm FP}_n$-injective and\n${\\rm FP}_n$-flat dimensions, and analyze several conditions under which it is\npossible to connect these model structures via Quillen functors and Quillen\nequivalences.\n",
"title": "Relative FP-injective and FP-flat complexes and their model structures"
}
| null | null | null | null | true | null |
11740
| null |
Default
| null | null |
null |
{
"abstract": " Given a straight-line drawing $\\Gamma$ of a graph $G=(V,E)$, for every vertex\n$v$ the ply disk $D_v$ is defined as a disk centered at $v$ where the radius of\nthe disk is half the length of the longest edge incident to $v$. The ply number\nof a given drawing is defined as the maximum number of overlapping disks at\nsome point in $\\mathbb{R}^2$. Here we present a tool to explore and evaluate\nthe ply number for graphs with instant visual feedback for the user. We\nevaluate our methods in comparison to an existing ply computation by De Luca et\nal. [WALCOM'17]. We are able to reduce the computation time from seconds to\nmilliseconds for given drawings and thereby contribute to further research on\nthe ply topic by providing an efficient tool to examine graphs extensively by\nuser interaction as well as some automatic features to reduce the ply number.\n",
"title": "An Interactive Tool to Explore and Improve the Ply Number of Drawings"
}
| null | null | null | null | true | null |
11741
| null |
Default
| null | null |
null |
{
"abstract": " Light curves show the flux variation from the target star and its orbiting\nplanets as a function of time. In addition to the transit features created by\nthe planets, the flux also includes the reflected light component of each\nplanet, which depends on the planetary albedo. This signal is typically\nreferred to as phase curve and could be easily identified if there were no\nadditional noise. As well as instrumental noise, stellar activity, such as\nspots, can create a modulation in the data, which may be very difficult to\ndistinguish from the planetary signal. We analyze the limitations imposed by\nthe stellar activity on the detection of the planetary albedo, considering the\nlimitations imposed by the predicted level of instrumental noise and the short\nduration of the observations planned in the context of the CHEOPS mission. As\ninitial condition, we have assumed that each star is characterized by just one\norbiting planet. We built mock light curves that included a realistic stellar\nactivity pattern, the reflected light component of the planet and an\ninstrumental noise level, which we have chosen to be at the same level as\npredicted for CHEOPS. We then fit these light curves to try to recover the\nreflected light component, assuming the activity patterns can be modeled with a\nGaussian process.We estimate that at least one full stellar rotation is\nnecessary to obtain a reliable detection of the planetary albedo. This result\nis independent of the level of noise, but it depends on the limitation of the\nGaussian process to describe the stellar activity when the light curve\ntime-span is shorter than the stellar rotation. Finally, in presence of typical\nCHEOPS gaps in the simulations, we confirm that it is still possible to obtain\na reliable albedo.\n",
"title": "Distinguishing the albedo of exoplanets from stellar activity"
}
| null | null |
[
"Physics"
] | null | true | null |
11742
| null |
Validated
| null | null |
null |
{
"abstract": " We consider the problem of performing inverse reinforcement learning when the\ntrajectory of the expert is not perfectly observed by the learner. Instead, a\nnoisy continuous-time observation of the trajectory is provided to the learner.\nThis problem exhibits wide-ranging applications and the specific application we\nconsider here is the scenario in which the learner seeks to penetrate a\nperimeter patrolled by a robot. The learner's field of view is limited due to\nwhich it cannot observe the patroller's complete trajectory. Instead, we allow\nthe learner to listen to the expert's movement sound, which it can also use to\nestimate the expert's state and action using an observation model. We treat the\nexpert's state and action as hidden data and present an algorithm based on\nexpectation maximization and maximum entropy principle to solve the non-linear,\nnon-convex problem. Related work considers discrete-time observations and an\nobservation model that does not include actions. In contrast, our technique\ntakes expectations over both state and action of the expert, enabling learning\neven in the presence of extreme noise and broader applications.\n",
"title": "Inverse Reinforcement Learning Under Noisy Observations"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11743
| null |
Validated
| null | null |
null |
{
"abstract": " In this paper we introduce and analyse Langevin samplers that consist of\nperturbations of the standard underdamped Langevin dynamics. The perturbed\ndynamics is such that its invariant measure is the same as that of the\nunperturbed dynamics. We show that appropriate choices of the perturbations can\nlead to samplers that have improved properties, at least in terms of reducing\nthe asymptotic variance. We present a detailed analysis of the new Langevin\nsampler for Gaussian target distributions. Our theoretical results are\nsupported by numerical experiments with non-Gaussian target measures.\n",
"title": "Using Perturbed Underdamped Langevin Dynamics to Efficiently Sample from Probability Distributions"
}
| null | null | null | null | true | null |
11744
| null |
Default
| null | null |
null |
{
"abstract": " The success of automated driving deployment is highly depending on the\nability to develop an efficient and safe driving policy. The problem is well\nformulated under the framework of optimal control as a cost optimization\nproblem. Model based solutions using traditional planning are efficient, but\nrequire the knowledge of the environment model. On the other hand, model free\nsolutions suffer sample inefficiency and require too many interactions with the\nenvironment, which is infeasible in practice. Methods under the Reinforcement\nLearning framework usually require the notion of a reward function, which is\nnot available in the real world. Imitation learning helps in improving sample\nefficiency by introducing prior knowledge obtained from the demonstrated\nbehavior, on the risk of exact behavior cloning without generalizing to unseen\nenvironments. In this paper we propose a Meta learning framework, based on data\nset aggregation, to improve generalization of imitation learning algorithms.\nUnder the proposed framework, we propose MetaDAgger, a novel algorithm which\ntackles the generalization issues in traditional imitation learning. We use The\nOpen Race Car Simulator (TORCS) to test our algorithm. Results on unseen test\ntracks show significant improvement over traditional imitation learning\nalgorithms, improving the learning time and sample efficiency in the same time.\nThe results are also supported by visualization of the learnt features to prove\ngeneralization of the captured details.\n",
"title": "Meta learning Framework for Automated Driving"
}
| null | null | null | null | true | null |
11745
| null |
Default
| null | null |
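Record 11745 builds on dataset aggregation. The sketch below is the classical DAgger skeleton only, not the MetaDAgger algorithm itself: the meta-learning across environments described in the abstract is omitted, and `expert_policy`, `fit`, and `rollout` are assumed user-supplied callables.

```python
def dagger(expert_policy, fit, rollout, n_iters=5):
    """Classical DAgger skeleton: roll out the current policy, have the expert relabel
    the visited states, aggregate, and retrain a supervised learner on everything so far."""
    dataset, learner = [], None
    for _ in range(n_iters):
        policy = learner if learner is not None else expert_policy
        states = rollout(policy)                              # states visited by current policy
        dataset += [(s, expert_policy(s)) for s in states]    # expert relabels them
        learner = fit(dataset)                                # supervised learning on aggregate
    return learner
```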
null |
{
"abstract": " America's transportation infrastructure is the backbone of our economy. A\nstrong infrastructure means a strong America - an America that competes\nglobally, supports local and regional economic development, and creates jobs.\nStrategic investments in our transportation infrastructure are vital to our\nnational security, economic growth, transportation safety and our technology\nleadership. This document outlines critical needs for our transportation\ninfrastructure, identifies new technology drivers and proposes strategic\ninvestments for safe and efficient air, ground, rail and marine mobility of\npeople and goods.\n",
"title": "MOBILITY21: Strategic Investments for Transportation Infrastructure & Technology"
}
| null | null | null | null | true | null |
11746
| null |
Default
| null | null |
null |
{
"abstract": " Cross-laminated timber (CLT) is a prefabricated solid engineered wood product\nmade of at least three orthogonally bonded layers of solid-sawn lumber that are\nlaminated by gluing longitudinal and transverse layers with structural\nadhesives to form a solid panel. Previous studies have shown that the CLT\nbuildings can perform well in seismic loading and are recognized as the\nessential role of connector performance in structural design, modelling, and\nanalysis of CLT buildings. When CLT is composed of high-grade/high-density\nlayers for the outer lamellas and low-grade/low-density for the core of the\npanels, the CLT panels are herein designated as hybrid CLT panels as opposed to\nconventional CLT panels that are built using one lumber type for both outer and\ncore lamellas. This paper presents results of a testing program developed to\nestimate the cyclic performance of CLT connectors applied on hybrid CLT layups.\nTwo connectors are selected, which can be used in wall-to-floor connections.\nThese are readily available in the North American market. Characterization of\nthe performance of connectors is done in two perpendicular directions under a\nmodified CUREE cyclic loading protocol. Depending on the mode of failure, in\nsome cases, testing results indicate that when the nails or screws penetrate\nthe low-grade/low-density core lumber, a statistically significant difference\nis obtained between hybrid and conventional layups. However, in other cases,\ndue to damage in the face layer or in the connection, force-displacement\nresults for conventional and hybrid CLT layups were not statistically\nsignificant.\n",
"title": "Hysteretic behaviour of metal connectors for hybrid (high- and low-grade mixed species) cross laminated timber"
}
| null | null | null | null | true | null |
11747
| null |
Default
| null | null |
null |
{
"abstract": " Representing domain knowledge is crucial for any task. There has been a wide\nrange of techniques developed to represent this knowledge, from older logic\nbased approaches to the more recent deep learning based techniques (i.e.\nembeddings). In this paper, we discuss some of these methods, focusing on the\nrepresentational expressiveness tradeoffs that are often made. In particular,\nwe focus on the the ability of various techniques to encode `partial knowledge'\n- a key component of successful knowledge systems. We introduce and describe\nthe concepts of `ensembles of embeddings' and `aggregate embeddings' and\ndemonstrate how they allow for partial knowledge.\n",
"title": "Partial Knowledge In Embeddings"
}
| null | null | null | null | true | null |
11748
| null |
Default
| null | null |
null |
{
"abstract": " We investigate how the constraint results of inflation models are affected by\nconsidering the latest local measurement of $H_0$ in the global fit. We use the\nobservational data, including the Planck CMB full data, the BICEP2 and Keck\nArray CMB B-mode data, the BAO data, and the latest measurement of Hubble\nconstant, to constrain the $\\Lambda$CDM+$r$+$N_{\\rm eff}$ model, and the\nobtained 1$\\sigma$ and 2$\\sigma$ contours of $(n_s, r)$ are compared to the\ntheoretical predictions of selected inflationary models. We find that, in this\nfit, the scale invariance is only excluded at the 3.3$\\sigma$ level, and\n$\\Delta N_{\\rm eff}>0$ is favored at the 1.6$\\sigma$ level. The natural\ninflation model is now excluded at more than 2$\\sigma$ level; the Starobinsky\n$R^2$ model becomes only favored at around 2$\\sigma$ level; the most favored\nmodel becomes the spontaneously broken SUSY inflation model; and, the brane\ninflation model is also well consistent with the current data, in this case.\n",
"title": "Impact of the latest measurement of Hubble constant on constraining inflation models"
}
| null | null | null | null | true | null |
11749
| null |
Default
| null | null |
null |
{
"abstract": " In classical mechanics, a light particle bound by a strong elastic force just\noscillates at high frequency in the region allowed by its initial position and\nvelocity. In quantum mechanics, instead, the ground state of the particle\nbecomes completely de-localized in the limit $m \\to 0$. The harmonic oscillator\nthus ceases to be a useful microscopic physical model in the limit $m \\to 0$,\nbut its Feynman path integral has interesting singularities which make it a\nprototype of other systems exhibiting a \"quantum runaway\" from the classical\nconfigurations near the minimum of the action. The probability density of the\ncoherent runaway modes can be obtained as the solution of a Fokker-Planck\nequation associated to the condition $S=S_{min}$. This technique can be applied\nalso to other systems, notably to a dimensional reduction of the\nEinstein-Hilbert action.\n",
"title": "Ultra-light and strong: the massless harmonic oscillator and its singular path integral"
}
| null | null | null | null | true | null |
11750
| null |
Default
| null | null |
null |
{
"abstract": " We introduce a new isomorphism-invariant notion of entropy for measure\npreserving actions of arbitrary countable groups on probability spaces, which\nwe call cocycle entropy. We develop methods to show that cocycle entropy\nsatisfies many of the properties of classical amenable entropy theory, but\napplies in much greater generality to actions of non-amenable groups. One key\ningredient in our approach is a proof of a subadditive convergence principle\nwhich is valid for measure-preserving amenable equivalence relations, going\nbeyond the Ornstein-Weiss Lemma for amenable groups.\nFor a large class of countable groups, which may in fact include all of them,\nwe prove the Shannon-McMillan-Breiman pointwise convergence theorem for cocycle\nentropy in their measure-preserving actions.\nWe also compare cocycle entropy to Rokhlin entropy, and using an important\nrecent result of Seward we show that they coincide for free, ergodic actions of\nany countable group in the class. Finally, we use the example of the free group\nto demonstrate the geometric significance of the entropy equipartition property\nimplied by the Shannon-McMillan-Breiman theorem.\n",
"title": "The Shannon-McMillan-Breiman theorem beyond amenable groups"
}
| null | null | null | null | true | null |
11751
| null |
Default
| null | null |
null |
{
"abstract": " Assistive robotic devices can be used to help people with upper body\ndisabilities gaining more autonomy in their daily life. Although basic motions\nsuch as positioning and orienting an assistive robot gripper in space allow\nperformance of many tasks, it might be time consuming and tedious to perform\nmore complex tasks. To overcome these difficulties, improvements can be\nimplemented at different levels, such as mechanical design, control interfaces\nand intelligent control algorithms. In order to guide the design of solutions,\nit is important to assess the impact and potential of different innovations.\nThis paper thus presents the evaluation of three intelligent algorithms aiming\nto improve the performance of the JACO robotic arm (Kinova Robotics). The\nevaluated algorithms are 'preset position', 'fluidity filter' and 'drinking\nmode'. The algorithm evaluation was performed with 14 motorized wheelchair's\nusers and showed a statistically significant improvement of the robot's\nperformance.\n",
"title": "Assistive robotic device: evaluation of intelligent algorithms"
}
| null | null | null | null | true | null |
11752
| null |
Default
| null | null |
null |
{
"abstract": " Influence diagrams are a decision-theoretic extension of probabilistic\ngraphical models. In this paper we show how they can be used to solve the\nBrachistochrone problem. We present results of numerical experiments on this\nproblem, compare the solution provided by the influence diagram with the\noptimal solution. The R code used for the experiments is presented in the\nAppendix.\n",
"title": "Solving the Brachistochrone Problem by an Influence Diagram"
}
| null | null |
[
"Computer Science",
"Mathematics"
] | null | true | null |
11753
| null |
Validated
| null | null |
null |
{
"abstract": " We have obtained OH spectra of four transitions in the $^2\\Pi_{3/2}$ ground\nstate, at 1612, 1665, 1667, and 1720 MHz, toward 51 sightlines that were\nobserved in the Herschel project Galactic Observations of Terahertz C+. The\nobservations cover the longitude range of (32$^\\circ$, 64$^\\circ$) and\n(189$^\\circ$, 207$^\\circ$) in the northern Galactic plane. All of the diffuse\nOH emissions conform to the so-called 'Sum Rule' of the four brightness\ntemperatures, indicating optically thin emission condition for OH from diffuse\nclouds in the Galactic plane. The column densities of the HI `halos' N(HI)\nsurrounding molecular clouds increase monotonically with OH column density,\nN(OH), until saturating when N(HI)=1.0 x 10$^{21}$ cm$^{-2}$ and N (OH) $\\geq\n4.5\\times 10^{15}$ cm$^{-2}$, indicating the presence of molecular gas that\ncannot be traced by HI. Such a linear correlation, albeit weak, is suggestive\nof HI halos' contribution to the UV shielding required for molecular formation.\nAbout 18% of OH clouds have no associated CO emission (CO-dark) at a\nsensitivity of 0.07 K but are associated with C$^+$ emission. A weak\ncorrelation exists between C$^+$ intensity and OH column density for CO-dark\nmolecular clouds. These results imply that OH seems to be a better tracer of\nmolecular gas than CO in diffuse molecular regions.\n",
"title": "OH Survey along Sightlines of Galactic Observations of Terahertz C+"
}
| null | null | null | null | true | null |
11754
| null |
Default
| null | null |
null |
{
"abstract": " Graphene nanoribbons with armchair edges are studied for externally enhanced,\nbut realistic parameter values: enhanced Rashba spin-orbit coupling due to\nproximity to a transition metal dichalcogenide like WS$_{2}$, and enhanced\nZeeman field due to exchange coupling with a magnetic insulator like EuS under\napplied magnetic field. The presence of s--wave superconductivity, induced\neither by proximity or by decoration with alkali metal atoms like Ca or Li,\nleads to a topological superconducting phase with Majorana end modes. The\ntopological phase is highly sensitive to the application of uniaxial strain,\nwith a transition to the trivial state above a critical strain well below\n$0.1\\%$. This sensitivity allows for real space manipulation of Majorana\nfermions by applying non-uniform strain profiles. Similar manipulation is also\npossible by applying inhomogeneous Zeeman field or chemical potential.\n",
"title": "Strain manipulation of Majorana fermions in graphene armchair nanoribbons"
}
| null | null | null | null | true | null |
11755
| null |
Default
| null | null |
null |
{
"abstract": " Given the important role that the galaxy bispectrum has recently acquired in\ncosmology and the scale and precision of forthcoming galaxy clustering\nobservations, it is timely to derive the full expression of the large-scale\nbispectrum going beyond approximated treatments which neglect integrated terms\nor higher-order bias terms or use the Limber approximation. On cosmological\nscales, relativistic effects that arise from observing on the past light-cone\nalter the observed galaxy number counts, therefore leaving their imprints on\nN-point correlators at all orders. In this paper we compute for the first time\nthe bispectrum including all general relativistic, local and integrated,\neffects at second order, the tracers' bias at second order, geometric effects\nas well as the primordial non-Gaussianity contribution. This is timely\nconsidering that future surveys will probe scales comparable to the horizon\nwhere approximations widely used currently may not hold; neglecting these\neffects may introduce biases in estimation of cosmological parameters as well\nas primordial non-Gaussianity.\n",
"title": "Relativistic wide-angle galaxy bispectrum on the light-cone"
}
| null | null | null | null | true | null |
11756
| null |
Default
| null | null |
null |
{
"abstract": " This article is a brief introduction to the rapidly evolving field of\nmany-body localization. Rather than giving an in-depth review of the subject,\nour aspiration here is simply to introduce the problem and its general context,\noutlining a few directions where notable progress has been achieved in recent\nyears. We hope that this will prepare the readers for the more specialized\narticles appearing in the forthcoming dedicated volume of Annalen der Physik,\nwhere these developments are discussed in more detail.\n",
"title": "Recent progress in many-body localization"
}
| null | null | null | null | true | null |
11757
| null |
Default
| null | null |
null |
{
"abstract": " We measure the field dependence of spin glass free energy barriers in a thin\namorphous Ge:Mn film through the time dependence of the magnetization. After\nthe correlation length $\\xi(t, T)$ has reached the film thickness $\\mathcal\n{L}=155$~\\AA~so that the dynamics are activated, we change the initial magnetic\nfield by $\\delta H$. In agreement with the scaling behavior exhibited in a\ncompanion Letter [Janus collaboration: M. Baity-Jesi {\\it et al.}, Phys. Rev.\nLett. {\\bf 118}, 157202 (2017)], we find the activation energy is increased\nwhen $\\delta H < 0$. The change is proportional to $(\\delta H)^2$ with the\naddition of a small $(\\delta H)^4$ term. The magnitude of the change of the\nspin glass free energy barriers is in near quantitative agreement with the\nprediction of a barrier model.\n",
"title": "Magnetic Field Dependence of Spin Glass Free Energy Barriers"
}
| null | null | null | null | true | null |
11758
| null |
Default
| null | null |
null |
{
"abstract": " Two channels are said to be equivalent if they are degraded from each other.\nThe space of equivalent channels with input alphabet $X$ and output alphabet\n$Y$ can be naturally endowed with the quotient of the Euclidean topology by the\nequivalence relation. A topology on the space of equivalent channels with fixed\ninput alphabet $X$ and arbitrary but finite output alphabet is said to be\nnatural if and only if it induces the quotient topology on the subspaces of\nequivalent channels sharing the same output alphabet. We show that every\nnatural topology is $\\sigma$-compact, separable and path-connected. On the\nother hand, if $|X|\\geq 2$, a Hausdorff natural topology is not Baire and it is\nnot locally compact anywhere. This implies that no natural topology can be\ncompletely metrized if $|X|\\geq 2$. The finest natural topology, which we call\nthe strong topology, is shown to be compactly generated, sequential and $T_4$.\nOn the other hand, the strong topology is not first-countable anywhere, hence\nit is not metrizable. We show that in the strong topology, a subspace is\ncompact if and only if it is rank-bounded and strongly-closed. We introduce a\nmetric distance on the space of equivalent channels which compares the noise\nlevels between channels. The induced metric topology, which we call the\nnoisiness topology, is shown to be natural. We also study topologies that are\ninherited from the space of meta-probability measures by identifying channels\nwith their Blackwell measures. We show that the weak-* topology is exactly the\nsame as the noisiness topology and hence it is natural. We prove that if\n$|X|\\geq 2$, the total variation topology is not natural nor Baire, hence it is\nnot completely metrizable. Moreover, it is not locally compact anywhere.\nFinally, we show that the Borel $\\sigma$-algebra is the same for all Hausdorff\nnatural topologies.\n",
"title": "Topological Structures on DMC spaces"
}
| null | null | null | null | true | null |
11759
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents transient numerical simulations of hydraulic systems in\nengineering applications using the spectral element method (SEM). Along with a\ndetailed description of the underlying numerical method, it is shown that the\nSEM yields highly accurate numerical approximations at modest computational\ncosts, which is in particular useful for optimization-based control\napplications. In order to enable fast explicit time stepping methods, the\nboundary conditions are imposed weakly using a numerically stable upwind\ndiscretization. The benefits of the SEM in the area of hydraulic system\nsimulations are demonstrated in various examples including several simulations\nof strong water hammer effects. Due to its exceptional convergence\ncharacteristics, the SEM is particularly well suited to be used in real-time\ncapable control applications. As an example, it is shown that the time\nevolution of pressure waves in a large scale pumped-storage power plant can be\nwell approximated using a low-dimensional system representation utilizing a\nminimum number of dynamical states.\n",
"title": "The spectral element method as an efficient tool for transient simulations of hydraulic systems"
}
| null | null | null | null | true | null |
11760
| null |
Default
| null | null |
null |
{
"abstract": " Charge modulations are considered as a leading competitor of high-temperature\nsuperconductivity in the underdoped cuprates, and their relationship to Fermi\nsurface reconstructions and to the pseudogap state is an important subject of\ncurrent research. Overdoped cuprates, on the other hand, are widely regarded as\nconventional Fermi liquids without collective electronic order. For the\noverdoped (Bi,Pb)2.12Sr1.88CuO6+{\\delta} (Bi2201) high-temperature\nsuperconductor, here we report resonant x-ray scattering measurements revealing\nincommensurate charge order reflections, with correlation lengths of 40-60\nlattice units, that persist up to at least 250K. Charge order is markedly more\nrobust in the overdoped than underdoped regime but the incommensurate wave\nvectors follow a common trend; moreover it coexists with a single,\nunreconstructed Fermi surface, without pseudogap or nesting features, as\ndetermined from angle-resolved photoemission spectroscopy. This re-entrant\ncharge order is reproduced by model calculations that consider a strong van\nHove singularity within a Fermi liquid framework.\n",
"title": "Re-entrant charge order in overdoped (Bi,Pb)$_{2.12}$Sr$_{1.88}$CuO$_{6+δ}$ outside the pseudogap regime"
}
| null | null |
[
"Physics"
] | null | true | null |
11761
| null |
Validated
| null | null |
null |
{
"abstract": " This paper proposes a new approach to construct high quality space-filling\nsample designs. First, we propose a novel technique to quantify the\nspace-filling property and optimally trade-off uniformity and randomness in\nsample designs in arbitrary dimensions. Second, we connect the proposed metric\n(defined in the spatial domain) to the objective measure of the design\nperformance (defined in the spectral domain). This connection serves as an\nanalytic framework for evaluating the qualitative properties of space-filling\ndesigns in general. Using the theoretical insights provided by this\nspatial-spectral analysis, we derive the notion of optimal space-filling\ndesigns, which we refer to as space-filling spectral designs. Third, we propose\nan efficient estimator to evaluate the space-filling properties of sample\ndesigns in arbitrary dimensions and use it to develop an optimization framework\nto generate high quality space-filling designs. Finally, we carry out a\ndetailed performance comparison on two different applications in 2 to 6\ndimensions: a) image reconstruction and b) surrogate modeling on several\nbenchmark optimization functions and an inertial confinement fusion (ICF)\nsimulation code. We demonstrate that the propose spectral designs significantly\noutperform existing approaches especially in high dimensions.\n",
"title": "A Spectral Approach for the Design of Experiments: Design, Analysis and Algorithms"
}
| null | null | null | null | true | null |
11762
| null |
Default
| null | null |
null |
{
"abstract": " This paper studies the problem of detection and tracking of general objects\nwith long-term dynamics, observed by a mobile robot moving in a large\nenvironment. A key problem is that due to the environment scale, it can only\nobserve a subset of the objects at any given time. Since some time passes\nbetween observations of objects in different places, the objects might be moved\nwhen the robot is not there. We propose a model for this movement in which the\nobjects typically only move locally, but with some small probability they jump\nlonger distances, through what we call global motion. For filtering, we\ndecompose the posterior over local and global movements into two linked\nprocesses. The posterior over the global movements and measurement associations\nis sampled, while we track the local movement analytically using Kalman\nfilters. This novel filter is evaluated on point cloud data gathered\nautonomously by a mobile robot over an extended period of time. We show that\ntracking jumping objects is feasible, and that the proposed probabilistic\ntreatment outperforms previous methods when applied to real world data. The key\nto efficient probabilistic tracking in this scenario is focused sampling of the\nobject posteriors.\n",
"title": "Detection and Tracking of General Movable Objects in Large 3D Maps"
}
| null | null | null | null | true | null |
11763
| null |
Default
| null | null |
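Record 11763 combines sampled global object jumps with analytic Kalman tracking of local motion. The snippet below sketches only the local-motion part for a single object in 2-D, with a random-walk motion model and an identity measurement matrix; the sampling over global movements and measurement associations described in the abstract is not shown, and the noise values are placeholders.

```python
import numpy as np

def kalman_local_track(z_seq, q=0.05, r=0.1):
    """Kalman filter for a single object's 2-D position under a random-walk model.

    x_k = x_{k-1} + w, w ~ N(0, q I)   (slow local drift)
    z_k = x_k + v,     v ~ N(0, r I)   (e.g. a point-cloud centroid measurement)
    """
    x = np.asarray(z_seq[0], dtype=float)       # initialize at the first measurement
    P = np.eye(2) * r
    Q, R = np.eye(2) * q, np.eye(2) * r
    estimates = [x.copy()]
    for z in z_seq[1:]:
        P = P + Q                                            # predict
        K = P @ np.linalg.inv(P + R)                         # Kalman gain (H = I)
        x = x + K @ (np.asarray(z, dtype=float) - x)         # update
        P = (np.eye(2) - K) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Example: noisy observations of an object drifting slowly to the right.
rng = np.random.default_rng(1)
truth = np.cumsum(np.full((50, 2), [0.05, 0.0]), axis=0)
observed = truth + 0.3 * rng.standard_normal(truth.shape)
print(kalman_local_track(observed)[-1])        # filtered final position estimate
```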
null |
{
"abstract": " We demonstrate a new approach to calibrating the spectral-spatial response of\na wide-field spectrograph using a fibre etalon comb. Conventional wide-field\ninstruments employed on front-line telescopes are mapped with a grid of\ndiffraction-limited holes cut into a focal plane mask. The aberrated grid\npattern in the image plane typically reveals n-symmetric (e.g. pincushion)\ndistortion patterns over the field arising from the optical train. This\napproach is impractical in the presence of a dispersing element because the\ndiffraction-limited spots in the focal plane are imaged as an array of\noverlapping spectra. Instead we propose a compact solution that builds on\nrecent developments in fibre-based Fabry-Perot etalons. We introduce a novel\napproach to near-field illumination that exploits a 25cm commercial telescope\nand the propagation of skew rays in a multimode fibre. The mapping of the\noptical transfer function across the full field is represented accurately\n(<0.5% rms residual) by an orthonormal set of Chebyshev moments. Thus we are\nable to reconstruct the full 4Kx4K CCD image of the dispersed output from the\noptical fibres using this mapping, as we demonstrate. Our method removes one of\nthe largest sources of systematic error in multi-object spectroscopy.\n",
"title": "Mapping the aberrations of a wide-field spectrograph using a photonic comb"
}
| null | null |
[
"Physics"
] | null | true | null |
11764
| null |
Validated
| null | null |
null |
{
"abstract": " The field of discrete event simulation and optimization techniques motivates\nresearchers to adjust classic ranking and selection (R&S) procedures to the\nsettings where the number of populations is large. We use insights from extreme\nvalue theory in order to reveal the asymptotic properties of R&S procedures.\nNamely, we generalize the asymptotic result of Robbins and Siegmund regarding\nselection from independent Gaussian populations with known constant variance by\ntheir means to the case of selecting a subset of varying size out of a given\nset of populations. In addition, we revisit the problem of selecting the\npopulation with the highest mean among independent Gaussian populations with\nunknown and possibly different variances. Particularly, we derive the relative\nasymptotic efficiency of Dudewicz and Dalal's and Rinott's procedures, showing\nthat the former can be asymptotically superior by a multiplicative factor which\nis larger than one, but this factor may be reduced by proper choice of\nparameters. We also use our asymptotic results to suggest that the sample size\nin the first stage of the two procedures should be logarithmic in the number of\npopulations.\n",
"title": "On The Asymptotic Efficiency of Selection Procedures for Independent Gaussian Populations"
}
| null | null | null | null | true | null |
11765
| null |
Default
| null | null |
null |
{
"abstract": " We present Deeply Supervised Object Detector (DSOD), a framework that can\nlearn object detectors from scratch. State-of-the-art object objectors rely\nheavily on the off-the-shelf networks pre-trained on large-scale classification\ndatasets like ImageNet, which incurs learning bias due to the difference on\nboth the loss functions and the category distributions between classification\nand detection tasks. Model fine-tuning for the detection task could alleviate\nthis bias to some extent but not fundamentally. Besides, transferring\npre-trained models from classification to detection between discrepant domains\nis even more difficult (e.g. RGB to depth images). A better solution to tackle\nthese two critical problems is to train object detectors from scratch, which\nmotivates our proposed DSOD. Previous efforts in this direction mostly failed\ndue to much more complicated loss functions and limited training data in object\ndetection. In DSOD, we contribute a set of design principles for training\nobject detectors from scratch. One of the key findings is that deep\nsupervision, enabled by dense layer-wise connections, plays a critical role in\nlearning a good detector. Combining with several other principles, we develop\nDSOD following the single-shot detection (SSD) framework. Experiments on PASCAL\nVOC 2007, 2012 and MS COCO datasets demonstrate that DSOD can achieve better\nresults than the state-of-the-art solutions with much more compact models. For\ninstance, DSOD outperforms SSD on all three benchmarks with real-time detection\nspeed, while requires only 1/2 parameters to SSD and 1/10 parameters to Faster\nRCNN. Our code and models are available at: this https URL .\n",
"title": "DSOD: Learning Deeply Supervised Object Detectors from Scratch"
}
| null | null | null | null | true | null |
11766
| null |
Default
| null | null |
null |
{
"abstract": " Data enables Non-Governmental Organisations (NGOs) to quantify the impact of\ntheir initiatives to themselves and to others. The increasing amount of data\nstored today can be seen as a direct consequence of the falling costs in\nobtaining it. Cheap data acquisition harnesses existing communications networks\nto collect information. Globally, more people are connected by the mobile phone\nnetwork than by the Internet. We worked with Vita, a development organisation\nimplementing green initiatives to develop an SMS-based data collection\napplication to collect social data surrounding the impacts of their\ninitiatives. We present our system design and lessons learned from\non-the-ground testing.\n",
"title": "Data Capture & Analysis to Assess Impact of Carbon Credit Schemes"
}
| null | null | null | null | true | null |
11767
| null |
Default
| null | null |
null |
{
"abstract": " The past decade has seen an increasing body of literature devoted to the\nestimation of causal effects in network-dependent data. However, the validity\nof many classical statistical methods in such data is often questioned. There\nis an emerging need for objective and practical ways to assess which causal\nmethodologies might be applicable and valid in network-dependent data. This\npaper describes a set of tools implemented in the simcausal R package that\nallow simulating data based on user-specified structural equation model for\nconnected units. Specification and simulation of counterfactual data is\nimplemented for static, dynamic and stochastic interventions. A new interface\naims to simplify the specification of network-based functional relationships\nbetween connected units. A set of examples illustrates how these simulations\nmay be applied to evaluation of different statistical methods for estimation of\ncausal effects in network-dependent data.\n",
"title": "Conducting Simulations in Causal Inference with Networks-Based Structural Equation Models"
}
| null | null | null | null | true | null |
11768
| null |
Default
| null | null |
null |
{
"abstract": " Necessary and sufficient conditions for finite semihypergroups to be built\nfrom groups of the same order are established\n",
"title": "Finite Semihypergroups Built From Groups"
}
| null | null | null | null | true | null |
11769
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, we introduce the notion of Auslander modules, inspired from\nAuslander's zero-divisor conjecture (theorem) and give some interesting results\nfor these modules. We also investigate torsion-free modules.\n",
"title": "Auslander Modules"
}
| null | null |
[
"Mathematics"
] | null | true | null |
11770
| null |
Validated
| null | null |
null |
{
"abstract": " Current formal approaches have been successfully used to find design flaws in\nmany security protocols. However, it is still challenging to automatically\nanalyze protocols due to their large or infinite state spaces. In this paper,\nwe propose a novel framework that can automatically verifying security\nprotocols without any human intervention. Experimental results show that\nSmartVerif automatically verifies security protocols that cannot be\nautomatically verified by existing approaches. The case studies also validate\nthe effectiveness of our dynamic strategy.\n",
"title": "Verifying Security Protocols using Dynamic Strategies"
}
| null | null | null | null | true | null |
11771
| null |
Default
| null | null |
null |
{
"abstract": " Compared with conventional accelerators, laser plasma accelerators can\ngenerate high energy ions at a greatly reduced scale, due to their TV/m\nacceleration gradient. A compact laser plasma accelerator (CLAPA) has been\nbuilt at the Institute of Heavy Ion Physics at Peking University. It will be\nused for applied research like biological irradiation, astrophysics\nsimulations, etc. A beamline system with multiple quadrupoles and an analyzing\nmagnet for laser-accelerated ions is proposed here. Since laser-accelerated ion\nbeams have broad energy spectra and large angular divergence, the parameters\n(beam waist position in the Y direction, beam line layout, drift distance,\nmagnet angles etc.) of the beamline system are carefully designed and optimised\nto obtain a radially symmetric proton distribution at the irradiation platform.\nRequirements of energy selection and differences in focusing or defocusing in\napplication systems greatly influence the evolution of proton distributions.\nWith optimal parameters, radially symmetric proton distributions can be\nachieved and protons with different energy spread within 5% have similar\ntransverse areas at the experiment target.\n",
"title": "Distribution uniformity of laser-accelerated proton beams"
}
| null | null | null | null | true | null |
11772
| null |
Default
| null | null |
null |
{
"abstract": " We study the problem of detecting human-object interactions (HOI) in static\nimages, defined as predicting a human and an object bounding box with an\ninteraction class label that connects them. HOI detection is a fundamental\nproblem in computer vision as it provides semantic information about the\ninteractions among the detected objects. We introduce HICO-DET, a new large\nbenchmark for HOI detection, by augmenting the current HICO classification\nbenchmark with instance annotations. To solve the task, we propose Human-Object\nRegion-based Convolutional Neural Networks (HO-RCNN). At the core of our\nHO-RCNN is the Interaction Pattern, a novel DNN input that characterizes the\nspatial relations between two bounding boxes. Experiments on HICO-DET\ndemonstrate that our HO-RCNN, by exploiting human-object spatial relations\nthrough Interaction Patterns, significantly improves the performance of HOI\ndetection over baseline approaches.\n",
"title": "Learning to Detect Human-Object Interactions"
}
| null | null | null | null | true | null |
11773
| null |
Default
| null | null |
null |
{
"abstract": " This paper presents a model for a dynamical system where particles dominate\nedges in a complex network. The proposed dynamical system is then extended to\nan application on the problem of community detection and data clustering. In\nthe case of the data clustering problem, 6 different techniques were simulated\non 10 different datasets in order to compare with the proposed technique. The\nresults show that the proposed algorithm performs well when prior knowledge of\nthe number of clusters is known to the algorithm.\n",
"title": "Data clustering with edge domination in complex networks"
}
| null | null | null | null | true | null |
11774
| null |
Default
| null | null |
null |
{
"abstract": " In this paper, scalable Whole Slide Imaging (sWSI), a novel high-throughput,\ncost-effective and robust whole slide imaging system on both Android and iOS\nplatforms is introduced and analyzed. With sWSI, most mainstream smartphone\nconnected to a optical eyepiece of any manually controlled microscope can be\nautomatically controlled to capture sequences of mega-pixel fields of views\nthat are synthesized into giga-pixel virtual slides. Remote servers carry out\nthe majority of computation asynchronously to support clients running at\nsatisfying frame rates without sacrificing image quality nor robustness. A\ntypical 15x15mm sample can be digitized in 30 seconds with 4X or in 3 minutes\nwith 10X object magnification, costing under $1. The virtual slide quality is\nconsidered comparable to existing high-end scanners thus satisfying for\nclinical usage by surveyed pathologies. The scan procedure with features such\nas supporting magnification up to 100x, recoding z-stacks,\nspecimen-type-neutral and giving real-time feedback, is deemed\nwork-flow-friendly and reliable.\n",
"title": "sWSI: A Low-cost and Commercial-quality Whole Slide Imaging System on Android and iOS Smartphones"
}
| null | null | null | null | true | null |
11775
| null |
Default
| null | null |
null |
{
"abstract": " Motivation: Understanding functions of proteins in specific human tissues is\nessential for insights into disease diagnostics and therapeutics, yet\nprediction of tissue-specific cellular function remains a critical challenge\nfor biomedicine.\nResults: Here we present OhmNet, a hierarchy-aware unsupervised node feature\nlearning approach for multi-layer networks. We build a multi-layer network,\nwhere each layer represents molecular interactions in a different human tissue.\nOhmNet then automatically learns a mapping of proteins, represented as nodes,\nto a neural embedding based low-dimensional space of features. OhmNet\nencourages sharing of similar features among proteins with similar network\nneighborhoods and among proteins activated in similar tissues. The algorithm\ngeneralizes prior work, which generally ignores relationships between tissues,\nby modeling tissue organization with a rich multiscale tissue hierarchy. We use\nOhmNet to study multicellular function in a multi-layer protein interaction\nnetwork of 107 human tissues. In 48 tissues with known tissue-specific cellular\nfunctions, OhmNet provides more accurate predictions of cellular function than\nalternative approaches, and also generates more accurate hypotheses about\ntissue-specific protein actions. We show that taking into account the tissue\nhierarchy leads to improved predictive power. Remarkably, we also demonstrate\nthat it is possible to leverage the tissue hierarchy in order to effectively\ntransfer cellular functions to a functionally uncharacterized tissue. Overall,\nOhmNet moves from flat networks to multiscale models able to predict a range of\nphenotypes spanning cellular subsystems\n",
"title": "Predicting multicellular function through multi-layer tissue networks"
}
| null | null | null | null | true | null |
11776
| null |
Default
| null | null |
null |
{
"abstract": " We investigate the prospects for micron-scale acoustic wave components and\ncircuits on chip in solid planar structures that do not require suspension. We\nleverage evanescent guiding of acoustic waves by high slowness contrast\nmaterials readily available in silicon complementary metal-oxide semiconductor\n(CMOS) processes. High slowness contrast provides strong confinement of GHz\nfrequency acoustic fields in micron-scale structures. We address the\nfundamental implications of intrinsic material and radiation losses on\noperating frequency, bandwidth, device size and as a result practicality of\nmulti-element microphononic circuits based on solid embedded waveguides. We\nshow that a family of acoustic components based on evanescently guided acoustic\nwaves, including waveguide bends, evanescent couplers, Y-splitters, and\nacoustic-wave microring resonators, can be realized in compact, micron-scale\nstructures, and provide basic scaling and performance arguments for these\ncomponents based on material properties and simulations. We further find that\nwave propagation losses are expected to permit high quality factor (Q),\nnarrowband resonators and propagation lengths allowing delay lines and the\ncoupling or cascading of multiple components to form functional circuits, of\npotential utility in guided acoustic signal processing on chip. We also address\nand simulate bends and radiation loss, providing insight into routing and\nresonators. Such circuits could be monolithically integrated with electronic\nand photonic circuits on a single chip with expanded capabilities.\n",
"title": "Toward Microphononic Circuits on Chip: An Evaluation of Components based on High-Contrast Evanescent Confinement of Acoustic Waves"
}
| null | null | null | null | true | null |
11777
| null |
Default
| null | null |
null |
{
"abstract": " This paper examines the problem of adaptive influence maximization in social\nnetworks. As adaptive decision making is a time-critical task, a realistic\nfeedback model has been considered, called myopic. In this direction, we\npropose the myopic adaptive greedy policy that is guaranteed to provide a (1 -\n1/e)-approximation of the optimal policy under a variant of the independent\ncascade diffusion model. This strategy maximizes an alternative utility\nfunction that has been proven to be adaptive monotone and adaptive submodular.\nThe proposed utility function considers the cumulative number of active nodes\nthrough the time, instead of the total number of the active nodes at the end of\nthe diffusion. Our empirical analysis on real-world social networks reveals the\nbenefits of the proposed myopic strategy, validating our theoretical results.\n",
"title": "Adaptive Submodular Influence Maximization with Myopic Feedback"
}
| null | null |
[
"Computer Science"
] | null | true | null |
11778
| null |
Validated
| null | null |
null |
{
"abstract": " This work presents an evaluation study using a force feedback evaluation\nframework for a novel direct needle force volume rendering concept in the\ncontext of liver puncture simulation. PTC/PTCD puncture interventions targeting\nthe bile ducts have been selected to illustrate this concept. The haptic\nalgorithms of the simulator system are based on (1) partially segmented patient\nimage data and (2) a non-linear spring model effective at organ borders. The\nprimary aim is to quantitatively evaluate force errors caused by our patient\nmodeling approach, in comparison to haptic force output obtained from using\ngold-standard, completely manually-segmented data. The evaluation of the force\nalgorithms compared to a force output from fully manually segmented\ngold-standard patient models, yields a low mean of 0.12 N root mean squared\nforce error and up to 1.6 N for systematic maximum absolute errors. Force\nerrors were evaluated on 31,222 preplanned test paths from 10 patients. Only\ntwelve percent of the emitted forces along these paths were affected by errors.\nThis is the first study evaluating haptic algorithms with deformable virtual\npatients in silico. We prove haptic rendering plausibility on a very high\nnumber of test paths. Important errors are below just noticeable differences\nfor the hand-arm system.\n",
"title": "Evaluation of Direct Haptic 4D Volume Rendering of Partially Segmented Data for Liver Puncture Simulation"
}
| null | null | null | null | true | null |
11779
| null |
Default
| null | null |
null |
{
"abstract": " Despite being popularly referred to as the ultimate solution for all problems\nof our current electric power system, smart grid is still a growing and\nunstable concept. It is usually considered as a set of advanced features\npowered by promising technological solutions. In this paper, we describe smart\ngrid as a socio-technical transition and illustrate the evolutionary path on\nwhich a smart grid can be realized. Through this conceptual lens, we reveal the\nrole of big data, and how it can fuel the organic growth of smart grid. We also\nprovide a rough estimate of how much data will be potentially generated from\ndifferent data sources, which helps clarify the big data challenges during the\nevolutionary process.\n",
"title": "The Role of Big Data on Smart Grid Transition"
}
| null | null | null | null | true | null |
11780
| null |
Default
| null | null |
null |
{
"abstract": " Deep neural networks are known to be difficult to train due to the\ninstability of back-propagation. A deep \\emph{residual network} (ResNet) with\nidentity loops remedies this by stabilizing gradient computations. We prove a\nboosting theory for the ResNet architecture. We construct $T$ weak module\nclassifiers, each contains two of the $T$ layers, such that the combined strong\nlearner is a ResNet. Therefore, we introduce an alternative Deep ResNet\ntraining algorithm, \\emph{BoostResNet}, which is particularly suitable in\nnon-differentiable architectures. Our proposed algorithm merely requires a\nsequential training of $T$ \"shallow ResNets\" which are inexpensive. We prove\nthat the training error decays exponentially with the depth $T$ if the\n\\emph{weak module classifiers} that we train perform slightly better than some\nweak baseline. In other words, we propose a weak learning condition and prove a\nboosting theory for ResNet under the weak learning condition. Our results apply\nto general multi-class ResNets. A generalization error bound based on margin\ntheory is proved and suggests ResNet's resistant to overfitting under network\nwith $l_1$ norm bounded weights.\n",
"title": "Learning Deep ResNet Blocks Sequentially using Boosting Theory"
}
| null | null | null | null | true | null |
11781
| null |
Default
| null | null |
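Record 11781 describes training a ResNet block by block. The PyTorch sketch below shows only that sequential, layer-by-layer skeleton; the boosting weights, telescoping weak-module classifiers, and guarantees from the paper are omitted, and `data` is assumed to be an iterable of `(x, y)` tensor batches.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One shallow module: identity skip plus a small two-layer transformation."""
    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return x + self.f(x)

def train_block(block, head, frozen, data, epochs=3, lr=1e-3):
    """Train only the newest block (plus a linear head) on top of frozen earlier blocks."""
    opt = torch.optim.Adam(list(block.parameters()) + list(head.parameters()), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in data:
            with torch.no_grad():
                h = frozen(x)                 # representation from earlier, frozen blocks
            loss = loss_fn(head(block(h)), y)
            opt.zero_grad()
            loss.backward()
            opt.step()

def sequential_resnet(T, dim, n_classes, data):
    """Grow a ResNet one residual block at a time, training each block in isolation."""
    blocks, head = [], None
    for _ in range(T):
        frozen = nn.Sequential(*blocks) if blocks else nn.Identity()
        block, head = ResidualBlock(dim), nn.Linear(dim, n_classes)
        train_block(block, head, frozen, data)
        blocks.append(block)                  # freeze it and move on to the next block
    return nn.Sequential(*blocks), head
```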
null |
{
"abstract": " The main aim of this paper is to extend one of the main results of Iwaniec\nand Onninen (Arch. Ration. Mech. Anal., 194: 927-986, 2009). We prove that, the\nso called total energy functional defined on the class of radial streachings\nbetween annuli attains its minimum on a total energy diffeomorphism between\nannuli. This involves a subtle analysis of some special ODE.\n",
"title": "Total energy of radial mappings"
}
| null | null | null | null | true | null |
11782
| null |
Default
| null | null |
null |
{
"abstract": " Logarithmic score and information divergence appear in information theory,\nstatistics, statistical mechanics, and portfolio theory. We demonstrate that\nall these topics involve some kind of optimization that leads directly to\nregret functions and such regret functions are often given by a Bregman\ndivergence. If the regret function also fulfills a sufficiency condition it\nmust be proportional to information divergence. We will demonstrate that\nsufficiency is equivalent to the apparently weaker notion of locality and it is\nalso equivalent to the apparently stronger notion of monotonicity. These\nsufficiency conditions have quite different relevance in the different areas of\napplication, and often they are not fulfilled. Therefore sufficiency conditions\ncan be used to explain when results from one area can be transferred directly\nto another and when one will experience differences.\n",
"title": "Divergence and Sufficiency for Convex Optimization"
}
| null | null |
[
"Computer Science",
"Physics",
"Mathematics"
] | null | true | null |
11783
| null |
Validated
| null | null |
null |
{
"abstract": " For each integer $n$ we present an explicit formulation of a compact linear\nprogram, with $O(n^3)$ variables and constraints, which determines the\nsatisfiability of any 2SAT formula with $n$ boolean variables by a single\nlinear optimization. This contrasts with the fact that the natural polytope for\nthis problem, formed from the convex hull of all satisfiable formulas and their\nsatisfying assignments, has superpolynomial extension complexity. Our\nformulation is based on multicommodity flows. We also discuss connections of\nthese results to the stable matching problem.\n",
"title": "Compact linear programs for 2SAT"
}
| null | null | null | null | true | null |
11784
| null |
Default
| null | null |
null |
{
"abstract": " While online services emerge in all areas of life, the voting procedure in\nmany democracies remains paper-based as the security of current online voting\ntechnology is highly disputed. We address the issue of trustworthy online\nvoting protocols and recall therefore their security concepts with its trust\nassumptions. Inspired by the Bitcoin protocol, the prospects of distributed\nonline voting protocols are analysed. No trusted authority is assumed to ensure\nballot secrecy. Further, the integrity of the voting is enforced by all voters\nthemselves and without a weakest link, the protocol becomes more robust. We\nintroduce a taxonomy of notions of distribution in online voting protocols that\nwe apply on selected online voting protocols. Accordingly, blockchain-based\nprotocols seem to be promising for online voting due to their similarity with\npaper-based protocols.\n",
"title": "Distributed Protocols at the Rescue for Trustworthy Online Voting"
}
| null | null | null | null | true | null |
11785
| null |
Default
| null | null |
null |
{
"abstract": " This paper, the third in a series, completes our description of all (radial)\nsolutions on C* of the tt*-Toda equations, using a combination of methods from\np.d.e., isomonodromic deformations (Riemann-Hilbert method), and loop groups.\nWe place these global solutions into the broader context of solutions which are\nsmooth near 0. For such solutions, we compute explicitly the Stokes data and\nconnection matrix of the associated meromorphic system, in the resonant cases\nas well as the non-resonant case. This allows us to give a complete picture of\nthe monodromy data of the global solutions.\n",
"title": "Isomonodromy aspects of the tt* equations of Cecotti and Vafa III. Iwasawa factorization and asymptotics"
}
| null | null | null | null | true | null |
11786
| null |
Default
| null | null |
null |
{
"abstract": " Low-profile patterned plasmonic surfaces are synergized with a broad class of\nsilicon microstructures to greatly enhance near-field nanoscale imaging,\nsensing, and energy harvesting coupled with far-field free-space detection.\nThis concept has a clear impact on several key areas of interest for the MEMS\ncommunity, including but not limited to ultra-compact microsystems for\nsensitive detection of small number of target molecules, and surface devices\nfor optical data storage, micro-imaging and displaying. In this paper, we\nreview the current state-of-the-art in plasmonic theory as well as derive\ndesign guidance for plasmonic integration with microsystems, fabrication\ntechniques, and selected applications in biosensing, including refractive-index\nbased label-free biosensing, plasmonic integrated lab-on-chip systems,\nplasmonic near-field scanning optical microscopy and plasmonics on-chip systems\nfor cellular imaging. This paradigm enables low-profile conformal surfaces on\nmicrodevices, rather than bulk material or coatings, which provide clear\nadvantages for physical, chemical and biological-related sensing, imaging, and\nlight harvesting, in addition to easier realization, enhanced flexibility, and\ntunability.\n",
"title": "Decorative Plasmonic Surfaces"
}
| null | null | null | null | true | null |
11787
| null |
Default
| null | null |
null |
{
"abstract": " Hamiltonian dynamics has been applied to study the slip-stacking dynamics.\nThe canonical-perturbation method is employed to obtain the second-harmonic\ncorrection term in the slip-stacking Hamiltonian. The Hamiltonian approach\nprovides a clear optimal method for choosing the slip-stacking parameter and\nimproving stacking efficiency. The dynamics are applied specifically to the\nFermilab Booster-Recycler complex. The dynamics can also be applied to other\naccelerator complexes.\n",
"title": "Hamiltonian approach to slip-stacking dynamics"
}
| null | null | null | null | true | null |
11788
| null |
Default
| null | null |
null |
{
"abstract": " Intense, pulsed ion beams locally heat materials and deliver dense electronic\nexcitations that can induce materials modifications and phase transitions.\nMaterials properties can potentially be stabilized by rapid quenching. Pulsed\nion beams with (sub-) ns pulse lengths have recently become available for\nmaterials processing. Here, we optimize mask geometries for local modification\nof materials by intense ion pulses. The goal is to rapidly excite targets\nvolumetrically to the point where a phase transition or local lattice\nreconstruction is induced followed by rapid cooling that stabilizes desired\nmaterials properties fast enough before the target is altered or damaged by e.\ng. hydrodynamic expansion. We performed HYDRA simulations that calculate peak\ntemperatures for a series of excitation conditions and cooling rates of silicon\ntargets with micro-structured masks and compare these to a simple analytical\nmodel. The model gives scaling laws that can guide the design of targets over a\nwide range of pulsed ion beam parameters.\n",
"title": "Materials processing with intense pulsed ion beams and masked targets"
}
| null | null | null | null | true | null |
11789
| null |
Default
| null | null |
null |
{
"abstract": " We undertake a systematic comparison between implied volatility, as\nrepresented by VIX (new methodology) and VXO (old methodology), and realized\nvolatility. We compare visually and statistically distributions of realized and\nimplied variance (volatility squared) and study the distribution of their\nratio. We find that the ratio is best fitted by heavy-tailed -- lognormal and\nfat-tailed (power-law) -- distributions, depending on whether preceding or\nconcurrent month of realized variance is used. We do not find substantial\ndifference in accuracy between VIX and VXO. Additionally, we study the variance\nof theoretical realized variance for Heston and multiplicative models of\nstochastic volatility and compare those with realized variance obtained from\nhistoric market data.\n",
"title": "Distributions of Historic Market Data -- Implied and Realized Volatility"
}
| null | null | null | null | true | null |
11790
| null |
Default
| null | null |
null |
{
"abstract": " We propose a novel method to directly learn a stochastic transition operator\nwhose repeated application provides generated samples. Traditional undirected\ngraphical models approach this problem indirectly by learning a Markov chain\nmodel whose stationary distribution obeys detailed balance with respect to a\nparameterized energy function. The energy function is then modified so the\nmodel and data distributions match, with no guarantee on the number of steps\nrequired for the Markov chain to converge. Moreover, the detailed balance\ncondition is highly restrictive: energy based models corresponding to neural\nnetworks must have symmetric weights, unlike biological neural circuits. In\ncontrast, we develop a method for directly learning arbitrarily parameterized\ntransition operators capable of expressing non-equilibrium stationary\ndistributions that violate detailed balance, thereby enabling us to learn more\nbiologically plausible asymmetric neural networks and more general non-energy\nbased dynamical systems. The proposed training objective, which we derive via\nprincipled variational methods, encourages the transition operator to \"walk\nback\" in multi-step trajectories that start at data-points, as quickly as\npossible back to the original data points. We present a series of experimental\nresults illustrating the soundness of the proposed approach, Variational\nWalkback (VW), on the MNIST, CIFAR-10, SVHN and CelebA datasets, demonstrating\nsuperior samples compared to earlier attempts to learn a transition operator.\nWe also show that although each rapid training trajectory is limited to a\nfinite but variable number of steps, our transition operator continues to\ngenerate good samples well past the length of such trajectories, thereby\ndemonstrating the match of its non-equilibrium stationary distribution to the\ndata distribution. Source Code: this http URL\n",
"title": "Variational Walkback: Learning a Transition Operator as a Stochastic Recurrent Net"
}
| null | null |
[
"Computer Science",
"Statistics"
] | null | true | null |
11791
| null |
Validated
| null | null |
null |
{
"abstract": " We address problems underlying the algorithmic question of automating the\nco-design of robot hardware in tandem with its apposite software. Specifically,\nwe consider the impact that degradations of a robot's sensor and actuation\nsuites may have on the ability of that robot to complete its tasks. We\nintroduce a new formal structure that generalizes and consolidates a variety of\nwell-known structures including many forms of plans, planning problems, and\nfilters, into a single data structure called a procrustean graph, and give\nthese graph structures semantics in terms of ideas based in formal language\ntheory. We describe a collection of operations on procrustean graphs (both\nsemantics-preserving and semantics-mutating), and show how a family of\nquestions about the destructiveness of a change to the robot hardware can be\nanswered by applying these operations. We also highlight the connections\nbetween this new approach and existing threads of research, including\ncombinatorial filtering, Erdmann's strategy complexes, and hybrid automata.\n",
"title": "Toward a language-theoretic foundation for planning and filtering"
}
| null | null | null | null | true | null |
11792
| null |
Default
| null | null |
null |
{
"abstract": " We calculate model theoretic ranks of Painlevé equations in this article,\nshowing in particular, that any equation in any of the Painlevé families has\nMorley rank one, extending results of Nagloo and Pillay (2011). We show that\nthe type of the generic solution of any equation in the second Painlevé\nfamily is geometrically trivial, extending a result of Nagloo (2015).\nWe also establish the orthogonality of various pairs of equations in the\nPainlevé families, showing at least generically, that all instances of\nnonorthogonality between equations in the same Painlevé family come from\nclassically studied B{ä}cklund transformations. For instance, we show that if\nat least one of $\\alpha, \\beta$ is transcendental, then $P_{II} (\\alpha)$ is\nnonorthogonal to $P_{II} ( \\beta )$ if and only if $\\alpha+ \\beta \\in \\mathbb\nZ$ or $\\alpha - \\beta \\in \\mathbb Z$. Our results have concrete interpretations\nin terms of characterizing the algebraic relations between solutions of\nPainlevé equations. We give similar results for orthogonality relations\nbetween equations in different Painlevé families, and formulate some general\nquestions which extend conjectures of Nagloo and Pillay (2011) on transcendence\nand algebraic independence of solutions to Painlevé equations. We also apply\nour analysis of ranks to establish some orthogonality results for pairs of\nPainlevé equations from different families. For instance, we answer several\nopen questions of Nagloo (2016), and in the process answer a question of Boalch\n(2012).\n",
"title": "Algebraic relations between solutions of Painlevé equations"
}
| null | null | null | null | true | null |
11793
| null |
Default
| null | null |
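Background note on the row above (standard facts, not claims from the paper): the second Painlevé family $P_{II}(\alpha)$ consists of the equations $y'' = 2y^{3} + t\,y + \alpha$ with complex parameter $\alpha$, and the classical Bäcklund transformations act by $\alpha \mapsto \alpha \pm 1$ and $\alpha \mapsto -\alpha$, which is consistent with the stated nonorthogonality condition $\alpha + \beta \in \mathbb{Z}$ or $\alpha - \beta \in \mathbb{Z}$.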
null |
{
"abstract": " We review some of the basic concepts and the possible pore structures\nassociated with electroporation (EP) for times after electrical pulsing. We\npurposefully give only a short description of pore creation and subsequent\nevolution of pore populations, as these are adequately discussed in both\nreviews and original research reports. In contrast, post-pulse pore concepts\nhave changed dramatically. For perspective we note that pores are not directly\nobserved. Instead understanding of pores is based on inference from experiments\nand, increasingly, molecular dynamics (MD) simulations. In the past decade\nconcepts for post-pulse pores have changed significantly: The idea of pure\nlipidic transient pores (TPs) that exist for milliseconds or longer post-pulse\nhas become inconsistent with MD results, which support TP lifetimes of only\n$\\sim$100 ns. A typical large TP number during cell EP pulsing is of order\n$10^6$. In twenty MD-based TP lifetimes (2 us total), the TP number plummets to\n$\\sim$0.001. In short, TPs vanish 2 us after a pulse ends, and cannot account\nfor post-pulse behavior such as large and relatively non-specific ionic and\nmolecular transport. Instead, an early conjecture of complex pores (CPs) with\nboth lipidic and other molecule should be taken seriously. Indeed, in the past\ndecade several experiments provide partial support for complex pores (CPs).\nPresently, CPs are \"dark\", in the sense that while some CP functions are known,\nlittle is known about their structure(s). There may be a wide range of\nlifetimes and permeabilities, not yet revealed by experiments. Like cosmology's\ndark matter, these unseen pores present us with an outstanding problem.\n",
"title": "Pore lifetimes in cell electroporation: Complex dark pores?"
}
| null | null | null | null | true | null |
11794
| null |
Default
| null | null |
null |
{
"abstract": " Causal discovery broadens the inference possibilities, as correlation does\nnot inform about the relationship direction. The common approaches were\nproposed for cases in which prior knowledge is desired, when the impact of a\ntreatment/intervention variable is discovered or to analyze time-related\ndependencies. In some practical applications, more universal techniques are\nneeded and have already been presented. Therefore, the aim of the study was to\nassess the accuracies in determining causal paths in a dataset without\nconsidering the ground truth and the contextual information. This benchmark was\nperformed on the database with cause-effect pairs, using a framework consisting\nof generalized correlations (GC), kernel regression gradients (GR) and absolute\nresiduals criteria (AR), along with causal additive modeling (CAM). The best\noverall accuracy, 80%, was achieved for the (majority voting) combination of\nGC, AR, and CAM, however, the most similar sensitivity and specificity values\nwere obtained for AR. Bootstrap simulation established the probability of\ncorrect causal path determination (which pairs should remain indeterminate).\nThe mean accuracy was then improved to 83% for the selected subset of pairs.\nThe described approach can be used for preliminary dependence assessment, as an\ninitial step for commonly used causality assessment frameworks or for\ncomparison with prior assumptions.\n",
"title": "Data-driven causal path discovery without prior knowledge - a benchmark study"
}
| null | null | null | null | true | null |
11795
| null |
Default
| null | null |
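A minimal sketch of the majority-voting step mentioned in the row above, assuming each criterion (e.g. GC, AR, CAM) has already produced a direction label for a cause-effect pair; the criteria themselves are not implemented here, and tied votes are reported as indeterminate, mirroring the idea of leaving some pairs undecided:

```python
from collections import Counter

def majority_direction(votes):
    """Combine per-method causal-direction votes ('x->y' or 'y->x') by majority."""
    counts = Counter(votes)
    (top, n_top), *rest = counts.most_common()
    if rest and rest[0][1] == n_top:
        return "indeterminate"   # tie: leave the pair undecided
    return top

# Hypothetical outputs of three criteria for one cause-effect pair:
print(majority_direction(["x->y", "x->y", "y->x"]))  # x->y
```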
null |
{
"abstract": " In this paper we combine concepts from Riemannian Optimization and the theory\nof Sobolev gradients to derive a new conjugate gradient method for direct\nminimization of the Gross-Pitaevskii energy functional with rotation. The\nconservation of the number of particles constrains the minimizers to lie on a\nmanifold corresponding to the unit $L^2$ norm. The idea developed here is to\ntransform the original constrained optimization problem to an unconstrained\nproblem on this (spherical) Riemannian manifold, so that fast minimization\nalgorithms can be applied as alternatives to more standard constrained\nformulations. First, we obtain Sobolev gradients using an equivalent definition\nof an $H^1$ inner product which takes into account rotation. Then, the\nRiemannian gradient (RG) steepest descent method is derived based on projected\ngradients and retraction of an intermediate solution back to the constraint\nmanifold. Finally, we use the concept of the Riemannian vector transport to\npropose a Riemannian conjugate gradient (RCG) method for this problem. It is\nderived at the continuous level based on the \"optimize-then-discretize\"\nparadigm instead of the usual \"discretize-then-optimize\" approach, as this\nensures robustness of the method when adaptive mesh refinement is performed in\ncomputations. We evaluate various design choices inherent in the formulation of\nthe method and conclude with recommendations concerning selection of the best\noptions. Numerical tests demonstrate that the proposed RCG method outperforms\nthe simple gradient descent (RG) method in terms of rate of convergence. While\non simple problems a Newton-type method implemented in the {\\tt Ipopt} library\nexhibits a faster convergence than the (RCG) approach, the two methods perform\nsimilarly on more complex problems requiring the use of mesh adaptation. At the\nsame time the (RCG) approach has far fewer tunable parameters.\n",
"title": "Computation of Ground States of the Gross-Pitaevskii Functional via Riemannian Optimization"
}
| null | null | null | null | true | null |
11796
| null |
Default
| null | null |
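A minimal sketch of the projected-gradient-plus-retraction idea from the row above, assuming a generic energy with a Euclidean gradient; the rotational $H^1$ Sobolev metric, the vector transport used by RCG, and the Gross-Pitaevskii energy itself are not reproduced here.

```python
import numpy as np

def riemannian_gradient_step(psi, grad_E, step):
    """One projected-gradient step on the unit-norm sphere (generic sketch).

    psi    : current iterate with ||psi||_2 = 1 (a discretized wave function)
    grad_E : Euclidean gradient of the energy at psi
    step   : step size
    """
    # Project the gradient onto the tangent space at psi (remove the radial part).
    tangent = grad_E - np.dot(psi, grad_E) * psi
    # Gradient step followed by retraction back onto the sphere via normalization.
    new_psi = psi - step * tangent
    return new_psi / np.linalg.norm(new_psi)

# Toy usage: minimize the quadratic "energy" E(psi) = psi^T A psi on the sphere,
# whose constrained minimizer is the eigenvector of the smallest eigenvalue of A.
n = 8
A = np.diag(np.arange(1.0, n + 1))
psi = np.ones(n) / np.sqrt(n)
for _ in range(300):
    psi = riemannian_gradient_step(psi, 2 * A @ psi, step=0.05)
print(np.round(np.abs(psi), 3))   # ~[1, 0, ..., 0]
```

The retraction here is plain renormalization, the natural choice on the unit sphere; a conjugate-gradient variant would additionally transport the previous search direction to the new tangent space before combining it with the new gradient.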
null |
{
"abstract": " In this paper we introduce a combinatorial formula for the\nEkeland-Hofer-Zehnder capacity of a convex polytope in $\\mathbb{R}^{2n}$. One\napplication of this formula is a certain subadditivity property of this\ncapacity.\n",
"title": "On the symplectic size of convex polytopes"
}
| null | null | null | null | true | null |
11797
| null |
Default
| null | null |
null |
{
"abstract": " This paper will describe a novel approach to the cocktail party problem that\nrelies on a fully convolutional neural network (FCN) architecture. The FCN\ntakes noisy audio data as input and performs nonlinear, filtering operations to\nproduce clean audio data of the target speech at the output. Our method learns\na model for one specific speaker, and is then able to extract that speakers\nvoice from babble background noise. Results from experimentation indicate the\nability to generalize to new speakers and robustness to new noise environments\nof varying signal-to-noise ratios. A potential application of this method would\nbe for use in hearing aids. A pre-trained model could be quickly fine tuned for\nan individuals family members and close friends, and deployed onto a hearing\naid to assist listeners in noisy environments.\n",
"title": "A Fully Convolutional Neural Network Approach to End-to-End Speech Enhancement"
}
| null | null | null | null | true | null |
11798
| null |
Default
| null | null |
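A rough, hypothetical sketch of the kind of 1-D fully convolutional mapping described in the row above (the layer count, channel width, kernel size, and the use of PyTorch are assumptions for illustration, not the paper's architecture):

```python
import torch
import torch.nn as nn

class TinyFCNDenoiser(nn.Module):
    """A small 1-D fully convolutional network: noisy waveform in, enhanced waveform out."""
    def __init__(self, channels=32, kernel_size=9):
        super().__init__()
        pad = kernel_size // 2  # 'same' padding keeps the signal length unchanged
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size, padding=pad),
        )

    def forward(self, x):          # x: (batch, 1, samples)
        return self.net(x)

model = TinyFCNDenoiser()
noisy = torch.randn(4, 1, 16000)   # a batch of 1-second clips at 16 kHz (toy data)
clean_estimate = model(noisy)
print(clean_estimate.shape)        # torch.Size([4, 1, 16000])
```

Training such a model would regress the network output onto the clean target waveform for a single speaker, e.g. with an L1 or L2 loss.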
null |
{
"abstract": " A normal conductor placed in good contact with a superconductor can inherit\nits remarkable electronic properties. This proximity effect microscopically\noriginates from the formation in the conductor of entangled electron-hole\nstates, called Andreev states. Spectroscopic studies of Andreev states have\nbeen performed in just a handful of systems. The unique geometry, electronic\nstructure and high mobility of graphene make it a novel platform for studying\nAndreev physics in two dimensions. Here we use a full van der Waals\nheterostructure to perform tunnelling spectroscopy measurements of the\nproximity effect in superconductor-graphene-superconductor junctions. The\nmeasured energy spectra, which depend on the phase difference between the\nsuperconductors, reveal the presence of a continuum of Andreev bound states.\nMoreover, our device heterostructure geometry and materials enable us to\nmeasure the Andreev spectrum as a function of the graphene Fermi energy,\nshowing a transition between different mesoscopic regimes. Furthermore, by\nexperimentally introducing a novel concept, the supercurrent spectral density,\nwe determine the supercurrent-phase relation in a tunnelling experiment, thus\nestablishing the connection between Andreev physics at finite energy and the\nJosephson effect. This work opens up new avenues for probing exotic topological\nphases of matter in hybrid superconducting Dirac materials.\n",
"title": "Tunnelling Spectroscopy of Andreev States in Graphene"
}
| null | null | null | null | true | null |
11799
| null |
Default
| null | null |
null |
{
"abstract": " We study the parameter estimation for parabolic, linear, second order,\nstochastic partial differential equations (SPDEs) observing a mild solution on\na discrete grid in time and space. A high-frequency regime is considered where\nthe mesh of the grid in the time variable goes to zero. Focusing on volatility\nestimation, we provide an explicit and easy to implement method of moments\nestimator based on squared increments. The estimator is consistent and admits a\ncentral limit theorem. This is established moreover for the estimation of the\nintegrated volatility in a semi-parametric framework and for the joint\nestimation of the volatility and an unknown parameter in the differential\noperator. Starting from a representation of the solution as an infinite factor\nmodel and exploiting mixing-type properties of time series, the theory\nconsiderably differs from the statistics for semi-martingales literature. The\nperformance of the method is illustrated in a simulation study.\n",
"title": "Volatility estimation for stochastic PDEs using high-frequency observations"
}
| null | null |
[
"Mathematics",
"Statistics"
] | null | true | null |
11800
| null |
Validated
| null | null |
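A schematic of the squared-increment idea in the row above; the exact normalization that turns mean squared temporal increments into a consistent volatility estimate is model-specific and comes from the paper, so it is left as a placeholder argument here (the sqrt(dt) scaling is the typical behaviour for second-order parabolic SPDEs, stated as an assumption):

```python
import numpy as np

def squared_increment_volatility(X, dt, normalization):
    """Schematic method-of-moments volatility estimate from squared time increments.

    X             : array of shape (n_times, n_space), observations on a space-time grid
    dt            : time step of the grid
    normalization : model-specific constant (a placeholder here); the mean squared
                    increment is assumed to scale like sigma^2 * sqrt(dt) times it.
    """
    increments = np.diff(X, axis=0)          # temporal increments at every spatial point
    mean_sq = np.mean(increments ** 2)       # empirical second moment of the increments
    return mean_sq / (normalization * np.sqrt(dt))

# Toy usage on synthetic data (a random walk, not a simulated SPDE):
rng = np.random.default_rng(1)
X = np.cumsum(rng.normal(scale=0.1, size=(500, 20)), axis=0)
print(squared_increment_volatility(X, dt=1e-3, normalization=1.0))
```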