Column schema (13 fields per record):
text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1-5)
metadata: null
status: string (2 classes)
event_timestamp: null
metrics: null
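The field listing above describes the flat, 13-column layout that every row below follows (it resembles an Argilla/Rubrix-style text-classification record, but the dump itself does not name the originating tool, so treat that as an assumption). As a minimal sketch under that assumption, the following Python regroups a flat sequence of column values into per-record objects; the Record class, the FIELDS constant, and the group_rows helper are hypothetical names introduced here purely for illustration.

from dataclasses import dataclass, field
from typing import Any, List, Optional

# Column order exactly as in the schema above (13 fields per record).
FIELDS = [
    "text", "inputs", "prediction", "prediction_agent", "annotation",
    "annotation_agent", "multi_label", "explanation", "id", "metadata",
    "status", "event_timestamp", "metrics",
]

@dataclass
class Record:
    # Only a handful of fields are non-null in this particular dump.
    inputs: dict                        # {"title": ..., "abstract": ...}
    id: str                             # e.g. "11501"
    status: str                         # "Default" or "Validated"
    multi_label: bool = True
    annotation: Optional[list] = None   # e.g. ["Physics"] on validated rows
    extra: dict = field(default_factory=dict)  # the remaining (null) columns

def group_rows(flat_values: List[Any]) -> List[Record]:
    # Regroup a flat column dump into Record objects, 13 values at a time.
    records = []
    step = len(FIELDS)
    for i in range(0, len(flat_values), step):
        chunk = flat_values[i:i + step]
        if len(chunk) < step:
            break  # ignore a truncated trailing record, as at the end of this dump
        row = dict(zip(FIELDS, chunk))
        records.append(Record(
            inputs=row["inputs"],
            id=str(row["id"]),
            status=row["status"],
            multi_label=bool(row["multi_label"]),
            annotation=row["annotation"],
            extra={k: v for k, v in row.items()
                   if k not in {"inputs", "id", "status", "multi_label", "annotation"}},
        ))
    return records

For instance, given such a flat value list, group_rows(values)[0].inputs["title"] would return the first paper title, and keeping only records with status == "Validated" isolates the rows that carry an annotation label.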
text: null | inputs:
{ "abstract": " We present visible spectra of Ag-like ($4d^{10}4f$) and Cd-like\n($4d^{10}4f^2$) ions of Ho (atomic number $Z=67$), Er (68), and Tm (69)\nobserved with a compact electron beam ion trap. For Ag-like ions, prominent\nemission corresponding to the M1 transitions between the ground state fine\nstructure splitting $4f_{5/2}$--$4f_{7/2}$ is identified. For Cd-like ions,\nseveral M1 transitions in the ground state configuration are identified. The\ntransition wavelength and the transition probability are calculated with the\nrelativistic many-body perturbation theory and the relativistic CI + all-order\napproach. Comparisons between the experiments and the calculations show good\nagreement.\n", "title": "Visible transitions in Ag-like and Cd-like lanthanide ions" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11501 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We introduce a gradient flow formulation of linear Boltzmann equations. Under\na diffusive scaling we derive a diffusion equation by using the machinery of\ngradient flows.\n", "title": "A gradient flow approach to linear Boltzmann equations" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11502 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We present constraints on masses of active and sterile neutrinos. We use the\none-dimensional Ly$\\alpha$-forest power spectrum from the Baryon Oscillation\nSpectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS-III) and from\nthe VLT/XSHOOTER legacy survey (XQ-100). In this paper, we present our own\nmeasurement of the power spectrum with the publicly released XQ-100 quasar\nspectra.\nFitting Ly$\\alpha$ data alone leads to cosmological parameters in excellent\nagreement with the values derived independently from Planck 2015 Cosmic\nMicrowave Background (CMB) data. Combining BOSS and XQ-100 Ly$\\alpha$ power\nspectra, we constrain the sum of neutrino masses to $\\sum m_\\nu < 0.8$ eV (95\\%\nC.L). With the addition of CMB data, this bound is tightened to $\\sum m_\\nu <\n0.14$ eV (95\\% C.L.).\nWith their sensitivity to small scales, Ly$\\alpha$ data are ideal to\nconstrain $\\Lambda$WDM models. Using XQ-100 alone, we issue lower bounds on\npure dark matter particles: $m_X \\gtrsim 2.08 \\: \\rm{keV}$ (95\\% C.L.) for\nearly decoupled thermal relics, and $m_s \\gtrsim 10.2 \\: \\rm{keV}$ (95\\% C.L.)\nfor non-resonantly produced right-handed neutrinos. Combining the 1D Ly$\\alpha$\nforest power spectrum measured by BOSS and XQ-100, we improve the two bounds to\n$m_X \\gtrsim 4.17 \\: \\rm{keV}$ and $m_s \\gtrsim 25.0 \\: \\rm{keV}$ (95\\% C.L.).\nThe $3~\\sigma$ bound shows a more significant improvement, increasing from $m_X\n\\gtrsim 2.74 \\: \\rm{keV}$ for BOSS alone to $m_X \\gtrsim 3.10 \\: \\rm{keV}$ for\nthe combined BOSS+XQ-100 data set.\nFinally, we include in our analysis the first two redshift bins ($z=4.2$ and\n$z=4.6$) of the power spectrum measured with the high-resolution HIRES/MIKE\nspectrographs. The addition of HIRES/MIKE power spectrum allows us to further\nimprove the two limits to $m_X \\gtrsim 4.65 \\: \\rm{keV}$ and $m_s \\gtrsim 28.8\n\\: \\rm{keV}$ (95\\% C.L.).\n", "title": "Constraints on neutrino masses from Lyman-alpha forest power spectrum with BOSS and XQ-100" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11503 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " In this paper, we present BubbleView, an alternative methodology for eye\ntracking using discrete mouse clicks to measure which information people\nconsciously choose to examine. BubbleView is a mouse-contingent, moving-window\ninterface in which participants are presented with a series of blurred images\nand click to reveal \"bubbles\" - small, circular areas of the image at original\nresolution, similar to having a confined area of focus like the eye fovea.\nAcross 10 experiments with 28 different parameter combinations, we evaluated\nBubbleView on a variety of image types: information visualizations, natural\nimages, static webpages, and graphic designs, and compared the clicks to eye\nfixations collected with eye-trackers in controlled lab settings. We found that\nBubbleView clicks can both (i) successfully approximate eye fixations on\ndifferent images, and (ii) be used to rank image and design elements by\nimportance. BubbleView is designed to collect clicks on static images, and\nworks best for defined tasks such as describing the content of an information\nvisualization or measuring image importance. BubbleView data is cleaner and\nmore consistent than related methodologies that use continuous mouse movements.\nOur analyses validate the use of mouse-contingent, moving-window methodologies\nas approximating eye fixations for different image and task types.\n", "title": "BubbleView: an interface for crowdsourcing image importance maps and tracking visual attention" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11504 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We present a study of the influence of disorder on the Mott metal-insulator\ntransition for the organic charge-transfer salt\n$\\kappa$-(BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl. To this end, disorder was introduced\ninto the system in a controlled way by exposing the single crystals to x-ray\nirradiation. The crystals were then fine-tuned across the Mott transition by\nthe application of continuously controllable He-gas pressure at low\ntemperatures. Measurements of the thermal expansion and resistance show that\nthe first-order character of the Mott transition prevails for low irradiation\ndoses achieved by irradiation times up to 100 h. For these crystals with a\nmoderate degree of disorder, we find a first-order transition line which ends\nin a second-order critical endpoint, akin to the pristine crystals. Compared to\nthe latter, however, we observe a significant reduction of both, the critical\npressure $p_c$ and the critical temperature $T_c$. This result is consistent\nwith the theoretically-predicted formation of a soft Coulomb gap in the\npresence of strong correlations and small disorder. Furthermore, we\ndemonstrate, similar to the observation for the pristine sample, that the Mott\ntransition after 50 h of irradiation is accompanied by sizable lattice effects,\nthe critical behavior of which can be well described by mean-field theory. Our\nresults demonstrate that the character of the Mott transition remains\nessentially unchanged at a low disorder level. However, after an irradiation\ntime of 150 h, no clear signatures of a discontinuous metal-insulator\ntransition could be revealed anymore. These results suggest that, above a\ncertain disorder level, the metal-insulator transition becomes a smeared\nfirst-order transition with some residual hysteresis.\n", "title": "Effects of Disorder on the Pressure-Induced Mott Transition in $κ$-BEDT-TTF)$_2$Cu[N(CN)$_2$]Cl" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11505 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We present the methodology for and detail the implementation of the Dark\nEnergy Survey (DES) 3x2pt DES Year 1 (Y1) analysis, which combines\nconfiguration-space two-point statistics from three different cosmological\nprobes: cosmic shear, galaxy-galaxy lensing, and galaxy clustering, using data\nfrom the first year of DES observations. We have developed two independent\nmodeling pipelines and describe the code validation process. We derive\nexpressions for analytical real-space multi-probe covariances, and describe\ntheir validation with numerical simulations. We stress-test the inference\npipelines in simulated likelihood analyses that vary 6-7 cosmology parameters\nplus 20 nuisance parameters and precisely resemble the analysis to be presented\nin the DES 3x2pt analysis paper, using a variety of simulated input data\nvectors with varying assumptions.\nWe find that any disagreement between pipelines leads to changes in assigned\nlikelihood $\\Delta \\chi^2 \\le 0.045$ with respect to the statistical error of\nthe DES Y1 data vector. We also find that angular binning and survey mask do\nnot impact our analytic covariance at a significant level. We determine lower\nbounds on scales used for analysis of galaxy clustering (8 Mpc$~h^{-1}$) and\ngalaxy-galaxy lensing (12 Mpc$~h^{-1}$) such that the impact of modeling\nuncertainties in the non-linear regime is well below statistical errors, and\nshow that our analysis choices are robust against a variety of systematics.\nThese tests demonstrate that we have a robust analysis pipeline that yields\nunbiased cosmological parameter inferences for the flagship 3x2pt DES Y1\nanalysis. We emphasize that the level of independent code development and\nsubsequent code comparison as demonstrated in this paper is necessary to\nproduce credible constraints from increasingly complex multi-probe analyses of\ncurrent data.\n", "title": "Dark Energy Survey Year 1 Results: Multi-Probe Methodology and Simulated Likelihood Analyses" }
prediction: null | prediction_agent: null | annotation: [ "Physics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11506 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Synthesizing user-intended programs from a small number of input-output\nexamples is a challenging problem with several important applications like\nspreadsheet manipulation, data wrangling and code refactoring. Existing\nsynthesis systems either completely rely on deductive logic techniques that are\nextensively hand-engineered or on purely statistical models that need massive\namounts of data, and in general fail to provide real-time synthesis on\nchallenging benchmarks. In this work, we propose Neural Guided Deductive Search\n(NGDS), a hybrid synthesis technique that combines the best of both symbolic\nlogic techniques and statistical models. Thus, it produces programs that\nsatisfy the provided specifications by construction and generalize well on\nunseen examples, similar to data-driven systems. Our technique effectively\nutilizes the deductive search framework to reduce the learning problem of the\nneural component to a simple supervised learning setup. Further, this allows us\nto both train on sparingly available real-world data and still leverage\npowerful recurrent neural network encoders. We demonstrate the effectiveness of\nour method by evaluating on real-world customer scenarios by synthesizing\naccurate programs with up to 12x speed-up compared to state-of-the-art systems.\n", "title": "Neural-Guided Deductive Search for Real-Time Program Synthesis from Examples" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11507 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We performed simulations for solid molecular hydrogen at high pressures\n(250GPa$\\leq$P$\\leq$500GPa) along two isotherms at T=200 K (phases III and VI)\nand at T=414 K (phase IV). At T=200K we considered likely candidates for phase\nIII, the C2c and Cmca12 structures, while at T=414K in phase IV we studied the\nPc48 structure. We employed both Coupled Electron-Ion Monte Carlo (CEIMC) and\nPath Integral Molecular Dynamics (PIMD) based on Density Functional Theory\n(DFT) using the vdW-DF approximation. The comparison between the two methods\nallows us to address the question of the accuracy of the xc approximation of\nDFT for thermal and quantum protons without recurring to perturbation theories.\nIn general, we find that atomic and molecular fluctuations in PIMD are larger\nthan in CEIMC which suggests that the potential energy surface from vdW-DF is\nless structured than the one from Quantum Monte Carlo. We find qualitatively\ndifferent behaviors for systems prepared in the C2c structure for increasing\npressure. Within PIMD the C2c structure is dynamically partially stable for\nP$\\leq$250GPa only: it retains the symmetry of the molecular centers but not\nthe molecular orientation; at intermediate pressures it develops layered\nstructures like Pbcn or Ibam and transforms to the metallic Cmca-4 structure at\nP$\\geq$450GPa. Instead, within CEIMC, the C2c structure is found to be\ndynamically stable at least up to 450GPa; at increasing pressure the molecular\nbond length increases and the nuclear correlation decreases. For the other two\nstructures the two methods are in qualitative agreement although quantitative\ndifferences remain. We discuss various structural properties and the electrical\nconductivity. We find these structures become conducting around 350GPa but the\nmetallic Drude-like behavior is reached only at around 500GPa, consistent with\nrecent experimental claims.\n", "title": "Coupled Electron-Ion Monte Carlo simulation of hydrogen molecular crystals" }
prediction: null | prediction_agent: null | annotation: [ "Physics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11508 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " A method is described for the detection and estimation of transient chirp\nsignals that are characterized by smoothly evolving, but otherwise unmodeled,\namplitude envelopes and instantaneous frequencies. Such signals are\nparticularly relevant for gravitational wave searches, where they may arise in\na wide range of astrophysical scenarios. The method uses splines with\ncontinuously adjustable breakpoints to represent the amplitude envelope and\ninstantaneous frequency of a signal, and estimates them from noisy data using\npenalized least squares and model selection. Simulations based on waveforms\nspanning a wide morphological range show that the method performs well in a\nsignal-to-noise ratio regime where the time-frequency signature of a signal is\nhighly degraded, thereby extending the coverage of current unmodeled\ngravitational wave searches to a wider class of signals.\n", "title": "Spline Based Search Method For Unmodeled Transient Gravitational Wave Chirps" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11509 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We derive the Hilbert space formalism of quantum mechanics from epistemic\nprinciples. A key assumption is that a physical theory that relies on entities\nor distinctions that are unknowable in principle gives rise to wrong\npredictions. An epistemic formalism is developed, where concepts like\nindividual and collective knowledge are used, and knowledge may be actual or\npotential. The physical state $S$ corresponds to the collective potential\nknowledge. The state $S$ is a subset of a state space $\\mathcal{S}=\\{Z\\}$, such\nthat $S$ always contains several elements $Z$, which correspond to unattainable\nstates of complete potential knowledge of the world. The evolution of $S$\ncannot be determined in terms of the individual evolution of the elements $Z$,\nunlike the evolution of an ensemble in classical phase space. The evolution of\n$S$ is described in terms of sequential time $n\\in \\mathbf{\\mathbb{N}}$, which\nis updated according to $n\\rightarrow n+1$ each time potential knowledge\nchanges. In certain experimental contexts $C$, there is initial knowledge at\ntime $n$ that a given series of properties $P,P',\\ldots$ will be observed\nwithin a given time frame, meaning that a series of values $p,p',\\ldots$ of\nthese properties will become known. At time $n$, it is just known that these\nvalues belong to predefined, finite sets $\\{p\\},\\{p'\\},\\ldots$. In such a\ncontext $C$, it is possible to define a complex Hilbert space $\\mathcal{H}_{C}$\non top of $\\mathcal{S}$, in which the elements are contextual state vectors\n$\\bar{S}_{C}$. Born's rule to calculate the probabilities to find the values\n$p,p',\\ldots$ is derived as the only generally applicable such rule. Also, we\ncan associate a self-adjoint operator $\\bar{P}$ with eigenvalues $\\{p\\}$ to\neach property $P$ observed within $C$. These operators obey\n$[\\bar{P},\\bar{P}']=0$ if and only if the precise values of $P$ and $P'$ are\nsimultaneoulsy knowable.\n", "title": "Quantum mechanics from an epistemic state space" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11510 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " In this paper, we construct global action-angle variables for a certain\ntwo-parameter family of hyperbolic van Diejen systems. Following Ruijsenaars'\nideas on the translation invariant models, the proposed action-angle variables\ncome from a thorough analysis of the commutation relation obeyed by the Lax\nmatrix, whereas the proof of their canonicity is based on the study of the\nscattering theory. As a consequence, we show that the van Diejen system of our\ninterest is self-dual with a factorized scattering map. Also, in an appendix by\nS. Ruijsenaars, a novel proof of the spectral asymptotics of certain\nexponential type matrix flows is presented. This result is of crucial\nimportance in our scattering-theoretical analysis.\n", "title": "Self-duality and scattering map for the hyperbolic van Diejen systems with two coupling parameters (with an appendix by S. Ruijsenaars)" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11511 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " The block maxima method in extreme value theory consists of fitting an\nextreme value distribution to a sample of block maxima extracted from a time\nseries. Traditionally, the maxima are taken over disjoint blocks of\nobservations. Alternatively, the blocks can be chosen to slide through the\nobservation period, yielding a larger number of overlapping blocks. Inference\nbased on sliding blocks is found to be more efficient than inference based on\ndisjoint blocks. The asymptotic variance of the maximum likelihood estimator of\nthe Fréchet shape parameter is reduced by more than 18%. Interestingly, the\namount of the efficiency gain is the same whatever the serial dependence of the\nunderlying time series: as for disjoint blocks, the asymptotic distribution\ndepends on the serial dependence only through the sequence of scaling\nconstants. The findings are illustrated by simulation experiments and are\napplied to the estimation of high return levels of the daily log-returns of the\nStandard & Poor's 500 stock market index.\n", "title": "Inference for heavy tailed stationary time series based on sliding blocks" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics", "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11512 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We demonstrate the active tuning of all-dielectric metasurfaces exhibiting\nhigh-quality factor (high-Q) resonances. The active control is provided by\nembedding the asymmetric silicon meta-atoms with liquid crystals, which allows\nthe relative index of refraction to be controlled through heating. It is found\nthat high quality factor resonances ($Q=270\\pm30$) can be tuned over more than\nthree resonance widths. Our results demonstrate the feasibility of using\nall-dielectric metasurfaces to construct tunable narrow-band filters.\n", "title": "Active tuning of high-Q dielectric metasurfaces" }
prediction: null | prediction_agent: null | annotation: [ "Physics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11513 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We study the problem of semi-supervised question answering----utilizing\nunlabeled text to boost the performance of question answering models. We\npropose a novel training framework, the Generative Domain-Adaptive Nets. In\nthis framework, we train a generative model to generate questions based on the\nunlabeled text, and combine model-generated questions with human-generated\nquestions for training question answering models. We develop novel domain\nadaptation algorithms, based on reinforcement learning, to alleviate the\ndiscrepancy between the model-generated data distribution and the\nhuman-generated data distribution. Experiments show that our proposed framework\nobtains substantial improvement from unlabeled text.\n", "title": "Semi-Supervised QA with Generative Domain-Adaptive Nets" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11514 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We propose an approach for showing rationality of an algebraic variety $X$.\nWe try to cover $X$ by rational curves of certain type and count how many\ncurves pass through a generic point. If the answer is $1$, then we can\nsometimes reduce the question of rationality of $X$ to the question of\nrationality of a closed subvariety of $X$. This approach is applied to the case\nof the so-called Ueno-Campana manifolds. Our experiments indicate that the\npreviously open cases $X_{4,6}$ and $X_{5,6}$ are both rational. However, this\nresult is not rigorously justified and depends on a heuristic argument and a\nMonte Carlo type computer simulation. In an unexpected twist, existence of\nlattices $D_6$, $E_8$ and $\\Lambda_{10}$ turns out to be crucial.\n", "title": "Rationality proofs by curve counting" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11515 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " With the availability of large databases and recent improvements in deep\nlearning methodology, the performance of AI systems is reaching or even\nexceeding the human level on an increasing number of complex tasks. Impressive\nexamples of this development can be found in domains such as image\nclassification, sentiment analysis, speech understanding or strategic game\nplaying. However, because of their nested non-linear structure, these highly\nsuccessful machine learning and artificial intelligence models are usually\napplied in a black box manner, i.e., no information is provided about what\nexactly makes them arrive at their predictions. Since this lack of transparency\ncan be a major drawback, e.g., in medical applications, the development of\nmethods for visualizing, explaining and interpreting deep learning models has\nrecently attracted increasing attention. This paper summarizes recent\ndevelopments in this field and makes a plea for more interpretability in\nartificial intelligence. Furthermore, it presents two approaches to explaining\npredictions of deep learning models, one method which computes the sensitivity\nof the prediction with respect to changes in the input and one approach which\nmeaningfully decomposes the decision in terms of the input variables. These\nmethods are evaluated on three classification tasks.\n", "title": "Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11516 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " WTe2 and its sister alloys have attracted tremendous attentions recent years\ndue to the large non-saturating magnetoresistance and topological non-trivial\nproperties. Herein, we briefly review the electrical property studies on this\nnew quantum material.\n", "title": "The study on quantum material WTe2" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11517 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Learning social media data embedding by deep models has attracted extensive\nresearch interest as well as boomed a lot of applications, such as link\nprediction, classification, and cross-modal search. However, for social images\nwhich contain both link information and multimodal contents (e.g., text\ndescription, and visual content), simply employing the embedding learnt from\nnetwork structure or data content results in sub-optimal social image\nrepresentation. In this paper, we propose a novel social image embedding\napproach called Deep Multimodal Attention Networks (DMAN), which employs a deep\nmodel to jointly embed multimodal contents and link information. Specifically,\nto effectively capture the correlations between multimodal contents, we propose\na multimodal attention network to encode the fine-granularity relation between\nimage regions and textual words. To leverage the network structure for\nembedding learning, a novel Siamese-Triplet neural network is proposed to model\nthe links among images. With the joint deep model, the learnt embedding can\ncapture both the multimodal contents and the nonlinear network information.\nExtensive experiments are conducted to investigate the effectiveness of our\napproach in the applications of multi-label classification and cross-modal\nsearch. Compared to state-of-the-art image embeddings, our proposed DMAN\nachieves significant improvement in the tasks of multi-label classification and\ncross-modal search.\n", "title": "Learning Social Image Embedding with Deep Multimodal Attention Networks" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science", "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11518 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Traditional supervised learning makes the closed-world assumption that the\nclasses appeared in the test data must have appeared in training. This also\napplies to text learning or text classification. As learning is used\nincreasingly in dynamic open environments where some new/test documents may not\nbelong to any of the training classes, identifying these novel documents during\nclassification presents an important problem. This problem is called open-world\nclassification or open classification. This paper proposes a novel deep\nlearning based approach. It outperforms existing state-of-the-art techniques\ndramatically.\n", "title": "DOC: Deep Open Classification of Text Documents" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11519 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Applied statisticians use sequential regression procedures to produce a\nranking of explanatory variables and, in settings of low correlations between\nvariables and strong true effect sizes, expect that variables at the very top\nof this ranking are truly relevant to the response. In a regime of certain\nsparsity levels, however, three examples of sequential procedures--forward\nstepwise, the lasso, and least angle regression--are shown to include the first\nspurious variable unexpectedly early. We derive a rigorous, sharp prediction of\nthe rank of the first spurious variable for these three procedures,\ndemonstrating that the first spurious variable occurs earlier and earlier as\nthe regression coefficients become denser. This counterintuitive phenomenon\npersists for statistically independent Gaussian random designs and an\narbitrarily large magnitude of the true effects. We gain a better understanding\nof the phenomenon by identifying the underlying cause and then leverage the\ninsights to introduce a simple visualization tool termed the double-ranking\ndiagram to improve on sequential methods. As a byproduct of these findings, we\nobtain the first provable result certifying the exact equivalence between the\nlasso and least angle regression in the early stages of solution paths beyond\northogonal designs. This equivalence can seamlessly carry over many important\nmodel selection results concerning the lasso to least angle regression.\n", "title": "When Is the First Spurious Variable Selected by Sequential Regression Procedures?" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11520 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Interpretability has become incredibly important as machine learning is\nincreasingly used to inform consequential decisions. We propose to construct\nglobal explanations of complex, blackbox models in the form of a decision tree\napproximating the original model---as long as the decision tree is a good\napproximation, then it mirrors the computation performed by the blackbox model.\nWe devise a novel algorithm for extracting decision tree explanations that\nactively samples new training points to avoid overfitting. We evaluate our\nalgorithm on a random forest to predict diabetes risk and a learned controller\nfor cart-pole. Compared to several baselines, our decision trees are both\nsubstantially more accurate and equally or more interpretable based on a user\nstudy. Finally, we describe several insights provided by our interpretations,\nincluding a causal issue validated by a physician.\n", "title": "Interpreting Blackbox Models via Model Extraction" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11521 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Let $E$ be a closed set in the Riemann sphere $\\widehat{\\mathbb{C}}$. We\nconsider a holomorphic motion $\\phi$ of $E$ over a complex manifold $M$, that\nis, a holomorphic family of injections on $E$ parametrized by $M$. It is known\nthat if $M$ is the unit disk $\\Delta$ in the complex plane, then any\nholomorphic motion of $E$ over $\\Delta$ can be extended to a holomorphic motion\nof the Riemann sphere over $\\Delta$. In this paper, we consider conditions\nunder which a holomorphic motion of $E$ over a non-simply connected Riemann\nsurface $X$ can be extended to a holomorphic motion of $\\widehat{\\mathbb{C}}$\nover $X$. Our main result shows that a topological condition, the triviality of\nthe monodromy, gives a necessary and sufficient condition for a holomorphic\nmotion of $E$ over $X$ to be extended to a holomorphic motion of\n$\\widehat{\\mathbb{C}}$ over $X$. We give topological and geometric conditions\nfor a holomorphic motion over a Riemann surface to be extended. We also apply\nour result to a lifting problem for holomorphic maps to Teichmüller spaces.\n", "title": "Extending holomorphic motions and monodromy" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11522 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " AI applications have emerged in current world. Among AI applications,\ncomputer-vision (CV) related applications have attracted high interest.\nHardware implementation of CV processors necessitates a high performance but\nlow-power image detector. The key to energy-efficiency work lies in\nanalog-digital converting, where output of imaging detectors is transferred to\ndigital domain and CV algorithms can be performed on data. In this paper,\nanalog-digital converter architectures are compared, and an example ADC design\nis proposed which achieves both good performance and low power consumption.\n", "title": "A/D Converter Architectures for Energy-Efficient Vision Processor" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11523 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " In monadic programming, datatypes are presented as free algebras, generated\nby data values, and by the algebraic operations and equations capturing some\ncomputational effects. These algebras are free in the sense that they satisfy\njust the equations imposed by their algebraic theory, and remain free of any\nadditional equations. The consequence is that they do not admit quotient types.\nThis is, of course, often inconvenient. Whenever a computation involves data\nwith multiple representatives, and they need to be identified according to some\nequations that are not satisfied by all data, the monadic programmer has to\nleave the universe of free algebras, and resort to explicit destructors. We\ncharacterize the situation when these destructors are preserved under all\noperations, and the resulting quotients of free algebras are also their\nsubalgebras. Such quotients are called *projective*. Although popular in\nuniversal algebra, projective algebras did not attract much attention in the\nmonadic setting, where they turn out to have a surprising avatar: for any given\nmonad, a suitable category of projective algebras is equivalent with the\ncategory of coalgebras for the comonad induced by any monad resolution. For a\nmonadic programmer, this equivalence provides a convenient way to implement\npolymorphic quotients as coalgebras. The dual correspondence of injective\ncoalgebras and all algebras leads to a different family of quotient types,\nwhich seems to have a different family of applications. Both equivalences also\nentail several general corollaries concerning monadicity and comonadicity.\n", "title": "Quotients in monadic programming: Projective algebras are equivalent to coalgebras" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11524 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We give faster algorithms for producing sparse approximations of the\ntransition matrices of $k$-step random walks on undirected, weighted graphs.\nThese transition matrices also form graphs, and arise as intermediate objects\nin a variety of graph algorithms. Our improvements are based on a better\nunderstanding of processes that sample such walks, as well as tighter bounds on\nkey weights underlying these sampling processes. On a graph with $n$ vertices\nand $m$ edges, our algorithm produces a graph with about $n\\log{n}$ edges that\napproximates the $k$-step random walk graph in about $m + n \\log^4{n}$ time. In\norder to obtain this runtime bound, we also revisit \"density independent\"\nalgorithms for sparsifying graphs whose runtime overhead is expressed only in\nterms of the number of vertices.\n", "title": "Density Independent Algorithms for Sparsifying $k$-Step Random Walks" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11525 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We introduce a generalized $k$-FL sequence and special kind of pairs of real\nnumbers that are related to it, and give an application on the integral\nsolutions of a certain equation using those pairs. Also, we associate skew\ncirculant and circulant matrices to each generalized $k$-FL sequence, and study\nthe determinantal variety of those matrices as an application.\n", "title": "On a generalized $k$-FL sequence and its applications" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11526 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " The crucial importance of metrics in machine learning algorithms has led to\nan increasing interest in optimizing distance and similarity functions, an area\nof research known as metric learning. When data consist of feature vectors, a\nlarge body of work has focused on learning a Mahalanobis distance. Less work\nhas been devoted to metric learning from structured objects (such as strings or\ntrees), most of it focusing on optimizing a notion of edit distance. We\nidentify two important limitations of current metric learning approaches.\nFirst, they allow to improve the performance of local algorithms such as\nk-nearest neighbors, but metric learning for global algorithms (such as linear\nclassifiers) has not been studied so far. Second, the question of the\ngeneralization ability of metric learning methods has been largely ignored. In\nthis thesis, we propose theoretical and algorithmic contributions that address\nthese limitations. Our first contribution is the derivation of a new kernel\nfunction built from learned edit probabilities. Our second contribution is a\nnovel framework for learning string and tree edit similarities inspired by the\nrecent theory of (e,g,t)-good similarity functions. Using uniform stability\narguments, we establish theoretical guarantees for the learned similarity that\ngive a bound on the generalization error of a linear classifier built from that\nsimilarity. In our third contribution, we extend these ideas to metric learning\nfrom feature vectors by proposing a bilinear similarity learning method that\nefficiently optimizes the (e,g,t)-goodness. Generalization guarantees are\nderived for our approach, highlighting that our method minimizes a tighter\nbound on the generalization error of the classifier. Our last contribution is a\nframework for establishing generalization bounds for a large class of existing\nmetric learning algorithms based on a notion of algorithmic robustness.\n", "title": "Supervised Metric Learning with Generalization Guarantees" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11527 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " The Kuramoto-Sivashinsky PDE on the line with odd and periodic boundary\nconditions and with parameter $\\nu=0.1212$ is considered. We give a\ncomputer-assisted proof the existence of symbolic dynamics and countable\ninfinity of periodic orbits with arbitrary large periods.\n", "title": "Symbolic dynamics for Kuramoto-Sivashinsky PDE on the line --- a computer-assisted proof" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11528 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We present the full results of our decade-long astrometric monitoring\nprograms targeting 31 ultracool binaries with component spectral types M7-T5.\nJoint analysis of resolved imaging from Keck Observatory and Hubble Space\nTelescope and unresolved astrometry from CFHT/WIRCam yields parallactic\ndistances for all systems, robust orbit determinations for 23 systems, and\nphotocenter orbits for 19 systems. As a result, we measure 38 precise\nindividual masses spanning 30-115 $M_{\\rm Jup}$. We determine a\nmodel-independent substellar boundary that is $\\approx$70 $M_{\\rm Jup}$ in mass\n($\\approx$L4 in spectral type), and we validate Baraffe et al. (2015)\nevolutionary model predictions for the lithium-depletion boundary (60 $M_{\\rm\nJup}$ at field ages). Assuming each binary is coeval, we test models of the\nsubstellar mass-luminosity relation and find that in the L/T transition, only\nthe Saumon & Marley (2008) \"hybrid\" models accounting for cloud clearing match\nour data. We derive a precise, mass-calibrated spectral type-effective\ntemperature relation covering 1100-2800 K. Our masses enable a novel direct\ndetermination of the age distribution of field brown dwarfs spanning L4-T5 and\n30-70 $M_{\\rm Jup}$. We determine a median age of 1.3 Gyr, and our population\nsynthesis modeling indicates our sample is consistent with a constant star\nformation history modulated by dynamical heating in the Galactic disk. We\ndiscover two triple-brown-dwarf systems, the first with directly measured\nmasses and eccentricities. We examine the eccentricity distribution, carefully\nconsidering biases and completeness, and find that low-eccentricity orbits are\nsignificantly more common among ultracool binaries than solar-type binaries,\npossibly indicating the early influence of long-lived dissipative gas disks.\nOverall, this work represents a major advance in the empirical view of very\nlow-mass stars and brown dwarfs.\n", "title": "Individual Dynamical Masses of Ultracool Dwarfs" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11529 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " The velocity anisotropy parameter, beta, is a measure of the kinematic state\nof orbits in the stellar halo which holds promise for constraining the merger\nhistory of the Milky Way (MW). We determine global trends for beta as a\nfunction of radius from three suites of simulations, including accretion only\nand cosmological hydrodynamic simulations. We find that both types of\nsimulations are consistent and predict strong radial anisotropy (<beta>~0.7)\nfor Galactocentric radii greater than 10 kpc. Previous observations of beta for\nthe MW's stellar halo claim a detection of an isotropic or tangential \"dip\" at\nr~20 kpc. Using the N-body+SPH simulations, we investigate the temporal\npersistence, population origin, and severity of \"dips\" in beta. We find dips in\nthe in situ stellar halo are long-lived, while dips in the accreted stellar\nhalo are short-lived and tied to the recent accretion of satellite material. We\nalso find that a major merger as early as z~1 can result in a present day low\n(isotropic to tangential) value of beta over a wide range of radii and angular\nexpanse. While all of these mechanisms are plausible drivers for the beta dip\nobserved in the MW, in the simulations, each mechanism has a unique metallicity\nsignature associated with it, implying that future spectroscopic surveys could\ndistinguish between them. Since an accurate knowledge of beta(r) is required\nfor measuring the mass of the MW halo, we note significant transient dips in\nbeta could cause an overestimate of the halo's mass when using spherical Jeans\nequation modeling.\n", "title": "Beta Dips in the Gaia Era: Simulation Predictions of the Galactic Velocity Anisotropy Parameter for Stellar Halos" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11530 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " A computational flow is a pair consisting of a sequence of computational\nproblems of a certain sort and a sequence of computational reductions among\nthem. In this paper we will develop a theory for these computational flows and\nwe will use it to make a sound and complete interpretation for bounded theories\nof arithmetic. This property helps us to decompose a first order arithmetical\nproof to a sequence of computational reductions by which we can extract the\ncomputational content of low complexity statements in some bounded theories of\narithmetic such as $I\\Delta_0$, $T^k_n$, $I\\Delta_0+EXP$ and $PRA$. In the last\nsection, by generalizing term-length flows to ordinal-length flows, we will\nextend our investigation from bounded theories to strong unbounded ones such as\n$I\\Sigma_n$ and $PA+TI(\\alpha)$ and we will capture their total $NP$ search\nproblems as a consequence.\n", "title": "Computational Flows in Arithmetic" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11531 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Let $N$ be a compact, connected, non-orientable surface of genus $\\rho$ with\n$n$ boundary components, with $\\rho \\ge 5$ and $n \\ge 0$, and let $\\mathcal{M}\n(N)$ be the mapping class group of $N$. We show that, if $\\mathcal{G}$ is a\nfinite index subgroup of $\\mathcal{M} (N)$ and $\\varphi: \\mathcal{G} \\to\n\\mathcal{M} (N)$ is an injective homomorphism, then there exists $f_0 \\in\n\\mathcal{M} (N)$ such that $\\varphi (g) = f_0 g f_0^{-1}$ for all $g \\in\n\\mathcal{G}$. We deduce that the abstract commensurator of $\\mathcal{M} (N)$\ncoincides with $\\mathcal{M} (N)$.\n", "title": "Injective homomorphisms of mapping class groups of non-orientable surfaces" }
prediction: null | prediction_agent: null | annotation: [ "Mathematics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11532 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Electric and thermal transport properties of a $\\nu=2/3$ fractional quantum\nHall junction are analyzed. We investigate the evolution of the electric and\nthermal two-terminal conductances, $G$ and $G^Q$, with system size $L$ and\ntemperature $T$. This is done both for the case of strong interaction between\nthe 1 and 1/ 3 modes (when the low-temperature physics of the interacting\nsegment of the device is controlled by the vicinity of the strong-disorder\nKane-Fisher-Polchinski fixed point) and for relatively weak interaction, for\nwhich the disorder is irrelevant at $T=0$ in the renormalization-group sense.\nThe transport properties in both cases are similar in several respects. In\nparticular, $G(L)$ is close to 4/3 (in units of $e^2/h$) and $G^Q$ to 2 (in\nunits of $\\pi T / 6 \\hbar$) for small $L$, independently of the interaction\nstrength. For large $L$ the system is in an incoherent regime, with $G$ given\nby 2/3 and $G^Q$ showing the Ohmic scaling, $G^Q\\propto 1/L$, again for any\ninteraction strength. The hallmark of the strong-disorder fixed point is the\nemergence of an intermediate range of $L$, in which the electric conductance\nshows strong mesoscopic fluctuations and the thermal conductance is $G^Q=1$.\nThe analysis is extended also to a device with floating 1/3 mode, as studied in\na recent experiment [A. Grivnin et al, Phys. Rev. Lett. 113, 266803 (2014)].\n", "title": "Transport in a disordered $ν=2/3$ fractional quantum Hall junction" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11533 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We report that a longitudinal epsilon-near-zero (LENZ) film leads to giant\nfield enhancement and strong radiation emission of sources in it and that these\nfeatures are superior to what found in previous studies related to isotropic\nENZ. LENZ films are uniaxially anisotropic films where relative permittivity\nalong the normal direction to the film is much smaller than unity, while the\npermittivity in the transverse plane of the film is not vanishing. It has been\nshown previously that realistic isotropic ENZ films do not provide large field\nenhancement due to material losses, however, we show the loss effects can be\novercome using LENZ films. We also prove that in comparison to the (isotropic)\nENZ case, the LENZ film field enhancement is not only remarkably larger but it\nalso occurs for a wider range of angles of incidence. Importantly, the field\nenhancement near the interface of the LENZ film is almost independent of the\nthickness unlike for the isotropic ENZ case where extremely small thickness is\nrequired. We show that for a LENZ structure consisting of a multilayer of\ndysprosium-doped cadmium oxide and silicon accounting for realistic losses,\nfield intensity enhancement of 30 is obtained which is almost 10 times larger\nthan that obtained with realistic ENZ materials\n", "title": "Giant Field Enhancement in Longitudinal Epsilon Near Zero Films" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11534 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " With the trend of increasing wind turbine rotor diameters, the mitigation of\nblade fatigue loadings is of special interest to extend the turbine lifetime.\nFatigue load reductions can be partly accomplished using Individual Pitch\nControl (IPC) facilitated by the so-called Multi-Blade Coordinate (MBC)\ntransformation. This operation transforms and decouples the blade load signals\nin a yaw- and tilt-axis. However, in practical scenarios, the resulting\ntransformed system still shows coupling between the axes, posing a need for\nmore advanced Multiple-Input Multiple-Output (MIMO) control architectures. This\npaper presents a novel analysis and design framework for decoupling of the\nnon-rotating axes by the inclusion of an azimuth offset in the reverse MBC\ntransformation, enabling the application of simple Single-Input Single-Output\n(SISO) controllers. A thorough analysis is given by including the azimuth\noffset in a frequency-domain representation. The result is evaluated on\nsimplified blade models, as well as linearizations obtained from the\nNREL~5\\nobreakdash-MW reference wind turbine. A sensitivity and decoupling\nassessment justify the application of decentralized SISO control loops for IPC.\nFurthermore, closed-loop high-fidelity simulations show beneficial effects on\npitch actuation and blade fatigue load reductions.\n", "title": "Analysis and optimal individual pitch control decoupling by inclusion of an azimuth offset in the multi-blade coordinate transformation" }
prediction: null | prediction_agent: null | annotation: [ "Computer Science" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11535 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " 802.11p based V2X communication uses stochastic medium access control, which\ncannot prevent broadcast packet collision, in particular during high channel\nload. Wireless congestion control has been designed to keep the channel load at\nan optimal point. However, vehicles' lack of precise and granular knowledge\nabout true channel activity, in time and space, makes it impossible to fully\navoid packet collisions. In this paper, we propose a machine learning approach\nusing deep neural network for learning the vehicles' transmit patterns, and as\nsuch predicting future channel activity in space and time. We evaluate the\nperformance of our proposal via simulation considering multiple safety-related\nV2X services involving heterogeneous transmit patterns. Our results show that\npredicting channel activity, and transmitting accordingly, reduces collisions\nand significantly improves communication performance.\n", "title": "Deep Learning-aided Application Scheduler for Vehicular Safety Communication" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11536 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We prove moment inequalities for a class of functionals of i.i.d. random\nfields. We then derive rates in the central limit theorem for weighted sums of\nsuch randoms fields via an approximation by $m$-dependent random fields.\n", "title": "Convergence rates in the central limit theorem for weighted sums of Bernoulli random fields" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11537 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " The multi-armed bandit (MAB) problem is a classic example of the\nexploration-exploitation dilemma. It is concerned with maximising the total\nrewards for a gambler by sequentially pulling an arm from a multi-armed slot\nmachine where each arm is associated with a reward distribution. In static\nMABs, the reward distributions do not change over time, while in dynamic MABs,\neach arm's reward distribution can change, and the optimal arm can switch over\ntime. Motivated by many real applications where rewards are binary, we focus on\ndynamic Bernoulli bandits. Standard methods like $\\epsilon$-Greedy and Upper\nConfidence Bound (UCB), which rely on the sample mean estimator, often fail to\ntrack changes in the underlying reward for dynamic problems. In this paper, we\novercome the shortcoming of slow response to change by deploying adaptive\nestimation in the standard methods and propose a new family of algorithms,\nwhich are adaptive versions of $\\epsilon$-Greedy, UCB, and Thompson sampling.\nThese new methods are simple and easy to implement. Moreover, they do not\nrequire any prior knowledge about the dynamic reward process, which is\nimportant for real applications. We examine the new algorithms numerically in\ndifferent scenarios and the results show solid improvements of our algorithms\nin dynamic environments.\n", "title": "On Adaptive Estimation for Dynamic Bernoulli Bandits" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11538 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " We study the least squares regression problem \\begin{align*} \\min_{\\Theta \\in\n\\mathcal{S}_{\\odot D,R}} \\|A\\Theta-b\\|_2, \\end{align*} where\n$\\mathcal{S}_{\\odot D,R}$ is the set of $\\Theta$ for which $\\Theta =\n\\sum_{r=1}^{R} \\theta_1^{(r)} \\circ \\cdots \\circ \\theta_D^{(r)}$ for vectors\n$\\theta_d^{(r)} \\in \\mathbb{R}^{p_d}$ for all $r \\in [R]$ and $d \\in [D]$, and\n$\\circ$ denotes the outer product of vectors. That is, $\\Theta$ is a\nlow-dimensional, low-rank tensor. This is motivated by the fact that the number\nof parameters in $\\Theta$ is only $R \\cdot \\sum_{d=1}^D p_d$, which is\nsignificantly smaller than the $\\prod_{d=1}^{D} p_d$ number of parameters in\nordinary least squares regression. We consider the above CP decomposition model\nof tensors $\\Theta$, as well as the Tucker decomposition. For both models we\nshow how to apply data dimensionality reduction techniques based on {\\it\nsparse} random projections $\\Phi \\in \\mathbb{R}^{m \\times n}$, with $m \\ll n$,\nto reduce the problem to a much smaller problem $\\min_{\\Theta} \\|\\Phi A \\Theta\n- \\Phi b\\|_2$, for which if $\\Theta'$ is a near-optimum to the smaller problem,\nthen it is also a near optimum to the original problem. We obtain significantly\nsmaller dimension and sparsity in $\\Phi$ than is possible for ordinary least\nsquares regression, and we also provide a number of numerical simulations\nsupporting our theory.\n", "title": "Near Optimal Sketching of Low-Rank Tensor Regression" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11539 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Permutation tests are among the simplest and most widely used statistical\ntools. Their p-values can be computed by a straightforward sampling of\npermutations. However, this way of computing p-values is often so slow that it\nis replaced by an approximation, which is accurate only for part of the\ninteresting range of parameters. Moreover, the accuracy of the approximation\ncan usually not be improved by increasing the computation time.\nWe introduce a new sampling-based algorithm which uses the fast Fourier\ntransform to compute p-values for the permutation test based on Pearson's\ncorrelation coefficient. The algorithm is practically and asymptotically faster\nthan straightforward sampling. Typically, its complexity is logarithmic in the\ninput size, while the complexity of straightforward sampling is linear. The\nidea behind the algorithm can also be used to accelerate the computation of\np-values for many other common statistical tests. The algorithm is easy to\nimplement, but its analysis involves results from the representation theory of\nthe symmetric group.\n", "title": "Fast computation of p-values for the permutation test based on Pearson's correlation coefficient and other statistical tests" }
prediction: null | prediction_agent: null | annotation: [ "Statistics" ] | annotation_agent: null | multi_label: true | explanation: null | id: 11540 | metadata: null | status: Validated | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Gaussian belief propagation (BP) has been widely used for distributed\nestimation in large-scale networks such as the smart grid, communication\nnetworks, and social networks, where local measurements/observations are\nscattered over a wide geographical area. However, the convergence of Gaus- sian\nBP is still an open issue. In this paper, we consider the convergence of\nGaussian BP, focusing in particular on the convergence of the information\nmatrix. We show analytically that the exchanged message information matrix\nconverges for arbitrary positive semidefinite initial value, and its dis- tance\nto the unique positive definite limit matrix decreases exponentially fast.\n", "title": "Convergence analysis of the information matrix in Gaussian belief propagation" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11541 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Static and dynamic properties of vortices in a two-component Bose-Einstein\ncondensate with Rashba spin-orbit coupling are investigated. The mass current\naround a vortex core in the plane-wave phase is found to be deformed by the\nspin-orbit coupling, and this makes the dynamics of the vortex pairs quite\ndifferent from those in a scalar Bose-Einstein condensate. The velocity of a\nvortex-antivortex pair is much smaller than that without spin-orbit coupling,\nand there exist stationary states. Two vortices with the same circulation move\naway from each other or unite to form a stationary state.\n", "title": "Vortex pairs in a spin-orbit coupled Bose-Einstein condensate" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11542 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Advances in remote sensing technologies have made it possible to use\nhigh-resolution visual data for weather observation and forecasting tasks. We\npropose the use of multi-layer neural networks for understanding complex\natmospheric dynamics based on multichannel satellite images. The capability of\nour model was evaluated by using a linear regression task for single typhoon\ncoordinates prediction. A specific combination of models and different\nactivation policies enabled us to obtain an interesting prediction result in\nthe northeastern hemisphere (ENH).\n", "title": "GlobeNet: Convolutional Neural Networks for Typhoon Eye Tracking from Remote Sensing Imagery" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11543 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Multiresolution analysis and matrix factorization are foundational tools in\ncomputer vision. In this work, we study the interface between these two\ndistinct topics and obtain techniques to uncover hierarchical block structure\nin symmetric matrices -- an important aspect in the success of many vision\nproblems. Our new algorithm, the incremental multiresolution matrix\nfactorization, uncovers such structure one feature at a time, and hence scales\nwell to large matrices. We describe how this multiscale analysis goes much\nfarther than what a direct global factorization of the data can identify. We\nevaluate the efficacy of the resulting factorizations for relative leveraging\nwithin regression tasks using medical imaging data. We also use the\nfactorization on representations learned by popular deep networks, providing\nevidence of their ability to infer semantic relationships even when they are\nnot explicitly trained to do so. We show that this algorithm can be used as an\nexploratory tool to improve the network architecture, and within numerous other\nsettings in vision.\n", "title": "The Incremental Multiresolution Matrix Factorization Algorithm" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11544 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Let $k$ be an algebraically closed field and $A$ the polynomial algebra in\n$r$ variables with coefficients in $k$. In case the characteristic of $k$ is\n$2$, Carlsson conjectured that for any $DG$-$A$-module $M$ of dimension $N$ as\na free $A$-module, if the homology of $M$ is nontrivial and finite dimensional\nas a $k$-vector space, then $2^r\\leq N$. Here we state a stronger conjecture\nabout varieties of square-zero upper-triangular $N\\times N$ matrices with\nentries in $A$. Using stratifications of these varieties via Borel orbits, we\nshow that the stronger conjecture holds when $N < 8$ or $r < 3$ without any\nrestriction on the characteristic of $k$. As a consequence, we attain a new\nproof for many of the known cases of Carlsson's conjecture and give new results\nwhen $N > 4$ and $r = 2$.\n", "title": "Carlsson's rank conjecture and a conjecture on square-zero upper triangular matrices" }
prediction: null | prediction_agent: null | annotation: null | annotation_agent: null | multi_label: true | explanation: null | id: 11545 | metadata: null | status: Default | event_timestamp: null | metrics: null

text: null | inputs:
{ "abstract": " Magnesium and its alloys are being considered for biodegradable biomaterials.\nHowever, high and uncontrollable corrosion rates have limited the use of\nmagnesium and its alloys in biological environments. In this research, high\npurified magnesium (HP-Mg) was coated with stearic acid in order to improve the\ncorrosion resistance of magnesium. Anodization and immersion in stearic acid\nwere used to form a hydrophobic layer on magnesium substrate. Different DC\nvoltages, times, electrolytes, and temperatures were tested. Electrochemical\nimpedance spectroscopy and potentiodynamic polarization were used to measure\nthe corrosion rates of the coated HP-Mg. The results showed that optimum\ncorrosion resistance occurred for specimens anodized at +4 volts for 4 minutes\nat 70°C in borate benzoate. The corrosion resistance was temporarily\nenhanced by 1000x.\n", "title": "Effect of Anodizing Parameters on Corrosion Resistance of Coated Purified Magnesium" }
null
null
null
null
true
null
11546
null
Default
null
null
null
{ "abstract": " In this article, we discuss a probabilistic interpretation of McShane's\nidentity as describing a finite measure on the space of embedded paths though a\npoint.\n", "title": "The probabilistic nature of McShane's identity: planar tree coding of simple loops" }
null
null
null
null
true
null
11547
null
Default
null
null
null
{ "abstract": " In most process control systems nowadays, process measurements are\nperiodically collected and archived in historians. Analytics applications\nprocess the data, and provide results offline or in a time period that is\nconsiderably slow in comparison to the performance of the manufacturing\nprocess. Along with the proliferation of Internet-of-Things (IoT) and the\nintroduction of \"pervasive sensors\" technology in process industries,\nincreasing number of sensors and actuators are installed in process plants for\npervasive sensing and control, and the volume of produced process data is\ngrowing exponentially. To digest these data and meet the ever-growing\nrequirements to increase production efficiency and improve product quality,\nthere needs to be a way to both improve the performance of the analytics system\nand scale the system to closely monitor a much larger set of plant resources.\nIn this paper, we present a real-time data analytics platform, called RT-DAP,\nto support large-scale continuous data analytics in process industries. RT-DAP\nis designed to be able to stream, store, process and visualize a large volume\nof realtime data flows collected from heterogeneous plant resources, and\nfeedback to the control system and operators in a realtime manner. A prototype\nof the platform is implemented on Microsoft Azure. Our extensive experiments\nvalidate the design methodologies of RT-DAP and demonstrate its efficiency in\nboth component and system levels.\n", "title": "RT-DAP: A Real-Time Data Analytics Platform for Large-scale Industrial Process Monitoring and Control" }
null
null
[ "Computer Science" ]
null
true
null
11548
null
Validated
null
null
null
{ "abstract": " The popular BFGS quasi-Newton minimization algorithm under reasonable\nconditions converges globally on smooth convex functions. This result was\nproved by Powell in 1976: we consider its implications for functions that are\nnot smooth. In particular, an analogous convergence result holds for functions,\nlike the Euclidean norm, that are nonsmooth at the minimizer.\n", "title": "BFGS convergence to nonsmooth minimizers of convex functions" }
null
null
null
null
true
null
11549
null
Default
null
null
null
{ "abstract": " Volunteer computing (VC) or distributed computing projects are common in the\ncitizen cyberscience (CCS) community and present extensive opportunities for\nscientists to make use of computing power donated by volunteers to undertake\nlarge-scale scientific computing tasks. Volunteer computing is generally a\nnon-interactive process for those contributing computing resources to a project\nwhereas volunteer thinking (VT) or distributed thinking, which allows\nvolunteers to participate interactively in citizen cyberscience projects to\nsolve human computation tasks. In this paper we describe the integration of\nthree tools, the Virtual Atom Smasher (VAS) game developed by CERN, LiveQ, a\njob distribution middleware, and CitizenGrid, an online platform for hosting\nand providing computation to CCS projects. This integration demonstrates the\ncombining of volunteer computing and volunteer thinking to help address the\nscientific and educational goals of games like VAS. The paper introduces the\nthree tools and provides details of the integration process along with further\npotential usage scenarios for the resulting platform.\n", "title": "A collaborative citizen science platform for real-time volunteer computing and games" }
null
null
[ "Computer Science" ]
null
true
null
11550
null
Validated
null
null
null
{ "abstract": " We develop a theory of the quasiparticle interference (QPI) in multiband\nsuperconductors based on strong-coupling Eliashberg approach within the Born\napproximation. In the framework of this theory, we study dependencies of the\nQPI response function in the multiband superconductors with nodeless s-wave\nsuperconductive order parameter. We pay a special attention to the difference\nof the quasiparticle scattering between the bands having the same and opposite\nsigns of the order parameter. We show that, at the momentum values close to the\nmomentum transfer between two bands, the energy dependence of the quasiparticle\ninterference response function has three singularities. Two of these correspond\nto the values of the gap functions and the third one depends on both the gaps\nand the transfer momentum. We argue that only the singularity near the smallest\nband gap may be used as an universal tool to distinguish between $s_{++}$ and\n$s_{\\pm}$ order parameters. The robustness of the sign of the response function\npeak near the smaller gap value, irrespective of the change in parameters, in\nboth the symmetry cases is a promising feature that can be harnessed\nexperimentally.\n", "title": "Quasiparticle interference in multiband superconductors with strong coupling" }
null
null
[ "Physics" ]
null
true
null
11551
null
Validated
null
null
null
{ "abstract": " Almost twenty years ago, E.R. Fernholz introduced portfolio generating\nfunctions which can be used to construct a variety of portfolios, solely in the\nterms of the individual companies' market weights. I. Karatzas and J. Ruf\nrecently developed another methodology for the functional construction of\nportfolios, which leads to very simple conditions for strong relative arbitrage\nwith respect to the market. In this paper, both of these notions of functional\nportfolio generation are generalized in a pathwise, probability-free setting;\nportfolio generating functions are substituted by path-dependent functionals,\nwhich involve the current market weights, as well as additional\nbounded-variation functions of past and present market weights. This\ngeneralization leads to a wider class of functionally-generated portfolios than\nwas heretofore possible, and yields improved conditions for outperforming the\nmarket portfolio over suitable time-horizons.\n", "title": "Trading Strategies Generated by Path-dependent Functionals of Market Weights" }
null
null
null
null
true
null
11552
null
Default
null
null
null
{ "abstract": " We introduce an approach based on the Givens representation that allows for a\nroutine, reliable, and flexible way to infer Bayesian models with orthogonal\nmatrix parameters. This class of models most notably includes models from\nmultivariate statistics such factor models and probabilistic principal\ncomponent analysis (PPCA). Our approach overcomes several of the practical\nbarriers to using the Givens representation in a general Bayesian inference\nframework. In particular, we show how to inexpensively compute the\nchange-of-measure term necessary for transformations of random variables. We\nalso show how to overcome specific topological pathologies that arise when\nrepresenting circular random variables in an unconstrained space. In addition,\nwe discuss how the alternative parameterization can be used to define new\ndistributions over orthogonal matrices as well as to constrain parameter space\nto eliminate superfluous posterior modes in models such as PPCA. While previous\ninference approaches to this problem involved specialized updates to the\northogonal matrix parameters, our approach lets us represent these constrained\nparameters in an unconstrained form. Unlike previous approaches, this allows\nfor the inference of models with orthogonal matrix parameters using any modern\ninference algorithm including those available in modern Bayesian modeling\nframeworks such as Stan, Edward, or PyMC3. We illustrate with examples how our\napproach can be used in practice in Stan to infer models with orthogonal matrix\nparameters, and we compare to existing methods.\n", "title": "General Bayesian Inference over the Stiefel Manifold via the Givens Representation" }
null
null
null
null
true
null
11553
null
Default
null
null
null
{ "abstract": " Silicon single-photon detectors (SPDs) are the key devices for detecting\nsingle photons in the visible wavelength range. Here we present high detection\nefficiency silicon SPDs dedicated to the generation of multiphoton entanglement\nbased on the technique of high-frequency sine wave gating. The silicon\nsingle-photon avalanche diodes (SPADs) components are acquired by disassembling\n6 commercial single-photon counting modules (SPCMs). Using the new quenching\nelectronics, the average detection efficiency of SPDs is increased from 68.6%\nto 73.1% at a wavelength of 785 nm. These sine wave gating SPDs are then\napplied in a four-photon entanglement experiment, and the four-fold coincidence\ncount rate is increased by 30% without degrading its visibility compared with\nthe original SPCMs.\n", "title": "Sine wave gating Silicon single-photon detectors for multiphoton entanglement experiments" }
null
null
null
null
true
null
11554
null
Default
null
null
null
{ "abstract": " In many statistical applications that concern mathematical psychologists, the\nconcept of Fisher information plays an important role. In this tutorial we\nclarify the concept of Fisher information as it manifests itself across three\ndifferent statistical paradigms. First, in the frequentist paradigm, Fisher\ninformation is used to construct hypothesis tests and confidence intervals\nusing maximum likelihood estimators; second, in the Bayesian paradigm, Fisher\ninformation is used to define a default prior; lastly, in the minimum\ndescription length paradigm, Fisher information is used to measure model\ncomplexity.\n", "title": "A Tutorial on Fisher Information" }
null
null
null
null
true
null
11555
null
Default
null
null
null
{ "abstract": " We investigate the identification of hydrogen-poor superluminous supernovae\n(SLSNe I) using a photometric analysis, without including an arbitrary\nmagnitude threshold. We assemble a homogeneous sample of previously classified\nSLSNe I from the literature, and fit their light curves using Gaussian\nprocesses. From the fits, we identify four photometric parameters that have a\nhigh statistical significance when correlated, and combine them in a parameter\nspace that conveys information on their luminosity and color evolution. This\nparameter space presents a new definition for SLSNe I, which can be used to\nanalyse existing and future transient datasets. We find that 90% of previously\nclassified SLSNe I meet our new definition. We also examine the evidence for\ntwo subclasses of SLSNe I, combining their photometric evolution with\nspectroscopic information, namely the photospheric velocity and its gradient. A\ncluster analysis reveals the presence of two distinct groups. `Fast' SLSNe show\nfast light curves and color evolution, large velocities, and a large velocity\ngradient. `Slow' SLSNe show slow light curve and color evolution, small\nexpansion velocities, and an almost non-existent velocity gradient. Finally, we\ndiscuss the impact of our analyses in the understanding of the powering engine\nof SLSNe, and their implementation as cosmological probes in current and future\nsurveys.\n", "title": "A statistical approach to identify superluminous supernovae and probe their diversity" }
null
null
null
null
true
null
11556
null
Default
null
null
null
{ "abstract": " An empirical relation indicates that an increase of living standard decreases\nthe Total Fertility Rate (TFR), but this trend was broken in highly developed\ncountries in 2005. The reversal of the TFR was associated with the continuous\neconomic and social development expressed by the Human Development Index (HDI).\nWe have investigated how universal and persistent the TFR reversal is. The\nresults show that in highly developed countries, $ \\mathrm{HDI}>0.85 $, the TFR\nand the HDI are not correlated in 2010-2014. Detailed analyses of correlations\nand differences of the TFR and the HDI indicate a decrease of the TFR if the\nHDI increases in this period. However, we found that a reversal of the TFR as a\nconsequence of economic development started at medium levels of the HDI, i.e. $\n0.575<\\mathrm{HDI}<0.85 $, in many countries. Our results show a transient\nnature of the TFR reversal in highly developed countries in 2010-2014 and a\nrelative stable trend of the TFR increase in medium developed countries in\nlonger time periods. We believe that knowledge of the fundamental nature of the\nTFR is very important for the survival of medium and highly developed\nsocieties.\n", "title": "Low fertility rate reversal: a feature of interactions between Biological and Economic systems" }
null
null
null
null
true
null
11557
null
Default
null
null
null
{ "abstract": " We investigate a family of regression problems in a semi-supervised setting.\nThe task is to assign real-valued labels to a set of $n$ sample points,\nprovided a small training subset of $N$ labeled points. A goal of\nsemi-supervised learning is to take advantage of the (geometric) structure\nprovided by the large number of unlabeled data when assigning labels. We\nconsider random geometric graphs, with connection radius $\\epsilon(n)$, to\nrepresent the geometry of the data set. Functionals which model the task reward\nthe regularity of the estimator function and impose or reward the agreement\nwith the training data. Here we consider the discrete $p$-Laplacian\nregularization.\nWe investigate asymptotic behavior when the number of unlabeled points\nincreases, while the number of training points remains fixed. We uncover a\ndelicate interplay between the regularizing nature of the functionals\nconsidered and the nonlocality inherent to the graph constructions. We\nrigorously obtain almost optimal ranges on the scaling of $\\epsilon(n)$ for the\nasymptotic consistency to hold. We prove that the minimizers of the discrete\nfunctionals in random setting converge uniformly to the desired continuum\nlimit. Furthermore we discover that for the standard model used there is a\nrestrictive upper bound on how quickly $\\epsilon(n)$ must converge to zero as\n$n \\to \\infty$. We introduce a new model which is as simple as the original\nmodel, but overcomes this restriction.\n", "title": "Analysis of $p$-Laplacian Regularization in Semi-Supervised Learning" }
null
null
null
null
true
null
11558
null
Default
null
null
null
{ "abstract": " We consider the nonlinear Schrödinger (NLS) equation with the subcritical\npower nonlinearity on a star graph consisting of $N$ edges and a single vertex\nunder generalized Kirchhoff boundary conditions. The stationary NLS equation\nmay admit a family of solitary waves parameterized by a translational\nparameter, which we call the shifted states. The two main examples include (i)\nthe star graph with even $N$ under the classical Kirchhoff boundary conditions\nand (ii) the star graph with one incoming edge and $N-1$ outgoing edges under a\nsingle constraint on coefficients of the generalized Kirchhoff boundary\nconditions. We obtain the general counting results on the Morse index of the\nshifted states and apply them to the two examples. In the case of (i), we prove\nthat the shifted states with even $N \\geq 4$ are saddle points of the action\nfunctional which are spectrally unstable under the NLS flow. In the case of\n(ii), we prove that the shifted states with the monotone profiles in the $N-1$\noutgoing edges are spectrally stable, whereas the shifted states with\nnon-monotone profiles in the $N-1$ outgoing edges are spectrally unstable, the\ntwo families intersect at the half-soliton states which are spectrally stable\nbut nonlinearly unstable. Since the NLS equation on a star graph with shifted\nstates can be reduced to the homogeneous NLS equation on a line, the spectral\ninstability of shifted states is due to the perturbations breaking this\nreduction. We give a simple argument suggesting that the spectrally stable\nshifted states are nonlinear unstable under the NLS flow due to the\nperturbations breaking the reduction to the NLS equation on a line.\n", "title": "Spectral stability of shifted states on star graphs" }
null
null
null
null
true
null
11559
null
Default
null
null
null
{ "abstract": " In this paper we are interested in multifractional stable processes where the\nself-similarity index $H$ is a function of time, in other words $H$ becomes\ntime changing, and the stability index $\\alpha$ is a constant. Using $\\beta$-\nnegative power variations ($-1/2<\\beta<0$), we propose estimators for the value\nof the multifractional function $H$ at a fixed time $t_0$ and for $\\alpha$ for\ntwo cases: multifractional Brownian motion ($\\alpha=2$) and linear\nmultifractional stable motion ($0<\\alpha<2$). We get the consistency of our\nestimates for the underlying processes with the rate of convergence.\n", "title": "Estimation of the multifractional function and the stability index of linear multifractional stable processes" }
null
null
null
null
true
null
11560
null
Default
null
null
null
{ "abstract": " We report on observation of the unusual kind of solar microflares, presumably\nassociated with the so-called \"topological trigger\" of magnetic reconnection,\nwhich was theoretically suggested long time ago by Gorbachev et al. (Sov. Ast.\n1988, v.32, p.308) but has not been clearly identified so far by observations.\nAs can be seen in pictures by Hinode SOT in CaII line, there may be a bright\nloop connecting two sunspots, which looks at the first sight just as a magnetic\nfield line connecting the opposite poles. However, a closer inspection of SDO\nHMI magnetograms shows that the respective arc is anchored in the regions of\nthe same polarity near the sunspot boundaries. Yet another peculiar feature is\nthat the arc flashes almost instantly as a thin strip and then begins to expand\nand decay, while the typical chromospheric flares in CaII line are much wider\nand propagate progressively in space. A qualitative explanation of the unusual\nflare can be given by the above-mentioned model of topological trigger. Namely,\nthere are such configurations of the magnetic sources on the surface of\nphotosphere that their tiny displacements result in the formation and fast\nmotion of a 3D null point along the arc located well above the plane of the\nsources. So, such a null point can quickly ignite a magnetic reconnection along\nthe entire its trajectory. Pictorially, this can be presented as flipping the\nso-called two-dome magnetic-field structure (which is just the reason why such\nmechanism was called topological). The most important prerequisite for the\ndevelopment of topological instability in the two-dome structure is a cruciform\narrangement of the magnetic sources in its base, and this condition is really\nsatisfied in the case under consideration.\n", "title": "Observation of \"Topological\" Microflares in the Solar Atmosphere" }
null
null
null
null
true
null
11561
null
Default
null
null
null
{ "abstract": " We study the commutative positive varieties of languages closed under various\noperations: shuffle, renaming and product over one-letter alphabets.\n", "title": "Commutative positive varieties of languages" }
null
null
null
null
true
null
11562
null
Default
null
null
null
{ "abstract": " We present a model of contagion that unifies and generalizes existing models\nof the spread of social influences and micro-organismal infections. Our model\nincorporates individual memory of exposure to a contagious entity (e.g., a\nrumor or disease), variable magnitudes of exposure (dose sizes), and\nheterogeneity in the susceptibility of individuals. Through analysis and\nsimulation, we examine in detail the case where individuals may recover from an\ninfection and then immediately become susceptible again (analogous to the\nso-called SIS model). We identify three basic classes of contagion models which\nwe call \\textit{epidemic threshold}, \\textit{vanishing critical mass}, and\n\\textit{critical mass} classes, where each class of models corresponds to\ndifferent strategies for prevention or facilitation. We find that the\nconditions for a particular contagion model to belong to one of the these three\nclasses depend only on memory length and the probabilities of being infected by\none and two exposures respectively. These parameters are in principle\nmeasurable for real contagious influences or entities, thus yielding empirical\nimplications for our model. We also study the case where individuals attain\npermanent immunity once recovered, finding that epidemics inevitably die out\nbut may be surprisingly persistent when individuals possess memory.\n", "title": "A generalized model of social and biological contagion" }
null
null
null
null
true
null
11563
null
Default
null
null
null
{ "abstract": " The luminous efficiency of meteors is poorly known, but critical for\ndetermining the meteoroid mass. We present an uncertainty analysis of the\nluminous efficiency as determined by the classical ablation equations, and\nsuggest a possible method for determining the luminous efficiency of real\nmeteor events. We find that a two-term exponential fit to simulated lag data is\nable to reproduce simulated luminous efficiencies reasonably well.\n", "title": "Luminous Efficiency Estimates of Meteors -I. Uncertainty analysis" }
null
null
null
null
true
null
11564
null
Default
null
null
null
{ "abstract": " A van der Waals (vdW) density functional was implemented in the mixed basis\napproach previously developed for studying two dimensional systems, in which\nthe vdW interaction plays an important role. The basis functions here are taken\nto be the localized B-splines for the finite non-periodic dimension and plane\nwaves for the two periodic directions. This approach will significantly reduce\nthe size of the basis set, especially for large systems, and therefore is\ncomputationally efficient for the diagonalization of the Kohn-Sham Hamiltonian.\nWe applied the present algorithm to calculate the binding energy for the\ntwo-layer graphene case and the results are consistent with data reported\nearlier. We also found that, due to the relatively weak vdW interaction, the\ncharge density obtained self-consistently for the whole bi-layer graphene\nsystem is not significantly different from the simple addition of those for the\ntwo individual one-layer system, except when the interlayer separation is close\nenough that the strong electron-repulsion dominates. This finding suggests an\nefficient way to calculate the vdW interaction for large complex systems\ninvolving the Moire pattern configurations.\n", "title": "Application of Van Der Waals Density Functionals to Two Dimensional Systems Based on a Mixed Basis Approach" }
null
null
null
null
true
null
11565
null
Default
null
null
null
{ "abstract": " We analyze the interiors of HD~219134~b and c, which are among the coolest\nsuper Earths detected thus far. Without using spectroscopic measurements, we\naim at constraining if the possible atmospheres are hydrogen-rich or\nhydrogen-poor. In a first step, we employ a full probabilistic Bayesian\ninference analysis in order to rigorously quantify the degeneracy of interior\nparameters given the data of mass, radius, refractory element abundances,\nsemi-major axes, and stellar irradiation. We obtain constraints on structure\nand composition for core, mantle, ice layer, and atmosphere. In a second step,\nwe aim to draw conclusions on the nature of possible atmospheres by considering\natmospheric escape. Specifically, we compare the actual possible atmospheres to\na threshold thickness above which a primordial (H$_2$-dominated) atmosphere can\nbe retained against evaporation over the planet's lifetime. The best\nconstrained parameters are the individual layer thicknesses. The maximum radius\nfraction of possible atmospheres are 0.18 and 0.13 $R$ (radius), for planets b\nand c, respectively. These values are significantly smaller than the threshold\nthicknesses of primordial atmospheres: 0.28 and 0.19 $R$, respectively. Thus,\nthe possible atmospheres of planets b and c are unlikely to be H$_2$-dominated.\nHowever, whether possible volatile layers are made of gas or liquid/solid water\ncannot be uniquely determined. Our main conclusions are: (1) the possible\natmospheres for planets b and c are enriched and thus possibly secondary in\nnature, and (2) both planets may contain a gas layer, whereas the layer of HD\n219134 b must be larger. HD 219134 c can be rocky.\n", "title": "Secondary atmospheres on HD 219134 b and c" }
null
null
[ "Physics" ]
null
true
null
11566
null
Validated
null
null
null
{ "abstract": " We give a complete classification (up to isomorphism) of Lie conformal\nalgebras which are free of rank two as $\\C[\\partial]$-modules, and determine\ntheir automorphism groups.\n", "title": "Classification of rank two Lie conformal algebras" }
null
null
null
null
true
null
11567
null
Default
null
null
null
{ "abstract": " We prove that two smooth families of 2-connected domains in $\\cc$ are\nsmoothly equivalent if they are equivalent under a possibly discontinuous\nfamily of biholomorphisms. We construct, for $m \\geq 3$, two smooth families of\nsmoothly bounded $m$-connected domains in $\\cc$, and for $n\\geq2$, two families\nof strictly pseudoconvex domains in $\\cc^n$, that are equivalent under\ndiscontinuous families of biholomorphisms but not under any continuous family\nof biholomorphisms. Finally, we give sufficient conditions for the smooth\nequivalence of two smooth families of domains.\n", "title": "Smooth equivalence of deformations of domains in complex euclidean spaces" }
null
null
[ "Mathematics" ]
null
true
null
11568
null
Validated
null
null
null
{ "abstract": " We improve existing lower bounds of the hyperbolic dimension for meromophic\nfunctions that have a logarithmic tract {\\Omega} which is a Hölder domain.\nThese bounds are given in terms of the fractal behavior, measured with integral\nmeans, of the boundary of {\\Omega} at infinity.\n", "title": "A lower bound of the hyperbolic dimension for meromorphic functions having a logarithmic Hölder tract" }
null
null
null
null
true
null
11569
null
Default
null
null
null
{ "abstract": " An important problem in many domains is to predict how a system will respond\nto interventions. This task is inherently linked to estimating the system's\nunderlying causal structure. To this end, Invariant Causal Prediction (ICP)\n(Peters et al., 2016) has been proposed which learns a causal model exploiting\nthe invariance of causal relations using data from different environments. When\nconsidering linear models, the implementation of ICP is relatively\nstraightforward. However, the nonlinear case is more challenging due to the\ndifficulty of performing nonparametric tests for conditional independence. In\nthis work, we present and evaluate an array of methods for nonlinear and\nnonparametric versions of ICP for learning the causal parents of given target\nvariables. We find that an approach which first fits a nonlinear model with\ndata pooled over all environments and then tests for differences between the\nresidual distributions across environments is quite robust across a large\nvariety of simulation settings. We call this procedure \"invariant residual\ndistribution test\". In general, we observe that the performance of all\napproaches is critically dependent on the true (unknown) causal structure and\nit becomes challenging to achieve high power if the parental set includes more\nthan two variables. As a real-world example, we consider fertility rate\nmodelling which is central to world population projections. We explore\npredicting the effect of hypothetical interventions using the accepted models\nfrom nonlinear ICP. The results reaffirm the previously observed central causal\nrole of child mortality rates.\n", "title": "Invariant Causal Prediction for Nonlinear Models" }
null
null
[ "Statistics" ]
null
true
null
11570
null
Validated
null
null
null
{ "abstract": " For the multivariate COGARCH(1,1) volatility process we show sufficient\nconditions for the existence of a unique stationary distribution, for the\ngeometric ergodicity and for the finiteness of moments of the stationary\ndistribution. One of the conditions demands a sufficiently fast exponential\ndecay of the MUCOGARCH(1,1) volatility process. Furthermore, we show easily\napplicable sufficient conditions for the needed irreducibility of the\nvolatility process living in the cone of positive semidefinite matrices, if the\ndriving Lévy process is a compound Poisson process.\n", "title": "Geometric Ergodicity of the MUCOGARCH(1,1) process" }
null
null
null
null
true
null
11571
null
Default
null
null
null
{ "abstract": " In this short essay, we discuss some basic features of cognitive activity at\nseveral different space-time scales: from neural networks in the brain to\ncivilizations. One motivation for such comparative study is its heuristic\nvalue. Attempts to better understand the functioning of \"wetware\" involved in\ncognitive activities of central nervous system by comparing it with a computing\ndevice have a long tradition. We suggest that comparison with Internet might be\nmore adequate. We briefly touch upon such subjects as encoding, compression,\nand Saussurean trichotomy langue/langage/parole in various environments.\n", "title": "Cognitive networks: brains, internet, and civilizations" }
null
null
null
null
true
null
11572
null
Default
null
null
null
{ "abstract": " The trigram `I love being' is expected to be followed by positive words such\nas `happy'. In a sarcastic sentence, however, the word `ignored' may be\nobserved. The expected and the observed words are, thus, incongruous. We model\nsarcasm detection as the task of detecting incongruity between an observed and\nan expected word. In order to obtain the expected word, we use Context2Vec, a\nsentence completion library based on Bidirectional LSTM. However, since the\nexact word where such an incongruity occurs may not be known in advance, we\npresent two approaches: an All-words approach (which consults sentence\ncompletion for every content word) and an Incongruous words-only approach\n(which consults sentence completion for the 50% most incongruous content\nwords). The approaches outperform reported values for tweets but not for\ndiscussion forum posts. This is likely to be because of redundant consultation\nof sentence completion for discussion forum posts. Therefore, we consider an\noracle case where the exact incongruous word is manually labeled in a corpus\nreported in past work. In this case, the performance is higher than the\nall-words approach. This sets up the promise for using sentence completion for\nsarcasm detection.\n", "title": "Expect the unexpected: Harnessing Sentence Completion for Sarcasm Detection" }
null
null
null
null
true
null
11573
null
Default
null
null
null
{ "abstract": " Deep neural networks (DNNs) have achieved superior performance in various\nprediction tasks, but can be very vulnerable to adversarial examples or\nperturbations. Therefore, it is crucial to measure the sensitivity of DNNs to\nvarious forms of perturbations in real applications. We introduce a novel\nperturbation manifold and its associated influence measure to quantify the\neffects of various perturbations on DNN classifiers. Such perturbations include\nvarious external and internal perturbations to input samples and network\nparameters. The proposed measure is motivated by information geometry and\nprovides desirable invariance properties. We demonstrate that our influence\nmeasure is useful for four model building tasks: detecting potential\n'outliers', analyzing the sensitivity of model architectures, comparing network\nsensitivity between training and test sets, and locating vulnerable areas.\nExperiments show reasonably good performance of the proposed measure for the\npopular DNN models ResNet50 and DenseNet121 on CIFAR10 and MNIST datasets.\n", "title": "Sensitivity Analysis of Deep Neural Networks" }
null
null
null
null
true
null
11574
null
Default
null
null
null
{ "abstract": " The Hylleraas-B-splines basis set is introduced in this paper, which can be\nused to obtain the eigenvalues and eigenstates of helium-like system's\nHamiltonian. Comparing with traditional B-splines basis, the rate of\nconvergence of our results has been significantly improved. Through combine\nthis method and pseudo-states sum over scheme, we obtained the high precision\nvalues of static dipole porlarizabilities of the $1{}^1S-5{}^1S$,\n$2{}^3S-6{}^3S$ states of helium in length and velocity gauges respectively,\nand the results get good agreements. The final extrapolate results of\nporlarizabilities in different quantum states arrived eight significant digits\nat least, which fully illustrates the advantage and convenience of this method\nin the problems involving continuous states.\n", "title": "An application of the Hylleraas-B-splines basis set: High accuracy calculations of the static dipole polarizabilities of helium" }
null
null
null
null
true
null
11575
null
Default
null
null
null
{ "abstract": " A Revival of the South Equatorial Belt (SEB) is an organised disturbance on a\ngrand scale. It starts with a single vigorous outbreak from which energetic\nstorms and disturbances spread around the planet in the different zonal\ncurrents. The Revival that began in 2010 was better observed than any before\nit. The observations largely validate the historical descriptions of these\nevents: the major features portrayed therein, albeit at lower resolution, are\nindeed the large structural features described here. Our major conclusions\nabout the 2010 SEB Revival are as follows, and we show that most of them may be\ntypical of SEB Revivals.\n1) The Revival started with a bright white plume.\n2) The initial plume erupted in a pre-existing cyclonic oval ('barge').\nSubsequent white plumes continued to appear on the track of this barge, which\nwas the location of the sub-surface source of the whole Revival.\n3) These plumes were extremely bright in the methane absorption band, i.e.\nthrusting up to very high altitudes, especially when new.\n4) Brilliant, methane-bright plumes also appeared along the leading edge of\nthe central branch. Altogether, 7 plumes appeared at the source and at least 6\nalong the leading edge.\n5) The central branch of the outbreak was composed of large convective cells,\neach initiated by a bright plume, which only occupied a part of each cell,\nwhile a very dark streak defined its west edge.\n6) The southern branch began with darkening and sudden acceleration of\npre-existing faint spots in a slowly retrograding wave-train.\n7) Subsequent darker spots in the southern branch were complex structures,\nnot coherent vortices.\n8) Dark spots in the southern branch had typical SEBs jetstream speeds but\nwere unusually far south....\n", "title": "Jupiter's South Equatorial Belt cycle in 2009-2011: II, The SEB Revival" }
null
null
[ "Physics" ]
null
true
null
11576
null
Validated
null
null
null
{ "abstract": " Regularization is important for end-to-end speech models, since the models\nare highly flexible and easy to overfit. Data augmentation and dropout has been\nimportant for improving end-to-end models in other domains. However, they are\nrelatively under explored for end-to-end speech models. Therefore, we\ninvestigate the effectiveness of both methods for end-to-end trainable, deep\nspeech recognition models. We augment audio data through random perturbations\nof tempo, pitch, volume, temporal alignment, and adding random noise.We further\ninvestigate the effect of dropout when applied to the inputs of all layers of\nthe network. We show that the combination of data augmentation and dropout give\na relative performance improvement on both Wall Street Journal (WSJ) and\nLibriSpeech dataset of over 20%. Our model performance is also competitive with\nother end-to-end speech models on both datasets.\n", "title": "Improved Regularization Techniques for End-to-End Speech Recognition" }
null
null
null
null
true
null
11577
null
Default
null
null
null
{ "abstract": " The maximum coercivity that can be achieved for a given hard magnetic alloy\nis estimated by computing the energy barrier for the nucleation of a reversed\ndomain in an idealized microstructure without any structural defects and\nwithout any soft magnetic secondary phases. For\nSm$_{1-z}$Zr$_z$(Fe$_{1-y}$Co$_y$)$_{12-x}$Ti$_x$ based alloys, which are\nconsidered an alternative to Nd$_2$Fe$_{14}$B magnets with lower rare-earth\ncontent, the coercive field of a small magnetic cube is reduced to 60 percent\nof the anisotropy field at room temperature and to 50 percent of the anisotropy\nfield at elevated temperature (473K). This decrease of the coercive field is\ncaused by misorientation, demagnetizing fields and thermal fluctuations.\n", "title": "On the limits of coercivity in permanent magnets" }
null
null
[ "Physics" ]
null
true
null
11578
null
Validated
null
null
null
{ "abstract": " We formulate the $N$ soliton solution of the Wadati-Konno-Ichikawa equation\nthat is determined by purely algebraic equations. Derivation is based on the\nmatrix Riemann-Hilbert problem. We give examples of one soliton solution that\ninclude smooth soliton, bursting soliton, and loop type soliton. In addition,\nwe give an explicit example for two soliton solution that blows up in a finite\ntime.\n", "title": "$N$-soliton formula and blowup result of the Wadati-Konno-Ichikawa equation" }
null
null
null
null
true
null
11579
null
Default
null
null
null
{ "abstract": " Repairing locality is an appreciated feature for distributed storage, in\nwhich a damaged or lost data share can be repaired by accessing a subset of\nother shares much smaller than is required for decoding the complete data.\nHowever for Secret Sharing (SS) schemes, it has been proven theoretically that\nlocal repairing can not be achieved with perfect security for the majority of\nthreshold SS schemes, where all the shares are equally regarded in both secret\nrecovering and share repairing. In this paper we make an attempt on decoupling\nthe two processes to make secure local repairing possible. Dedicated repairing\nredundancies only for the repairing process are generated, which are random\nnumbers to the original secret. Through this manner a threshold SS scheme with\nimproved repairing locality is achieved on the condition that security of\nrepairing redundancies is ensured, or else our scheme degenerates into a\nperfect access structure that is equivalent to the best existing schemes can\ndo. To maximize security of the repairing redundancies, a random placement\nmechanism is also proposed.\n", "title": "Introduction of Improved Repairing Locality into Secret Sharing Schemes with Perfect Security" }
null
null
null
null
true
null
11580
null
Default
null
null
null
{ "abstract": " Kiyota, Murai and Wada conjectured in 2002 that the largest eigenvalue of the\nCartan matrix C of a block of a finite group is rational if and only if all\neigenvalues of C are rational. We provide a counterexample to this conjecture\nand discuss related questions.\n", "title": "A counterexample to a conjecture of Kiyota, Murai and Wada" }
null
null
null
null
true
null
11581
null
Default
null
null
null
{ "abstract": " There exist many ways to build an orthonormal basis of $\\mathbb{R}^N$,\nconsisting of the eigenvectors of the discrete Fourier transform (DFT). In this\npaper we show that there is only one such orthonormal eigenbasis of the DFT\nthat is optimal in the sense of an appropriate uncertainty principle. Moreover,\nwe show that these optimal eigenvectors of the DFT are direct analogues of the\nHermite functions, that they also satisfy a three-term recurrence relation and\nthat they converge to Hermite functions as $N$ increases to infinity.\n", "title": "Minimal Hermite-type eigenbasis of the discrete Fourier transform" }
null
null
null
null
true
null
11582
null
Default
null
null
null
{ "abstract": " We design new algorithms for the combinatorial pure exploration problem in\nthe multi-arm bandit framework. In this problem, we are given $K$ distributions\nand a collection of subsets $\\mathcal{V} \\subset 2^{[K]}$ of these\ndistributions, and we would like to find the subset $v \\in \\mathcal{V}$ that\nhas largest mean, while collecting, in a sequential fashion, as few samples\nfrom the distributions as possible. In both the fixed budget and fixed\nconfidence settings, our algorithms achieve new sample-complexity bounds that\nprovide polynomial improvements on previous results in some settings. Via an\ninformation-theoretic lower bound, we show that no approach based on uniform\nsampling can improve on ours in any regime, yielding the first interactive\nalgorithms for this problem with this basic property. Computationally, we show\nhow to efficiently implement our fixed confidence algorithm whenever\n$\\mathcal{V}$ supports efficient linear optimization. Our results involve\nprecise concentration-of-measure arguments and a new algorithm for linear\nprogramming with exponentially many constraints.\n", "title": "Disagreement-Based Combinatorial Pure Exploration: Sample Complexity Bounds and an Efficient Algorithm" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
11583
null
Validated
null
null
null
{ "abstract": " Tungsten oxide and its associated bronzes (compounds of tungsten oxide and an\nalkali metal) are well known for their interesting optical and electrical\ncharacteristics. We have modified the transport properties of thin WO$_3$ films\nby electrolyte gating using both ionic liquids and polymer electrolytes. We are\nable to tune the resistivity of the gated film by more than five orders of\nmagnitude, and a clear insulator-to-metal transition is observed. To clarify\nthe doping mechanism, we have performed a series of incisive operando\nexperiments, ruling out both a purely electronic effect (charge accumulation\nnear the interface) and oxygen-related mechanisms. We propose instead that\nhydrogen intercalation is responsible for doping WO$_3$ into a highly\nconductive ground state and provide evidence that it can be described as a\ndense polaronic gas.\n", "title": "Insulator to Metal Transition in WO$_3$ Induced by Electrolyte Gating" }
null
null
[ "Physics" ]
null
true
null
11584
null
Validated
null
null
null
{ "abstract": " We examine the problem of transforming matching collections of data points\ninto optimal correspondence. The classic RMSD (root-mean-square deviation)\nmethod calculates a 3D rotation that minimizes the RMSD of a set of test data\npoints relative to a reference set of corresponding points. Similar literature\nin aeronautics, photogrammetry, and proteomics employs numerical methods to\nfind the maximal eigenvalue of a particular $4\\!\\times\\! 4$ quaternion-based\nmatrix, thus specifying the quaternion eigenvector corresponding to the optimal\n3D rotation. Here we generalize this basic problem, sometimes referred to as\nthe \"Procrustes Problem,\" and present algebraic solutions that exhibit\nproperties that are inaccessible to traditional numerical methods. We begin\nwith the 4D data problem, a problem one dimension higher than the conventional\n3D problem, but one that is also solvable by quaternion methods, we then study\nthe 3D and 2D data problems as special cases. In addition, we consider data\nthat are themselves quaternions isomorphic to orthonormal triads describing 3\ncoordinate frames (amino acids in proteins possess such frames). Adopting a\nreasonable approximation to the exact quaternion-data minimization problem, we\nfind a novel closed form \"quaternion RMSD\" (QRMSD) solution for the optimal\nrotation from a quaternion data set to a reference set. We observe that\ncomposites of the RMSD and QRMSD measures, combined with problem-dependent\nparameters including scaling factors to make their incommensurate dimensions\ncompatible, could be suitable for certain matching tasks.\n", "title": "Extensions and Exact Solutions to the Quaternion-Based RMSD Problem" }
null
null
null
null
true
null
11585
null
Default
null
null
null
{ "abstract": " We critically review the recent debate between Doreen Fraser and David\nWallace on the interpretation of quantum field theory, with the aim of\nidentifying where the core of the disagreement lies. We show that, despite\nappearances, their conflict does not concern the existence of particles or the\noccurrence of unitarily inequivalent representations. Instead, the dispute\nultimately turns on the very definition of what a quantum field theory is. We\nfurther illustrate the fundamental differences between the two approaches by\ncomparing them both to the Bohmian program in quantum field theory.\n", "title": "Particles, Cutoffs and Inequivalent Representations. Fraser andWallace on Quantum Field Theory" }
null
null
null
null
true
null
11586
null
Default
null
null
null
{ "abstract": " It is needed to ensure the integrity of systems that process sensitive\ninformation and control many aspects of everyday life. We examine the use of\nmachine learning algorithms to detect malware using the system calls generated\nby executables-alleviating attempts at obfuscation as the behavior is monitored\nrather than the bytes of an executable. We examine several machine learning\ntechniques for detecting malware including random forests, deep learning\ntechniques, and liquid state machines. The experiments examine the effects of\nconcept drift on each algorithm to understand how well the algorithms\ngeneralize to novel malware samples by testing them on data that was collected\nafter the training data. The results suggest that each of the examined machine\nlearning algorithms is a viable solution to detect malware-achieving between\n90% and 95% class-averaged accuracy (CAA). In real-world scenarios, the\nperformance evaluation on an operational network may not match the performance\nachieved in training. Namely, the CAA may be about the same, but the values for\nprecision and recall over the malware can change significantly. We structure\nexperiments to highlight these caveats and offer insights into expected\nperformance in operational environments. In addition, we use the induced models\nto gain a better understanding about what differentiates the malware samples\nfrom the goodware, which can further be used as a forensics tool to understand\nwhat the malware (or goodware) was doing to provide directions for\ninvestigation and remediation.\n", "title": "Dynamic Analysis of Executables to Detect and Characterize Malware" }
null
null
null
null
true
null
11587
null
Default
null
null
null
{ "abstract": " Network support is a key success factor for talented people. As an example,\nthe Hungarian Talent Support Network involves close to 1500 Talent Points and\nmore than 200,000 people. This network started the Hungarian Templeton Program\nidentifying and helping 315 exceptional cognitive talents. This network is a\npart of the European Talent Support Network initiated by the European Council\nfor High Ability involving more than 300 organizations in over 30 countries in\nEurope and extending in other continents. These networks are giving good\nexamples that talented people often occupy a central, but highly dynamic\nposition in social networks. The involvement of such 'creative nodes' in\nnetwork-related decision making processes is vital, especially in novel\nenvironmental challenges. Such adaptive/learning responses characterize a large\nvariety of complex systems from proteins, through brains to society. It is\ncrucial for talent support programs to use these networking and learning\nprocesses to increase their efficiency further.\n", "title": "Network support of talented people" }
null
null
null
null
true
null
11588
null
Default
null
null
null
{ "abstract": " This paper shows that a perturbed form of gradient descent converges to a\nsecond-order stationary point in a number iterations which depends only\npoly-logarithmically on dimension (i.e., it is almost \"dimension-free\"). The\nconvergence rate of this procedure matches the well-known convergence rate of\ngradient descent to first-order stationary points, up to log factors. When all\nsaddle points are non-degenerate, all second-order stationary points are local\nminima, and our result thus shows that perturbed gradient descent can escape\nsaddle points almost for free. Our results can be directly applied to many\nmachine learning applications, including deep learning. As a particular\nconcrete example of such an application, we show that our results can be used\ndirectly to establish sharp global convergence rates for matrix factorization.\nOur results rely on a novel characterization of the geometry around saddle\npoints, which may be of independent interest to the non-convex optimization\ncommunity.\n", "title": "How to Escape Saddle Points Efficiently" }
null
null
null
null
true
null
11589
null
Default
null
null
null
{ "abstract": " This article outlines different stages in development of the national culture\nmodel, created by Geert Hofstede and his affiliates. This paper reveals and\nsynthesizes the contemporary review of the application spheres of this\nframework. Numerous applications of the dimensions set are used as a source of\nidentifying significant critiques, concerning different aspects in model's\noperation. These critiques are classified and their underlying reasons are also\noutlined by means of a fishbone diagram.\n", "title": "Geert Hofstede et al's set of national cultural dimensions - popularity and criticisms" }
null
null
null
null
true
null
11590
null
Default
null
null
null
{ "abstract": " Reliable uncertainty estimation for time series prediction is critical in\nmany fields, including physics, biology, and manufacturing. At Uber,\nprobabilistic time series forecasting is used for robust prediction of number\nof trips during special events, driver incentive allocation, as well as\nreal-time anomaly detection across millions of metrics. Classical time series\nmodels are often used in conjunction with a probabilistic formulation for\nuncertainty estimation. However, such models are hard to tune, scale, and add\nexogenous variables to. Motivated by the recent resurgence of Long Short Term\nMemory networks, we propose a novel end-to-end Bayesian deep model that\nprovides time series prediction along with uncertainty estimation. We provide\ndetailed experiments of the proposed solution on completed trips data, and\nsuccessfully apply it to large-scale time series anomaly detection at Uber.\n", "title": "Deep and Confident Prediction for Time Series at Uber" }
null
null
null
null
true
null
11591
null
Default
null
null
null
{ "abstract": " In this paper, we consider a vehicular network in which the wireless nodes\nare located on a system of roads. We model the roadways, which are\npredominantly straight and randomly oriented, by a Poisson line process (PLP)\nand the locations of nodes on each road as a homogeneous 1D Poisson point\nprocess (PPP). Assuming that each node transmits independently, the locations\nof transmitting and receiving nodes are given by two Cox processes driven by\nthe same PLP. For this setup, we derive the coverage probability of a typical\nreceiver, which is an arbitrarily chosen receiving node, assuming independent\nNakagami-$m$ fading over all wireless channels. Assuming that the typical\nreceiver connects to its closest transmitting node in the network, we first\nderive the distribution of the distance between the typical receiver and the\nserving node to characterize the desired signal power. We then characterize\ncoverage probability for this setup, which involves two key technical\nchallenges. First, we need to handle several cases as the serving node can\npossibly be located on any line in the network and the corresponding\ninterference experienced at the typical receiver is different in each case.\nSecond, conditioning on the serving node imposes constraints on the spatial\nconfiguration of lines, which require careful analysis of the conditional\ndistribution of the lines. We address these challenges in order to accurately\ncharacterize the interference experienced at the typical receiver. We then\nderive an exact expression for coverage probability in terms of the derivative\nof Laplace transform of interference power distribution. We analyze the trends\nin coverage probability as a function of the network parameters: line density\nand node density. We also study the asymptotic behavior of this model and\ncompare the coverage performance with that of a homogeneous 2D PPP model with\nthe same node density.\n", "title": "Coverage Analysis of a Vehicular Network Modeled as Cox Process Driven by Poisson Line Process" }
null
null
[ "Computer Science" ]
null
true
null
11592
null
Validated
null
null
null
{ "abstract": " Context: Information Technology consumes up to 10\\% of the world's\nelectricity generation, contributing to CO2 emissions and high energy costs.\nData centers, particularly databases, use up to 23% of this energy. Therefore,\nbuilding an energy-efficient (green) database engine could reduce energy\nconsumption and CO2 emissions.\nGoal: To understand the factors driving databases' energy consumption and\nexecution time throughout their evolution.\nMethod: We conducted an empirical case study of energy consumption by two\nMySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the\nrelationships of four software metrics to energy consumption and execution time\nto determine which metrics reflect the greenness and performance of a database.\nResults: Our analysis shows that database engines' energy consumption and\nexecution time increase as databases evolve. Moreover, the Lines of Code metric\nis correlated moderately to strongly with energy consumption and execution time\nin 88% of cases.\nConclusions: Our findings provide insights to both practitioners and\nresearchers. Database administrators may use them to select a fast, green\nrelease of the MySQL database engine. MySQL database-engine developers may use\nthe software metric to assess products' greenness and performance. Researchers\nmay use our findings to further develop new hypotheses or build models to\npredict greenness and performance of databases.\n", "title": "Database Engines: Evolution of Greenness" }
null
null
null
null
true
null
11593
null
Default
null
null
null
{ "abstract": " We introduce Parseval networks, a form of deep neural networks in which the\nLipschitz constant of linear, convolutional and aggregation layers is\nconstrained to be smaller than 1. Parseval networks are empirically and\ntheoretically motivated by an analysis of the robustness of the predictions\nmade by deep neural networks when their input is subject to an adversarial\nperturbation. The most important feature of Parseval networks is to maintain\nweight matrices of linear and convolutional layers to be (approximately)\nParseval tight frames, which are extensions of orthogonal matrices to\nnon-square matrices. We describe how these constraints can be maintained\nefficiently during SGD. We show that Parseval networks match the\nstate-of-the-art in terms of accuracy on CIFAR-10/100 and Street View House\nNumbers (SVHN) while being more robust than their vanilla counterpart against\nadversarial examples. Incidentally, Parseval networks also tend to train faster\nand make a better usage of the full capacity of the networks.\n", "title": "Parseval Networks: Improving Robustness to Adversarial Examples" }
null
null
null
null
true
null
11594
null
Default
null
null
null
{ "abstract": " High purity Zinc Selenide (ZnSe) crystals are produced starting from\nelemental Zn and Se to be used for the search of the neutrinoless double beta\ndecay (0{\\nu}DBD) of 82Se. In order to increase the number of emitting\nnuclides, enriched 82Se is used. Dedicated production lines for the synthesis\nand conditioning of the Zn82Se powder in order to make it suitable for crystal\ngrowth were assembled compliant with radio-purity constraints specific to rare\nevent physics experiments. Besides routine check of impurities concentration,\nhigh sensitivity measurements are made for radio-isotope concentrations in raw\nmaterials, reactants, consumables, ancillaries and intermediary products used\nfor ZnSe crystals production. Indications are given on the crystals perfection\nand how it is achieved. Since very expensive isotopically enriched material\n(82Se) is used, a special attention is given for acquiring the maximum yield in\nthe mass balance of all production stages. Production and certification\nprotocols are presented and resulting ready-to-use Zn82Se crystals are\ndescribed.\n", "title": "Production of 82Se enriched Zinc Selenide (ZnSe) crystals for the study of neutrinoless double beta decay" }
null
null
null
null
true
null
11595
null
Default
null
null
null
{ "abstract": " Algorithms for equilibrium computation generally make no attempt to ensure\nthat the computed strategies are understandable by humans. For instance the\nstrategies for the strongest poker agents are represented as massive binary\nfiles. In many situations, we would like to compute strategies that can\nactually be implemented by humans, who may have computational limitations and\nmay only be able to remember a small number of features or components of the\nstrategies that have been computed. We study poker games where private\ninformation distributions can be arbitrary. We create a large training set of\ngame instances and solutions, by randomly selecting the information\nprobabilities, and present algorithms that learn from the training instances in\norder to perform well in games with unseen information distributions. We are\nable to conclude several new fundamental rules about poker strategy that can be\neasily implemented by humans.\n", "title": "Computing Human-Understandable Strategies" }
null
null
[ "Statistics" ]
null
true
null
11596
null
Validated
null
null
null
{ "abstract": " We propose a probabilistic model to aggregate the answers of respondents\nanswering multiple-choice questions. The model does not assume that everyone\nhas access to the same information, and so does not assume that the consensus\nanswer is correct. Instead, it infers the most probable world state, even if\nonly a minority vote for it. Each respondent is modeled as receiving a signal\ncontingent on the actual world state, and as using this signal to both\ndetermine their own answer and predict the answers given by others. By\nincorporating respondent's predictions of others' answers, the model infers\nlatent parameters corresponding to the prior over world states and the\nprobability of different signals being received in all possible world states,\nincluding counterfactual ones. Unlike other probabilistic models for\naggregation, our model applies to both single and multiple questions, in which\ncase it estimates each respondent's expertise. The model shows good\nperformance, compared to a number of other probabilistic models, on data from\nseven studies covering different types of expertise.\n", "title": "A statistical model for aggregating judgments by incorporating peer predictions" }
null
null
[ "Statistics" ]
null
true
null
11597
null
Validated
null
null
null
{ "abstract": " Debris disk morphology is wavelength dependent due to the wide range of\nparticle sizes and size-dependent dynamics influenced by various forces.\nResolved images of nearby debris disks reveal complex disk structures that are\ndifficult to distinguish from their spectral energy distributions. Therefore,\nmulti-wavelength resolved images of nearby debris systems provide an essential\nfoundation to understand the intricate interplay between collisional,\ngravitational, and radiative forces that govern debris disk structures. We\npresent the SOFIA 35 um resolved disk image of epsilon Eri, the closest debris\ndisk around a star similar to the early Sun. Combining with the Spitzer\nresolved image at 24 um and 15-38 um excess spectrum, we examine two proposed\norigins of the inner debris in epsilon Eri: (1) in-situ planetesimal belt(s)\nand (2) dragged-in grains from the cold outer belt. We find that the presence\nof in-situ dust-producing planetesmial belt(s) is the most likely source of the\nexcess emission in the inner 25 au region. Although a small amount of\ndragged-in grains from the cold belt could contribute to the excess emission in\nthe inner region, the resolution of the SOFIA data is high enough to rule out\nthe possibility that the entire inner warm excess results from dragged-in\ngrains, but not enough to distinguish one broad inner disk from two narrow\nbelts.\n", "title": "The Inner 25 AU Debris Distribution in the epsilon Eri System" }
null
null
null
null
true
null
11598
null
Default
null
null
null
{ "abstract": " Water pollution is a major global environmental problem, and it poses a great\nenvironmental risk to public health and biological diversity. This work is\nmotivated by assessing the potential environmental threat of coal mining\nthrough increased sulfate concentrations in river networks, which do not belong\nto any simple parametric distribution. However, existing network models mainly\nfocus on binary or discrete networks and weighted networks with known\nparametric weight distributions. We propose a principled nonparametric weighted\nnetwork model based on exponential-family random graph models and local\nlikelihood estimation and study its model-based clustering with application to\nlarge-scale water pollution network analysis. We do not require any parametric\ndistribution assumption on network weights. The proposed method greatly extends\nthe methodology and applicability of statistical network models. Furthermore,\nit is scalable to large and complex networks in large-scale environmental\nstudies and geoscientific research. The power of our proposed methods is\ndemonstrated in simulation studies.\n", "title": "Model-Based Clustering of Nonparametric Weighted Networks" }
null
null
null
null
true
null
11599
null
Default
null
null
null
{ "abstract": " During the High Luminosity LHC, the CMS detector will need charged particle\ntracking at the hardware trigger level to maintain a manageable trigger rate\nand achieve its physics goals. The tracklet approach is a track-finding\nalgorithm based on a road-search algorithm that has been implemented on\ncommercially available FPGA technology. The tracklet algorithm has achieved\nhigh performance in track-finding and completes tracking within 3.4 $\\mu$s on a\nXilinx Virtex-7 FPGA. An overview of the algorithm and its implementation on an\nFPGA is given, results are shown from a demonstrator test stand and system\nperformance studies are presented.\n", "title": "FPGA-Based Tracklet Approach to Level-1 Track Finding at CMS for the HL-LHC" }
null
null
null
null
true
null
11600
null
Default
null
null