Schema (field: type):
text: null
inputs: dict
prediction: null
prediction_agent: null
annotation: list
annotation_agent: null
multi_label: bool (1 class)
explanation: null
id: string (lengths 1 to 5)
metadata: null
status: string (2 values)
event_timestamp: null
metrics: null
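The listing above gives one field name and type per row; each preview row below carries its thirteen values in the same order. As a minimal sketch of how a flattened row maps back onto this schema, the Python dataclass below (a hypothetical helper written for illustration, not part of the dataset or of any particular library) reassembles the first row shown; the abstract string is truncated here for brevity.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Record:
    # Field names and types follow the schema listed above.
    text: Optional[str]
    inputs: dict                      # {"abstract": ..., "title": ...}
    prediction: Optional[Any]
    prediction_agent: Optional[str]
    annotation: Optional[list]        # e.g. ["Mathematics"]
    annotation_agent: Optional[str]
    multi_label: bool                 # always true in this preview
    explanation: Optional[str]
    id: str                           # string of length 1 to 5, e.g. "9901"
    metadata: Optional[dict]
    status: str                       # "Validated" or "Default"
    event_timestamp: Optional[str]
    metrics: Optional[dict]

# The first preview row, reassembled (abstract shortened).
example = Record(
    text=None,
    inputs={
        "abstract": "We characterize Cesàro-Orlicz function spaces ...",
        "title": "Isometric copies of $l^\\infty$ in Cesàro-Orlicz function spaces",
    },
    prediction=None,
    prediction_agent=None,
    annotation=["Mathematics"],
    annotation_agent=None,
    multi_label=True,
    explanation=None,
    id="9901",
    metadata=None,
    status="Validated",
    event_timestamp=None,
    metrics=None,
)

# Only "Validated" rows carry topic annotations in this preview.
if example.status == "Validated" and example.annotation:
    print(example.inputs["title"], "->", example.annotation)
```

Under this mapping, the rows below run from id 9901 through 9947 (the last record is truncated); "Validated" rows carry topic labels (Mathematics, Statistics, Computer Science, Physics), while "Default" rows are unannotated.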
text: null
inputs:
{ "abstract": " We characterize Cesàro-Orlicz function spaces $Ces_{\\varphi}$ containing\norder isomorphically isometric copy of $l^\\infty$. We discuss also some useful\napplicable conditions sufficient for the existence of such a copy.\n", "title": "Isometric copies of $l^\\infty$ in Cesàro-Orlicz function spaces" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9901
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Modeling the joint distribution of extreme weather events in multiple\nlocations is a challenging task with important applications. In this study, we\nuse max-stable models to study extreme daily precipitation events in\nSwitzerland. The non-stationarity of the spatial process at hand involves\nimportant challenges, which are often dealt with by using a stationary model in\na so-called climate space, with well-chosen covariates. Here, we instead chose\nto warp the weather stations under study in a latent space of higher dimension\nusing multidimensional scaling (MDS). The advantage of this approach is its\nimproved flexibility to reproduce highly non-stationary phenomena, while\nkeeping a tractable stationary spatial model in the latent space. Two model\nfitting approaches, which both use MDS, are presented and compared to a\nclassical approach that relies on composite likelihood maximization in a\nclimate space. Results suggest that the proposed methods better reproduce the\nobserved extremal coefficients and their complex spatial dependence.\n", "title": "Modeling non-stationary extreme dependence with stationary max-stable processes and multidimensional scaling" }
prediction: null
prediction_agent: null
annotation: [ "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9902
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " MOEMS Deformable Mirrors (DM) are key components for next generation\ninstruments with innovative adaptive optics systems, in existing telescopes and\nin the future ELTs. These DMs must perform at room temperature as well as in\ncryogenic and vacuum environment. Ideally, the MOEMS-DMs must be designed to\noperate in such environment. We present some major rules for designing /\noperating DMs in cryo and vacuum. We chose to use interferometry for the full\ncharacterization of these devices, including surface quality measurement in\nstatic and dynamical modes, at ambient and in vacuum/cryo. Thanks to our\nprevious set-up developments, we placed a compact cryo-vacuum chamber designed\nfor reaching 10-6 mbar and 160K, in front of our custom Michelson\ninterferometer, able to measure performances of the DM at actuator/segment\nlevel as well as whole mirror level, with a lateral resolution of 2{\\mu}m and a\nsub-nanometric z-resolution. Using this interferometric bench, we tested the\nIris AO PTT111 DM: this unique and robust design uses an array of single\ncrystalline silicon hexagonal mirrors with a pitch of 606{\\mu}m, able to move\nin tip, tilt and piston with strokes from 5 to 7{\\mu}m, and tilt angle in the\nrange of +/-5mrad. They exhibit typically an open-loop flat surface figure as\ngood as <20nm rms. A specific mount including electronic and opto-mechanical\ninterfaces has been designed for fitting in the test chamber. Segment\ndeformation, mirror shaping, open-loop operation are tested at room and cryo\ntemperature and results are compared. The device could be operated successfully\nat 160K. An additional, mainly focus-like, 500 nm deformation is measured at\n160K; we were able to recover the best flat in cryo by correcting the focus and\nlocal tip-tilts on some segments. Tests on DM with different mirror thicknesses\n(25{\\mu}m and 50{\\mu}m) and different coatings (silver and gold) are currently\nunder way.\n", "title": "MOEMS deformable mirror testing in cryo for future optical instrumentation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9903
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Remote sensing image scene classification plays an important role in a wide\nrange of applications and hence has been receiving remarkable attention. During\nthe past years, significant efforts have been made to develop various datasets\nor present a variety of approaches for scene classification from remote sensing\nimages. However, a systematic review of the literature concerning datasets and\nmethods for scene classification is still lacking. In addition, almost all\nexisting datasets have a number of limitations, including the small scale of\nscene classes and the image numbers, the lack of image variations and\ndiversity, and the saturation of accuracy. These limitations severely limit the\ndevelopment of new approaches especially deep learning-based methods. This\npaper first provides a comprehensive review of the recent progress. Then, we\npropose a large-scale dataset, termed \"NWPU-RESISC45\", which is a publicly\navailable benchmark for REmote Sensing Image Scene Classification (RESISC),\ncreated by Northwestern Polytechnical University (NWPU). This dataset contains\n31,500 images, covering 45 scene classes with 700 images in each class. The\nproposed NWPU-RESISC45 (i) is large-scale on the scene classes and the total\nimage number, (ii) holds big variations in translation, spatial resolution,\nviewpoint, object pose, illumination, background, and occlusion, and (iii) has\nhigh within-class diversity and between-class similarity. The creation of this\ndataset will enable the community to develop and evaluate various data-driven\nalgorithms. Finally, several representative methods are evaluated using the\nproposed dataset and the results are reported as a useful baseline for future\nresearch.\n", "title": "Remote Sensing Image Scene Classification: Benchmark and State of the Art" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9904
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Network theory proved recently to be useful in the quantification of many\nproperties of financial systems. The analysis of the structure of investment\nportfolios is a major application since their eventual correlation and overlap\nimpact the actual risk diversification by individual investors. We investigate\nthe bipartite network of US mutual fund portfolios and their assets. We follow\nits evolution during the Global Financial Crisis and analyse the interplay\nbetween diversification, as understood in classical portfolio theory, and\nsimilarity of the investments of different funds. We show that, on average,\nportfolios have become more diversified and less similar during the crisis.\nHowever, we also find that large overlap is far more likely than expected from\nmodels of random allocation of investments. This indicates the existence of\nstrong correlations between fund portfolio strategies. We introduce a\nsimplified model of propagation of financial shocks, that we exploit to show\nthat a systemic risk component origins from the similarity of portfolios. The\nnetwork is still vulnerable after crisis because of this effect, despite the\nincrease in the diversification of portfolios. Our results indicate that\ndiversification may even increase systemic risk when funds diversify in the\nsame way. Diversification and similarity can play antagonistic roles and the\ntrade-off between the two should be taken into account to properly assess\nsystemic risk.\n", "title": "The Network of U.S. Mutual Fund Investments: Diversification, Similarity and Fragility throughout the Global Financial Crisis" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9905
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " An efficient adaptive algorithm for the removal of Salt and Pepper noise from\ngray scale and color image is presented in this paper. In this proposed method\nfirst a 3X3 window is taken and the central pixel of the window is considered\nas the processing pixel. If the processing pixel is found as uncorrupted, then\nit is left unchanged. And if the processing pixel is found corrupted one, then\nthe window size is increased according to the conditions given in the proposed\nalgorithm. Finally the processing pixel or the central pixel is replaced by\neither the mean, median or trimmed value of the elements in the current window\ndepending upon different conditions of the algorithm. The proposed algorithm\nefficiently removes noise at all densities with better Peak Signal to Noise\nRatio (PSNR) and Image Enhancement Factor (IEF). The proposed algorithm is\ncompared with different existing algorithms like MF, AMF, MDBUTMF, MDBPTGMF and\nAWMF.\n", "title": "Removal of Salt and Pepper noise from Gray-Scale and Color Images: An Adaptive Approach" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9906
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Strong product is an efficient way to construct a larger digraph through some\nspecific small digraphs. The large digraph constructed by the strong product\nmethod contains the factor digraphs as its subgraphs, and can retain some good\nproperties of the factor digraphs. The distance of digraphs is one of the most\nbasic structural parameters in graph theory, and it plays an important role in\nanalyzing the effectiveness of interconnection networks. In particular, it\nprovides a basis for measuring the transmission delay of networks. When the\ntopological structure of an interconnection network is represented by a\ndigraph, the average distance of the directed graph is a good measure of the\ncommunication performance of the network. In this paper, we mainly investigate\nthe distance and average distance of strong product digraphs, and give a\nformula for the distance of strong product digraphs and an algorithm for\nsolving the average distance of strong product digraphs.\n", "title": "On the distance and algorithms of strong product digraphs" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9907
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this paper, we prove the existence of global weak solutions to the\ncompressible two-fluid Navier-Stokes equations in three dimensional space. The\npressure depends on two different variables from the continuity equations.\nWe develop an argument of variable reduction for the pressure law. This\nyields to the strong convergence of the densities, and provides the existence\nof global solutions in time, for the compressible two-fluid Navier-Stokes\nequations, with large data in three dimensional space.\n", "title": "Global weak solution to the viscous two-fluid model with finite energy" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9908
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Energy has been increasingly generated or collected by different entities on\nthe power grid (e.g., universities, hospitals and householdes) via solar\npanels, wind turbines or local generators in the past decade. With local\nenergy, such electricity consumers can be considered as \"microgrids\" which can\nsimulataneously generate and consume energy. Some microgrids may have excessive\nenergy that can be shared to other power consumers on the grid. To this end,\nall the entities have to share their local private information (e.g., their\nlocal demand, local supply and power quality data) to each other or a\nthird-party to find and implement the optimal energy sharing solution. However,\nsuch process is constrained by privacy concerns raised by the microgrids. In\nthis paper, we propose a privacy preserving scheme for all the microgrids which\ncan securely implement their energy sharing against both semi-honest and\ncolluding adversaries. The proposed approach includes two secure communication\nprotocols that can ensure quantified privacy leakage and handle collusions.\n", "title": "Privacy Preserving and Collusion Resistant Energy Sharing" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9909
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Word embeddings use vectors to represent words such that the geometry between\nvectors captures semantic relationship between the words. In this paper, we\ndevelop a framework to demonstrate how the temporal dynamics of the embedding\ncan be leveraged to quantify changes in stereotypes and attitudes toward women\nand ethnic minorities in the 20th and 21st centuries in the United States. We\nintegrate word embeddings trained on 100 years of text data with the U.S.\nCensus to show that changes in the embedding track closely with demographic and\noccupation shifts over time. The embedding captures global social shifts --\ne.g., the women's movement in the 1960s and Asian immigration into the U.S --\nand also illuminates how specific adjectives and occupations became more\nclosely associated with certain populations over time. Our framework for\ntemporal analysis of word embedding opens up a powerful new intersection\nbetween machine learning and quantitative social science.\n", "title": "Word Embeddings Quantify 100 Years of Gender and Ethnic Stereotypes" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9910
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In this work, we develop a novel Bayesian estimation method for the Dirichlet\nprocess (DP) mixture of the inverted Dirichlet distributions, which has been\nshown to be very flexible for modeling vectors with positive elements. The\nrecently proposed extended variational inference (EVI) framework is adopted to\nderive an analytically tractable solution. The convergency of the proposed\nalgorithm is theoretically guaranteed by introducing single lower bound\napproximation to the original objective function in the VI framework. In\nprinciple, the proposed model can be viewed as an infinite inverted Dirichelt\nmixture model (InIDMM) that allows the automatic determination of the number of\nmixture components from data. Therefore, the problem of pre-determining the\noptimal number of mixing components has been overcome. Moreover, the problems\nof over-fitting and under-fitting are avoided by the Bayesian estimation\napproach. Comparing with several recently proposed DP-related methods, the good\nperformance and effectiveness of the proposed method have been demonstrated\nwith both synthesized data and real data evaluations.\n", "title": "Infinite Mixture of Inverted Dirichlet Distributions" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9911
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We introduce $\\Psi$ec, a local spectral exterior calculus that provides a\ndiscretization of Cartan's exterior calculus of differential forms using\nwavelet functions. Our construction consists of differential form wavelets with\nflexible directional localization, between fully isotropic and curvelet- and\nridgelet-like, that provide tight frames for the spaces of $k$-forms in\n$\\mathbb{R}^2$ and $\\mathbb{R}^3$. By construction, these wavelets satisfy the\nde Rahm co-chain complex, the Hodge decomposition, and that the integral of a\n$k+1$-form is a $k$-form. They also enforce Stokes' theorem for differential\nforms, and we show that with a finite number of wavelet levels it is most\nefficiently approximated using anisotropic curvelet- or ridgelet-like forms.\nOur construction is based on the intrinsic geometric properties of the exterior\ncalculus in the Fourier domain. To reveal these, we extend existing results on\nthe Fourier transform of differential forms to a frequency domain description\nof the exterior calculus, including, for example, a Parseval theorem for forms\nand a description of the symbols of all important operators.\n", "title": "$Ψ$ec: A Local Spectral Exterior Calculus" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9912
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We report the transverse relaxation rates 1/$T_2$'s of the $^{63}$Cu nuclear\nspin-echo envelope for double-layer high-$T_c$ cuprate superconductors\nHgBa$_{2}$CaCu$_{2}$O$_{6+d}$ from underdoped to overdoped. The relaxation rate\n1/$T_{2L}$ of the exponential function (Lorentzian component) shows a peak at\n220$-$240 K in the underdoped ($T_c$ = 103 K) and the optimally doped ($T_c$ =\n127 K) samples but no peak in the overdoped ($T_c$ = 93 K) sample. The\nenhancement in 1/$T_{2L}$ suggests development of the zero frequency components\nof local field fluctuations. Ultraslow fluctuations are hidden in the pseudogap\nstates.\n", "title": "Ultraslow fluctuations in the pseudogap states of HgBa$_{2}$CaCu$_{2}$O$_{6+d}$" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9913
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Limited-angle computed tomography (CT) is often used in clinical applications\nsuch as C-arm CT for interventional imaging. However, CT images from limited\nangles suffers from heavy artifacts due to incomplete projection data. Existing\niterative methods require extensive calculations but can not deliver\nsatisfactory results. Based on the observation that the artifacts from limited\nangles have some directional property and are globally distributed, we propose\na novel multi-scale wavelet domain residual learning architecture, which\ncompensates for the artifacts. Experiments have shown that the proposed method\neffectively eliminates artifacts, thereby preserving edge and global structures\nof the image.\n", "title": "Multi-Scale Wavelet Domain Residual Learning for Limited-Angle CT Reconstruction" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9914
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " The renewed Green's function approach to calculating the angular Fock\ncoefficients, $\\psi_{k,p}(\\alpha,\\theta)$ is presented. The final formulas are\nsimplified and specified to be applicable for analytical as well as numerical\ncalculations. The Green's function formulas with the hyperspherical angles\n$\\theta=0,\\pi$ (arbitrary $\\alpha$) or $\\alpha=0,\\pi$ (arbitrary $\\theta$) are\nindicated as corresponding to the angular Fock coefficients possessing physical\nmeaning. The most interesting case of $\\theta=0$ corresponding to a collinear\narrangement of the particles is studied in detail. It is emphasized that this\ncase represents the generalization of the specific cases of the\nelectron-nucleus ($\\alpha=0$) and electron-electron ($\\alpha=\\pi/2$)\ncoalescences. It is shown that the Green's function method for $\\theta=0$\nenables us to calculate any component/subcomponent of the angular Fock\ncoefficient in the form of a single series representation with arbitrary angle\n$\\theta$. Those cases, where the Green's function approach can not be applied,\nare thoroughly studied, and the corresponding solutions are found.\n", "title": "Helium-like atoms. The Green's function approach to the Fock expansion calculations" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9915
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper proposes a new algorithm for controlling classification results by\ngenerating a small additive perturbation without changing the classifier\nnetwork. Our work is inspired by existing works generating adversarial\nperturbation that worsens classification performance. In contrast to the\nexisting methods, our work aims to generate perturbations that can enhance\noverall classification performance. To solve this performance enhancement\nproblem, we newly propose a perturbation generation network (PGN) influenced by\nthe adversarial learning strategy. In our problem, the information in a large\nexternal dataset is summarized by a small additive perturbation, which helps to\nimprove the performance of the classifier trained with the target dataset. In\naddition to this performance enhancement problem, we show that the proposed PGN\ncan be adopted to solve the classical adversarial problem without utilizing the\ninformation on the target classifier. The mentioned characteristics of our\nmethod are verified through extensive experiments on publicly available visual\ndatasets.\n", "title": "Butterfly Effect: Bidirectional Control of Classification Performance by Small Additive Perturbation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9916
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We analyze the secrecy outage probability in the downlink for wireless\nnetworks with spatially (Poisson) distributed eavesdroppers (EDs) under the\nassumption that the base station employs transmit antenna selection (TAS) to\nenhance secrecy performance. We compare the cases where the receiving user\nequipment (UE) operates in half-duplex (HD) mode and full-duplex (FD) mode. In\nthe latter case, the UE simultaneously receives the intended downlink message\nand transmits a jamming signal to strengthen secrecy. We investigate two models\nof (semi)passive eavesdropping: (1) EDs act independently and (2) EDs collude\nto intercept the transmitted message. For both of these models, we obtain\nexpressions for the secrecy outage probability in the downlink for HD and FD UE\noperation. The expressions for HD systems have very accurate approximate or\nexact forms in terms of elementary and/or special functions for all path loss\nexponents. Those related to the FD systems have exact integral forms for\ngeneral path loss exponents, while exact closed forms are given for specific\nexponents. A closed-form approximation is also derived for the FD case with\ncolluding EDs. The resulting analysis shows that the reduction in the secrecy\noutage probability is logarithmic in the number of antennas used for TAS and\nidentifies conditions under which HD operation should be used instead of FD\njamming at the UE. These performance trends and exact relations between system\nparameters can be used to develop adaptive power allocation and duplex\noperation methods in practice. Examples of such techniques are alluded to\nherein.\n", "title": "Secrecy Outage Analysis for Downlink Transmissions in the Presence of Randomly Located Eavesdroppers" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9917
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We introduce a novel method for defining geographic districts in road\nnetworks using stable matching. In this approach, each geographic district is\ndefined in terms of a center, which identifies a location of interest, such as\na post office or polling place, and all other network vertices must be labeled\nwith the center to which they are associated. We focus on defining geographic\ndistricts that are equitable, in that every district has the same number of\nvertices and the assignment is stable in terms of geographic distance. That is,\nthere is no unassigned vertex-center pair such that both would prefer each\nother over their current assignments. We solve this problem using a version of\nthe classic stable matching problem, called symmetric stable matching, in which\nthe preferences of the elements in both sets obey a certain symmetry. In our\ncase, we study a graph-based version of stable matching in which nodes are\nstably matched to a subset of nodes denoted as centers, prioritized by their\nshortest-path distances, so that each center is apportioned a certain number of\nnodes. We show that, for a planar graph or road network with $n$ nodes and $k$\ncenters, the problem can be solved in $O(n\\sqrt{n}\\log n)$ time, which improves\nupon the $O(nk)$ runtime of using the classic Gale-Shapley stable matching\nalgorithm when $k$ is large. Finally, we provide experimental results on road\nnetworks for these algorithms and a heuristic algorithm that performs better\nthan the Gale-Shapley algorithm for any range of values of $k$.\n", "title": "Defining Equitable Geographic Districts in Road Networks via Stable Matching" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9918
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Computer aided diagnostic (CAD) system is crucial for modern med-ical\nimaging. But almost all CAD systems operate on reconstructed images, which were\noptimized for radiologists. Computer vision can capture features that is subtle\nto human observers, so it is desirable to design a CAD system op-erating on the\nraw data. In this paper, we proposed a deep-neural-network-based detection\nsystem for lung nodule detection in computed tomography (CT). A\nprimal-dual-type deep reconstruction network was applied first to convert the\nraw data to the image space, followed by a 3-dimensional convolutional neural\nnetwork (3D-CNN) for the nodule detection. For efficient network training, the\ndeep reconstruction network and the CNN detector was trained sequentially\nfirst, then followed by one epoch of end-to-end fine tuning. The method was\nevaluated on the Lung Image Database Consortium image collection (LIDC-IDRI)\nwith simulated forward projections. With 144 multi-slice fanbeam pro-jections,\nthe proposed end-to-end detector could achieve comparable sensitivity with the\nreference detector, which was trained and applied on the fully-sampled image\ndata. It also demonstrated superior detection performance compared to detectors\ntrained on the reconstructed images. The proposed method is general and could\nbe expanded to most detection tasks in medical imaging.\n", "title": "End-to-end Lung Nodule Detection in Computed Tomography" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science", "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9919
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " It has recently been shown that if feedback effects of decisions are ignored,\nthen imposing fairness constraints such as demographic parity or equality of\nopportunity can actually exacerbate unfairness. We propose to address this\nchallenge by modeling feedback effects as the dynamics of a Markov decision\nprocesses (MDPs). First, we define analogs of fairness properties that have\nbeen proposed for supervised learning. Second, we propose algorithms for\nlearning fair decision-making policies for MDPs. We also explore extensions to\nreinforcement learning, where parts of the dynamical system are unknown and\nmust be learned without violating fairness. Finally, we demonstrate the need to\naccount for dynamical effects using simulations on a loan applicant MDP.\n", "title": "Fairness with Dynamics" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9920
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Predictions of inflationary schemes can be influenced by the presence of\nextra dimensions. This could be of particular relevance for the spectrum of\ngravitational waves in models where the extra dimensions provide a brane-world\nsolution to the hierarchy problem. Apart from models of large as well as\nexponentially warped extra dimensions, we analyze the size of tensor modes in\nthe Linear Dilaton scheme recently revived in the discussion of the \"clockwork\nmechanism\". The results are model dependent, significantly enhanced tensor\nmodes on one side and a suppression on the other. In some cases we are led to a\nscheme of \"remote inflation\", where the expansion is driven by energies at a\nhidden brane. In all cases where tensor modes are enhanced, the requirement of\nperturbativity of gravity leads to a stringent upper limit on the allowed\nHubble rate during inflation.\n", "title": "Exploring extra dimensions through inflationary tensor modes" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9921
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We characterize the class of RFD $C^*$-algebras as those containing a dense\nsubset of elements that attain their norm under a finite-dimensional\nrepresentation. We show further that this subset is the whole space precisely\nwhen every irreducible representation of the $C^*$-algebra is\nfinite-dimensional, which is equivalent to the $C^*$-algebra having no simple\ninfinite-dimensional AF subquotient. We apply techniques from this proof to\nshow the existence of elements in more general classes of $C^*$-algebras whose\nnorms in finite-dimensional representations fit certain prescribed properties.\n", "title": "Elements of $C^*$-algebras Attaining Their Norm in a Finite-Dimensional Representation" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9922
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Let $(\\mathbf{B}, \\|\\cdot\\|)$ be a real separable Banach space. Let\n$\\varphi(\\cdot)$ and $\\psi(\\cdot)$ be two continuous and increasing functions\ndefined on $[0, \\infty)$ such that $\\varphi(0) = \\psi(0) = 0$, $\\lim_{t\n\\rightarrow \\infty} \\varphi(t) = \\infty$, and\n$\\frac{\\psi(\\cdot)}{\\varphi(\\cdot)}$ is a nondecreasing function on $[0,\n\\infty)$. Let $\\{V_{n};~n \\geq 1 \\}$ be a sequence of independent and symmetric\n{\\bf B}-valued random variables. In this note, we establish a probability\ninequality for sums of independent {\\bf B}-valued random variables by showing\nthat for every $n \\geq 1$ and all $t \\geq 0$, \\[\n\\mathbb{P}\\left(\\left\\|\\sum_{i=1}^{n} V_{i} \\right\\| > t b_{n} \\right) \\leq 4\n\\mathbb{P} \\left(\\left\\|\\sum_{i=1}^{n} \\varphi\\left(\\psi^{-1}(\\|V_{i}\\|)\\right)\n\\frac{V_{i}}{\\|V_{i}\\|} \\right\\| > t a_{n} \\right) +\n\\sum_{i=1}^{n}\\mathbb{P}\\left(\\|V_{i}\\| > b_{n} \\right), \\] where $a_{n} =\n\\varphi(n)$ and $b_{n} = \\psi(n)$, $n \\geq 1$. As an application of this\ninequality, we establish what we call a comparison theorem for the weak law of\nlarge numbers for independent and identically distributed ${\\bf B}$-valued\nrandom variables.\n", "title": "A probability inequality for sums of independent Banach space valued random variables" }
prediction: null
prediction_agent: null
annotation: [ "Mathematics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9923
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " It is difficult for humans to efficiently teach robots how to correctly\nperform a task. One intuitive solution is for the robot to iteratively learn\nthe human's preferences from corrections, where the human improves the robot's\ncurrent behavior at each iteration. When learning from corrections, we argue\nthat while the robot should estimate the most likely human preferences, it\nshould also know what it does not know, and integrate this uncertainty as it\nmakes decisions. We advance the state-of-the-art by introducing a Kalman filter\nfor learning from corrections: this approach obtains the uncertainty of the\nestimated human preferences. Next, we demonstrate how the estimate uncertainty\ncan be leveraged for active learning and risk-sensitive deployment. Our results\nindicate that obtaining and leveraging uncertainty leads to faster learning\nfrom human corrections.\n", "title": "Including Uncertainty when Learning from Human Corrections" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9924
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " To perform tasks specified by natural language instructions, autonomous\nagents need to extract semantically meaningful representations of language and\nmap it to visual elements and actions in the environment. This problem is\ncalled task-oriented language grounding. We propose an end-to-end trainable\nneural architecture for task-oriented language grounding in 3D environments\nwhich assumes no prior linguistic or perceptual knowledge and requires only raw\npixels from the environment and the natural language instruction as input. The\nproposed model combines the image and text representations using a\nGated-Attention mechanism and learns a policy to execute the natural language\ninstruction using standard reinforcement and imitation learning methods. We\nshow the effectiveness of the proposed model on unseen instructions as well as\nunseen maps, both quantitatively and qualitatively. We also introduce a novel\nenvironment based on a 3D game engine to simulate the challenges of\ntask-oriented language grounding over a rich set of instructions and\nenvironment states.\n", "title": "Gated-Attention Architectures for Task-Oriented Language Grounding" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9925
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Automatic classification of trees using remotely sensed data has been a dream\nof many scientists and land use managers. Recently, Unmanned aerial vehicles\n(UAV) has been expected to be an easy-to-use, cost-effective tool for remote\nsensing of forests, and deep learning has attracted attention for its ability\nconcerning machine vision. In this study, using a commercially available UAV\nand a publicly available package for deep learning, we constructed a machine\nvision system for the automatic classification of trees. In our method, we\nsegmented a UAV photography image of forest into individual tree crowns and\ncarried out object-based deep learning. As a result, the system was able to\nclassify 7 tree types at 89.0% accuracy. This performance is notable because we\nonly used basic RGB images from a standard UAV. In contrast, most of previous\nstudies used expensive hardware such as multispectral imagers to improve the\nperformance. This result means that our method has the potential to classify\nindividual trees in a cost-effective manner. This can be a usable tool for many\nforest researchers and managements.\n", "title": "Automatic classification of trees using a UAV onboard camera and deep learning" }
prediction: null
prediction_agent: null
annotation: [ "Statistics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9926
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " It is known that individuals in social networks tend to exhibit homophily\n(a.k.a. assortative mixing) in their social ties, which implies that they\nprefer bonding with others of their own kind. But what are the reasons for this\nphenomenon? Is it that such relations are more convenient and easier to\nmaintain? Or are there also some more tangible benefits to be gained from this\ncollective behaviour?\nThe current work takes a game-theoretic perspective on this phenomenon, and\nstudies the conditions under which different assortative mixing strategies lead\nto equilibrium in an evolving social network. We focus on a biased preferential\nattachment model where the strategy of each group (e.g., political or social\nminority) determines the level of bias of its members toward other group\nmembers and non-members. Our first result is that if the utility function that\nthe group attempts to maximize is the degree centrality of the group,\ninterpreted as the sum of degrees of the group members in the network, then the\nonly strategy achieving Nash equilibrium is a perfect homophily, which implies\nthat cooperation with other groups is harmful to this utility function. A\nsecond, and perhaps more surprising, result is that if a reward for inter-group\ncooperation is added to the utility function (e.g., externally enforced by an\nauthority as a regulation), then there are only two possible equilibria,\nnamely, perfect homophily or perfect heterophily, and it is possible to\ncharacterize their feasibility spaces. Interestingly, these results hold\nregardless of the minority-majority ratio in the population.\nWe believe that these results, as well as the game-theoretic perspective\npresented herein, may contribute to a better understanding of the forces that\nshape the groups and communities of our society.\n", "title": "Assortative Mixing Equilibria in Social Network Games" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9927
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In the last decade, deep learning has contributed to advances in a wide range\ncomputer vision tasks including texture analysis. This paper explores a new\napproach for texture segmentation using deep convolutional neural networks,\nsharing important ideas with classic filter bank based texture segmentation\nmethods. Several methods are developed to train Fully Convolutional Networks to\nsegment textures in various applications. We show in particular that these\nnetworks can learn to recognize and segment a type of texture, e.g. wood and\ngrass from texture recognition datasets (no training segmentation). We\ndemonstrate that Fully Convolutional Networks can learn from repetitive\npatterns to segment a particular texture from a single image or even a part of\nan image. We take advantage of these findings to develop a method that is\nevaluated on a series of supervised and unsupervised experiments and improve\nthe state of the art on the Prague texture segmentation datasets.\n", "title": "Texture segmentation with Fully Convolutional Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9928
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " NA62 is a fixed-target experiment at the CERN SPS dedicated to measurements\nof rare kaon decays. Such measurements, like the branching fraction of the\n$K^{+} \\rightarrow \\pi^{+} \\nu \\bar\\nu$ decay, have the potential to bring\nsignificant insights into new physics processes when comparison is made with\nprecise theoretical predictions. For this purpose, innovative techniques have\nbeen developed, in particular, in the domain of low-mass tracking devices.\nDetector construction spanned several years from 2009 to 2014. The\ncollaboration started detector commissioning in 2014 and will collect data\nuntil the end of 2018. The beam line and detector components are described\ntogether with their early performance obtained from 2014 and 2015 data.\n", "title": "The Beam and detector of the NA62 experiment at CERN" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9929
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Let $K\\subset S^3$ be a knot, $X:= S^3\\setminus K$ its complement, and\n$\\mathbb{T}$ the circle group identified with $\\mathbb{R}/\\mathbb{Z}$. To any\noriented long knot diagram of $K$, we associate a quadratic polynomial in\nvariables bijectively associated with the bridges of the diagram such that,\nwhen the variables projected to $\\mathbb{T}$ satisfy the linear equations\ncharacterizing the first homology group $H_1(\\tilde{X}_2)$ of the double cyclic\ncovering of $X$, the polynomial projects down to a well defined\n$\\mathbb{T}$-valued function on $T^1(\\tilde{X}_2,\\mathbb{T})$ (the dual of the\ntorsion part $T_1$ of $H_1$). This function is sensitive to knot chirality, for\nexample, it seems to confirm chirality of the knot $10_{71}$. It also\ndistinguishes the knots $7_4$ and $9_2$ known to have identical Alexander\npolynomials and the knots $9_2$ and K11n13 known to have identical Jones\npolynomials but does not distinguish $7_4$ and K11n13.\n", "title": "A dequantized metaplectic knot invariant" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9930
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper presents an automated supervised method for Persian wordnet\nconstruction. Using a Persian corpus and a bi-lingual dictionary, the initial\nlinks between Persian words and Princeton WordNet synsets have been generated.\nThese links will be discriminated later as correct or incorrect by employing\nseven features in a trained classification system. The whole method is just a\nclassification system, which has been trained on a train set containing FarsNet\nas a set of correct instances. State of the art results on the automatically\nderived Persian wordnet is achieved. The resulted wordnet with a precision of\n91.18% includes more than 16,000 words and 22,000 synsets.\n", "title": "Persian Wordnet Construction using Supervised Learning" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9931
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Bose condensation is central to our understanding of quantum phases of\nmatter. Here we review Bose condensation in topologically ordered phases (also\ncalled topological symmetry breaking), where the condensing bosons have\nnon-trivial mutual statistics with other quasiparticles in the system. We give\na non-technical overview of the relationship between the phases before and\nafter condensation, drawing parallels with more familiar symmetry-breaking\ntransitions. We then review two important applications of this phenomenon.\nFirst, we describe the equivalence between such condensation transitions and\npairs of phases with gappable boundaries, as well as examples where multiple\ntypes of gapped boundary between the same two phases exist. Second, we discuss\nhow such transitions can lead to global symmetries which exchange or permute\nanyon types. Finally we discuss the nature of the critical point, which can be\nmapped to a conventional phase transition in some -- but not all -- cases.\n", "title": "Anyon condensation and its applications" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9932
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Physics phenomena of multi-soliton complexes have enriched the life of\ndissipative solitons in fiber lasers. By developing a birefringence-enhanced\nfiber laser, we report the first experimental observation of\ngroup-velocity-locked vector soliton (GVLVS) molecules. The\nbirefringence-enhanced fiber laser facilitates the generation of GVLVSs, where\nthe two orthogonally polarized components are coupled together to form a\nmulti-soliton complex. Moreover, the interaction of repulsive and attractive\nforces between multiple pulses binds the particle-like GVLVSs together in time\ndomain to further form compound multi-soliton complexes, namely GVLVS\nmolecules. By adopting the polarization-resolved measurement, we show that the\ntwo orthogonally polarized components of the GVLVS molecules are both soliton\nmolecules supported by the strongly modulated spectral fringes and the\ndouble-humped intensity profiles. Additionally, GVLVS molecules with various\nsoliton separations are also observed by adjusting the pump power and the\npolarization controller.\n", "title": "Group-velocity-locked vector soliton molecules in a birefringence-enhanced fiber laser" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9933
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Visual question answering (or VQA) is a new and exciting problem that\ncombines natural language processing and computer vision techniques. We present\na survey of the various datasets and models that have been used to tackle this\ntask. The first part of the survey details the various datasets for VQA and\ncompares them along some common factors. The second part of this survey details\nthe different approaches for VQA, classified into four types: non-deep learning\nmodels, deep learning models without attention, deep learning models with\nattention, and other models which do not fit into the first three. Finally, we\ncompare the performances of these approaches and provide some directions for\nfuture work.\n", "title": "Survey of Visual Question Answering: Datasets and Techniques" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9934
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " With the rise of end-to-end learning through deep learning, person detectors\nand re-identification (ReID) models have recently become very strong.\nMulti-camera multi-target (MCMT) tracking has not fully gone through this\ntransformation yet. We intend to take another step in this direction by\npresenting a theoretically principled way of integrating ReID with tracking\nformulated as an optimal Bayes filter. This conveniently side-steps the need\nfor data-association and opens up a direct path from full images to the core of\nthe tracker. While the results are still sub-par, we believe that this new,\ntight integration opens many interesting research opportunities and leads the\nway towards full end-to-end tracking from raw pixels.\n", "title": "Towards a Principled Integration of Multi-Camera Re-Identification and Tracking through Optimal Bayes Filters" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9935
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " From medical charts to national census, healthcare has traditionally operated\nunder a paper-based paradigm. However, the past decade has marked a long and\narduous transformation bringing healthcare into the digital age. Ranging from\nelectronic health records, to digitized imaging and laboratory reports, to\npublic health datasets, today, healthcare now generates an incredible amount of\ndigital information. Such a wealth of data presents an exciting opportunity for\nintegrated machine learning solutions to address problems across multiple\nfacets of healthcare practice and administration. Unfortunately, the ability to\nderive accurate and informative insights requires more than the ability to\nexecute machine learning models. Rather, a deeper understanding of the data on\nwhich the models are run is imperative for their success. While a significant\neffort has been undertaken to develop models able to process the volume of data\nobtained during the analysis of millions of digitalized patient records, it is\nimportant to remember that volume represents only one aspect of the data. In\nfact, drawing on data from an increasingly diverse set of sources, healthcare\ndata presents an incredibly complex set of attributes that must be accounted\nfor throughout the machine learning pipeline. This chapter focuses on\nhighlighting such challenges, and is broken down into three distinct\ncomponents, each representing a phase of the pipeline. We begin with attributes\nof the data accounted for during preprocessing, then move to considerations\nduring model building, and end with challenges to the interpretation of model\noutput. For each component, we present a discussion around data as it relates\nto the healthcare domain and offer insight into the challenges each may impose\non the efficiency of machine learning techniques.\n", "title": "Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9936
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " In autonomous racing, vehicles operate close to the limits of handling and a\nsensor failure can have critical consequences. To limit the impact of such\nfailures, this paper presents the redundant perception and state estimation\napproaches developed for an autonomous race car. Redundancy in perception is\nachieved by estimating the color and position of the track delimiting objects\nusing two sensor modalities independently. Specifically, learning-based\napproaches are used to generate color and pose estimates, from LiDAR and camera\ndata respectively. The redundant perception inputs are fused by a particle\nfilter based SLAM algorithm that operates in real-time. Velocity is estimated\nusing slip dynamics, with reliability being ensured through a probabilistic\nfailure detection algorithm. The sub-modules are extensively evaluated in\nreal-world racing conditions using the autonomous race car \"gotthard\ndriverless\", achieving lateral accelerations up to 1.7G and a top speed of\n90km/h.\n", "title": "Redundant Perception and State Estimation for Reliable Autonomous Racing" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9937
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We have experimentally studied the effects on the spin Hall angle due to\nsystematic addition of Pt into the light metal Cu. We perform spin torque\nferromagnetic resonance measurements on Py/CuPt bilayer and find that as the Pt\nconcentration increases, the spin Hall angle of CuPt alloy increases. Moreover,\nonly 28% Pt in CuPt alloy can give rise to a spin Hall angle close to that of\nPt. We further extract the spin Hall resistivity of CuPt alloy for different Pt\nconcentrations and find that the contribution of skew scattering is larger for\nlower Pt concentrations, while the side-jump contribution is larger for higher\nPt concentrations. From technological perspective, since the CuPt alloy can\nsustain high processing temperatures and Cu is the most common metallization\nelement in the Si platform, it would be easier to integrate the CuPt alloy\nbased spintronic devices into existing Si fabrication technology.\n", "title": "Experimental study of extrinsic spin Hall effect in CuPt alloy" }
prediction: null
prediction_agent: null
annotation: [ "Physics" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9938
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Deep neural networks (DNNs) have begun to have a pervasive impact on various\napplications of machine learning. However, the problem of finding an optimal\nDNN architecture for large applications is challenging. Common approaches go\nfor deeper and larger DNN architectures but may incur substantial redundancy.\nTo address these problems, we introduce a network growth algorithm that\ncomplements network pruning to learn both weights and compact DNN architectures\nduring training. We propose a DNN synthesis tool (NeST) that combines both\nmethods to automate the generation of compact and accurate DNNs. NeST starts\nwith a randomly initialized sparse network called the seed architecture. It\niteratively tunes the architecture with gradient-based growth and\nmagnitude-based pruning of neurons and connections. Our experimental results\nshow that NeST yields accurate, yet very compact DNNs, with a wide range of\nseed architecture selection. For the LeNet-300-100 (LeNet-5) architecture, we\nreduce network parameters by 70.2x (74.3x) and floating-point operations\n(FLOPs) by 79.4x (43.7x). For the AlexNet and VGG-16 architectures, we reduce\nnetwork parameters (FLOPs) by 15.7x (4.6x) and 30.2x (8.6x), respectively.\nNeST's grow-and-prune paradigm delivers significant additional parameter and\nFLOPs reduction relative to pruning-only methods.\n", "title": "NeST: A Neural Network Synthesis Tool Based on a Grow-and-Prune Paradigm" }
prediction: null
prediction_agent: null
annotation: [ "Computer Science" ]
annotation_agent: null
multi_label: true
explanation: null
id: 9939
metadata: null
status: Validated
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " We identify and study a number of new, rapidly growing instabilities of dust\ngrains in protoplanetary disks, which may be important for planetesimal\nformation. The study is based on the recognition that dust-gas mixtures are\ngenerically unstable to a Resonant Drag Instability (RDI), whenever the gas,\nabsent dust, supports undamped linear modes. We show that the \"streaming\ninstability\" is an RDI associated with epicyclic oscillations; this provides\nsimple interpretations for its mechanisms and accurate analytic expressions for\nits growth rates and fastest-growing wavelengths. We extend this analysis to\nmore general dust streaming motions and other waves, including buoyancy and\nmagnetohydrodynamic oscillations, finding various new instabilities. Most\nimportantly, we identify the disk \"settling instability,\" which occurs as dust\nsettles vertically into the midplane of a rotating disk. For small grains, this\ninstability grows many orders of magnitude faster than the standard streaming\ninstability, with a growth rate that is independent of grain size. Growth\ntimescales for realistic dust-to-gas ratios are comparable to the disk orbital\nperiod, and the characteristic wavelengths are more than an order of magnitude\nlarger than the streaming instability (allowing the instability to concentrate\nlarger masses). This suggests that in the process of settling, dust will band\ninto rings then filaments or clumps, potentially seeding dust traps,\nhigh-metallicity regions that in turn seed the streaming instability, or even\noverdensities that coagulate or directly collapse to planetesimals.\n", "title": "Resonant Drag Instabilities in protoplanetary disks: the streaming instability and new, faster-growing instabilities" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9940
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Automatically determining the optimal size of a neural network for a given\ntask without prior information currently requires an expensive global search\nand training many networks from scratch. In this paper, we address the problem\nof automatically finding a good network size during a single training cycle. We\nintroduce *nonparametric neural networks*, a non-probabilistic framework for\nconducting optimization over all possible network sizes and prove its soundness\nwhen network growth is limited via an L_p penalty. We train networks under this\nframework by continuously adding new units while eliminating redundant units\nvia an L_2 penalty. We employ a novel optimization algorithm, which we term\n*adaptive radial-angular gradient descent* or *AdaRad*, and obtain promising\nresults.\n", "title": "Nonparametric Neural Networks" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9941
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Traffic for internet video streaming has been rapidly increasing and is\nfurther expected to increase with the higher definition videos and IoT\napplications, such as 360 degree videos and augmented virtual reality\napplications. While efficient management of heterogeneous cloud resources to\noptimize the quality of experience is important, existing work in this problem\nspace often left out important factors. In this paper, we present a model for\ndescribing a today's representative system architecture for video streaming\napplications, typically composed of a centralized origin server and several CDN\nsites. Our model comprehensively considers the following factors: limited\ncaching spaces at the CDN sites, allocation of CDN for a video request, choice\nof different ports from the CDN, and the central storage and bandwidth\nallocation. With the model, we focus on minimizing a performance metric, stall\nduration tail probability (SDTP), and present a novel, yet efficient, algorithm\nto solve the formulated optimization problem. The theoretical bounds with\nrespect to the SDTP metric are also analyzed and presented. Our extensive\nsimulation results demonstrate that the proposed algorithms can significantly\nimprove the SDTP metric, compared to the baseline strategies. Small-scale video\nstreaming system implementation in a real cloud environment further validates\nour results.\n", "title": "FastTrack: Minimizing Stalls for CDN-based Over-the-top Video Streaming Systems" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9942
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " Hierarchically-organized data arise naturally in many psychology and\nneuroscience studies. As the standard assumption of independent and identically\ndistributed samples does not hold for such data, two important problems are to\naccurately estimate group-level effect sizes, and to obtain powerful\nstatistical tests against group-level null hypotheses. A common approach is to\nsummarize subject-level data by a single quantity per subject, which is often\nthe mean or the difference between class means, and treat these as samples in a\ngroup-level t-test. This 'naive' approach is, however, suboptimal in terms of\nstatistical power, as it ignores information about the intra-subject variance.\nTo address this issue, we review several approaches to deal with nested data,\nwith a focus on methods that are easy to implement. With what we call the\nsufficient-summary-statistic approach, we highlight a computationally efficient\ntechnique that can improve statistical power by taking into account\nwithin-subject variances, and we provide step-by-step instructions on how to\napply this approach to a number of frequently-used measures of effect size. The\nproperties of the reviewed approaches and the potential benefits over a\ngroup-level t-test are quantitatively assessed on simulated data and\ndemonstrated on EEG data from a simulated-driving experiment.\n", "title": "Powerful statistical inference for nested data using sufficient summary statistics" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9943
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " This paper is concerned with the channel estimation problem in multi-user\nmillimeter wave (mmWave) wireless systems with large antenna arrays. We develop\na novel simultaneous-estimation with iterative fountain training (SWIFT)\nframework, in which multiple users estimate their channels at the same time and\nthe required number of channel measurements is adapted to various channel\nconditions of different users. To achieve this, we represent the beam direction\nestimation process by a graph, referred to as the beam-on-graph, and associate\nthe channel estimation process with a code-on-graph decoding problem.\nSpecifically, the base station (BS) and each user measure the channel with a\nseries of random combinations of transmit/receive beamforming vectors until the\nchannel estimate converges. As the proposed SWIFT does not adapt the BS's beams\nto any single user, we are able to estimate all user channels simultaneously.\nSimulation results show that SWIFT can significantly outperform the existing\nrandom beamforming-based approaches, which use a predetermined number of\nmeasurements, over a wide range of signal-to-noise ratios and channel coherence\ntime. Furthermore, by utilizing the users' order in terms of completing their\nchannel estimation, our SWIFT framework can infer the sequence of users'\nchannel quality and perform effective user scheduling to achieve superior\nperformance.\n", "title": "Beam-On-Graph: Simultaneous Channel Estimation for mmWave MIMO Systems with Multiple Users" }
prediction: null
prediction_agent: null
annotation: null
annotation_agent: null
multi_label: true
explanation: null
id: 9944
metadata: null
status: Default
event_timestamp: null
metrics: null

text: null
inputs:
{ "abstract": " A policy maker faces a sequence of unknown outcomes. At each stage two\n(self-proclaimed) experts provide probabilistic forecasts on the outcome in the\nnext stage. A comparison test is a protocol for the policy maker to\n(eventually) decide which of the two experts is better informed. The protocol\ntakes as input the sequence of pairs of forecasts and actual realizations and\n(weakly) ranks the two experts. We propose two natural properties that such a\ncomparison test must adhere to and show that these essentially uniquely\ndetermine the comparison test. This test is a function of the derivative of the\ninduced pair of measures at the realization.\n", "title": "On Comparison Of Experts" }
null
null
null
null
true
null
9945
null
Default
null
null
null
{ "abstract": " Shearing transitions of multi-layer molecularly thin-film lubrication systems\nin variations of the film-substrate coupling strength and the load are studied\nby using a multiscale method. Three kinds of the interlayer slips found in\ndecreasing the coupling strength are in qualitative agreement with experimental\nresults. Although tribological behaviors are almost insensitive to the smaller\ncoupling strength, they and the effective film thickness are enlarged more and\nmore as the larger one increases. When the load increases, the tribological\nbehaviors are similar to those in increasing coupling strength, but the\neffective film thickness is opposite.\n", "title": "Multiscale simulation on shearing transitions of thin-film lubrication with multi-layer molecules" }
null
null
null
null
true
null
9946
null
Default
null
null
null
{ "abstract": " In chiral magnetic materials, numerous intriguing phenomena such as built in\nchiral magnetic domain walls (DWs) and skyrmions are generated by the\nDzyaloshinskii Moriya interaction (DMI). The DMI also results in asymmetric DW\nspeed under in plane magnetic field, which provides a useful scheme to measure\nthe DMI strengths. However, recent findings of additional asymmetries such as\nchiral damping have disenabled unambiguous DMI determination and the underlying\nmechanism of overall asymmetries becomes under debate. By extracting the\nDMI-induced symmetric contribution, here we experimentally investigated the\nnature of the additional asymmetry. The results revealed that the additional\nasymmetry has a truly antisymmetric nature with the typical behavior governed\nby the DW chirality. In addition, the antisymmetric contribution changes the DW\nspeed more than 100 times, which cannot be solely explained by the chiral\ndamping scenario. By calibrating such antisymmetric contributions, experimental\ninaccuracies can be largely removed, enabling again the DMI measurement scheme.\n", "title": "Chirality-induced Antisymmetry in Magnetic Domain-Wall Speed" }
null
null
null
null
true
null
9947
null
Default
null
null
null
{ "abstract": " Graphene has emerged as a promising building block in the modern optics and\noptoelectronics due to its novel optical and electrical properties. In the\nmid-infrared and terahertz (THz) regime, graphene behaves like metals and\nsupports surface plasmon resonances (SPRs). Moreover, the continuously tunable\nconductivity of graphene enables active SPRs and gives rise to a range of\nactive applications. However, the interaction between graphene and metal-based\nresonant metamaterials has not been fully understood. In this work, a\nsimulation investigation on the interaction between the graphene layer and THz\nresonances supported by the two-gap split ring metamaterials is systematically\nconducted. The simulation results show that the graphene layer can\nsubstantially reduce the Fano resonance and even switch it off, while leave the\ndipole resonance nearly unaffected, which phenomenon is well explained with the\nhigh conductivity of graphene. With the manipulation of graphene conductivity\nvia altering its Fermi energy or layer number, the amplitude of the Fano\nresonance can be modulated. The tunable Fano resonance here together with the\nunderlying physical mechanism can be strategically important in designing\nactive metal-graphene hybrid metamaterials. In addition, the \"sensitivity\" to\nthe graphene layer of the Fano resonance is also highly appreciated in the\nfield of ultrasensitive sensing, where the novel physical mechanism can be\nemployed in sensing other graphene-like two-dimensional (2D) materials or\nbiomolecules with the high conductivity.\n", "title": "Strong interaction between graphene layer and Fano resonance in terahertz metamaterials" }
null
null
[ "Physics" ]
null
true
null
9948
null
Validated
null
null
null
{ "abstract": " Transfer learning through fine-tuning a pre-trained neural network with an\nextremely large dataset, such as ImageNet, can significantly accelerate\ntraining while the accuracy is frequently bottlenecked by the limited dataset\nsize of the new target task. To solve the problem, some regularization methods,\nconstraining the outer layer weights of the target network using the starting\npoint as references (SPAR), have been studied. In this paper, we propose a\nnovel regularized transfer learning framework DELTA, namely DEep Learning\nTransfer using Feature Map with Attention. Instead of constraining the weights\nof neural network, DELTA aims to preserve the outer layer outputs of the target\nnetwork. Specifically, in addition to minimizing the empirical loss, DELTA\nintends to align the outer layer outputs of two networks, through constraining\na subset of feature maps that are precisely selected by attention that has been\nlearned in an supervised learning manner. We evaluate DELTA with the\nstate-of-the-art algorithms, including L2 and L2-SP. The experiment results\nshow that our proposed method outperforms these baselines with higher accuracy\nfor new tasks.\n", "title": "DELTA: DEep Learning Transfer using Feature Map with Attention for Convolutional Networks" }
null
null
null
null
true
null
9949
null
Default
null
null
null
{ "abstract": " In this work, we present an analysis of the Burst failure effect in the\n$H_\\infty$ norm. We present a procedure to perform an analysis between\ndifferent Markov Chain models and a numerical example. In the numerical example\nthe results obtained pointed out that the burst failure effect in the\nperformance does not exceed 6.3%. However, this work is an introduction for a\nwider and more extensive analysis in this subject.\n", "title": "The Burst Failure Influence on the $H_\\infty$ Norm" }
null
null
null
null
true
null
9950
null
Default
null
null
null
{ "abstract": " The architecture of Exascale computing facilities, which involves millions of\nheterogeneous processing units, will deeply impact on scientific applications.\nFuture astrophysical HPC applications must be designed to make such computing\nsystems exploitable. The ExaNeSt H2020 EU-funded project aims to design and\ndevelop an exascale ready prototype based on low-energy-consumption ARM64 cores\nand FPGA accelerators. We participate to the design of the platform and to the\nvalidation of the prototype with cosmological N-body and hydrodynamical codes\nsuited to perform large-scale, high-resolution numerical simulations of cosmic\nstructures formation and evolution. We discuss our activities on astrophysical\napplications to take advantage of the underlying architecture.\n", "title": "Cosmological Simulations in Exascale Era" }
null
null
null
null
true
null
9951
null
Default
null
null
null
{ "abstract": " We study the time evolution of a one-dimensional interacting fermion system\ndescribed by the Luttinger model starting from a nonequilibrium state defined\nby a smooth temperature profile $T(x)$. As a specific example we consider the\ncase when $T(x)$ is equal to $T_L$ ($T_R$) far to the left (right). Using a\nseries expansion in $\\epsilon = 2(T_{R} - T_{L})/(T_{L}+T_{R})$, we compute the\nenergy density, the heat current density, and the fermion two-point correlation\nfunction for all times $t \\geq 0$. For local (delta-function) interactions, the\nfirst two are computed to all orders, giving simple exact expressions involving\nthe Schwarzian derivative of the integral of $T(x)$. For nonlocal interactions,\nbreaking scale invariance, we compute the nonequilibrium steady state (NESS) to\nall orders and the evolution to first order in $\\epsilon$. The heat current in\nthe NESS is universal even when conformal invariance is broken by the\ninteractions, and its dependence on $T_{L,R}$ agrees with numerical results for\nthe $XXZ$ spin chain. Moreover, our analytical formulas predict peaks at short\ntimes in the transition region between different temperatures and show\ndispersion effects that, even if nonuniversal, are qualitatively similar to\nones observed in numerical simulations for related models, such as spin chains\nand interacting lattice fermions.\n", "title": "Time evolution of the Luttinger model with nonuniform temperature profile" }
null
null
null
null
true
null
9952
null
Default
null
null
null
{ "abstract": " The game of chess is the most widely-studied domain in the history of\nartificial intelligence. The strongest programs are based on a combination of\nsophisticated search techniques, domain-specific adaptations, and handcrafted\nevaluation functions that have been refined by human experts over several\ndecades. In contrast, the AlphaGo Zero program recently achieved superhuman\nperformance in the game of Go, by tabula rasa reinforcement learning from games\nof self-play. In this paper, we generalise this approach into a single\nAlphaZero algorithm that can achieve, tabula rasa, superhuman performance in\nmany challenging domains. Starting from random play, and given no domain\nknowledge except the game rules, AlphaZero achieved within 24 hours a\nsuperhuman level of play in the games of chess and shogi (Japanese chess) as\nwell as Go, and convincingly defeated a world-champion program in each case.\n", "title": "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm" }
null
null
null
null
true
null
9953
null
Default
null
null
null
{ "abstract": " We discuss a monotone quantity related to Huisken's monotonicity formula and\nsome technical consequences for mean curvature flow.\n", "title": "Some remarks on Huisken's monotonicity formula for mean curvature flow" }
null
null
null
null
true
null
9954
null
Default
null
null
null
{ "abstract": " We consider forecasting a single time series using high-dimensional\npredictors in the presence of a possible nonlinear forecast function. The\nsufficient forecasting (Fan et al., 2016) used sliced inverse regression to\nestimate lower-dimensional sufficient indices for nonparametric forecasting\nusing factor models. However, Fan et al. (2016) is fundamentally limited to the\ninverse first-moment method, by assuming the restricted fixed number of\nfactors, linearity condition for factors, and monotone effect of factors on the\nresponse. In this work, we study the inverse second-moment method using\ndirectional regression and the inverse third-moment method to extend the\nmethodology and applicability of the sufficient forecasting. As the number of\nfactors diverges with the dimension of predictors, the proposed method relaxes\nthe distributional assumption of the predictor and enhances the capability of\ncapturing the non-monotone effect of factors on the response. We not only\nprovide a high-dimensional analysis of inverse moment methods such as\nexhaustiveness and rate of convergence, but also prove their model selection\nconsistency. The power of our proposed methods is demonstrated in both\nsimulation studies and an empirical study of forecasting monthly macroeconomic\ndata from Q1 1959 to Q1 2016. During our theoretical development, we prove an\ninvariance result for inverse moment methods, which make a separate\ncontribution to the sufficient dimension reduction.\n", "title": "Inverse Moment Methods for Sufficient Forecasting using High-Dimensional Predictors" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
9955
null
Validated
null
null
null
{ "abstract": " We present new radio continuum observations of NGC253 from the Murchison\nWidefield Array at frequencies between 76 and 227 MHz. We model the broadband\nradio spectral energy distribution for the total flux density of NGC253 between\n76 MHz and 11 GHz. The spectrum is best described as a sum of central starburst\nand extended emission. The central component, corresponding to the inner 500pc\nof the starburst region of the galaxy, is best modelled as an internally\nfree-free absorbed synchrotron plasma, with a turnover frequency around 230\nMHz. The extended emission component of the NGC253 spectrum is best described\nas a synchrotron emission flattening at low radio frequencies. We find that 34%\nof the extended emission (outside the central starburst region) at 1 GHz\nbecomes partially absorbed at low radio frequencies. Most of this flattening\noccurs in the western region of the SE halo, and may be indicative of\nsynchrotron self-absorption of shock re-accelerated electrons or an intrinsic\nlow-energy cut off of the electron distribution. Furthermore, we detect the\nlarge-scale synchrotron radio halo of NGC253 in our radio images. At 154 - 231\nMHz the halo displays the well known X-shaped/horn-like structure, and extends\nout to ~8kpc in z-direction (from major axis).\n", "title": "Spectral energy distribution and radio halo of NGC 253 at low radio frequencies" }
null
null
null
null
true
null
9956
null
Default
null
null
null
{ "abstract": " Fractons are emergent particles which are immobile in isolation, but which\ncan move together in dipolar pairs or other small clusters. These exotic\nexcitations naturally occur in certain quantum phases of matter described by\ntensor gauge theories. Previous research has focused on the properties of small\nnumbers of fractons and their interactions, effectively mapping out the\n\"Standard Model\" of fractons. In the present work, however, we consider systems\nwith a finite density of either fractons or their dipolar bound states, with a\nfocus on the $U(1)$ fracton models. We study some of the phases in which\nemergent fractonic matter can exist, thereby initiating the study of the\n\"condensed matter\" of fractons. We begin by considering a system with a finite\ndensity of fractons, which we show can exhibit microemulsion physics, in which\nfractons form small-scale clusters emulsed in a phase dominated by long-range\nrepulsion. We then move on to study systems with a finite density of mobile\ndipoles, which have phases analogous to many conventional condensed matter\nphases. We focus on two major examples: Fermi liquids and quantum Hall phases.\nA finite density of fermionic dipoles will form a Fermi surface and enter a\nFermi liquid phase. Interestingly, this dipolar Fermi liquid exhibits a\nfinite-temperature phase transition, corresponding to an unbinding transition\nof fractons. Finally, we study chiral two-dimensional phases corresponding to\ndipoles in \"quantum Hall\" states of their emergent magnetic field. We study\nnumerous aspects of these generalized quantum Hall systems, such as their edge\ntheories and ground state degeneracies.\n", "title": "Emergent Phases of Fractonic Matter" }
null
null
null
null
true
null
9957
null
Default
null
null
null
{ "abstract": " The generating function of cubic Hodge integrals satisfying the local\nCalabi-Yau condition is conjectured to be a tau function of a new integrable\nsystem which can be regarded as a fractional generalization of the Volterra\nlattice hierarchy, so we name it the fractional Volterra hierarchy. In this\npaper, we give the definition of this integrable hierarchy in terms of Lax pair\nand Hamiltonian formalisms, construct its tau functions, and present its\nmulti-soliton solutions.\n", "title": "Fractional Volterra Hierarchy" }
null
null
null
null
true
null
9958
null
Default
null
null
null
{ "abstract": " We explore the simplification of widely used meta-generalized-gradient\napproximation (mGGA) exchange-correlation functionals to the Laplacian level of\nrefinement by use of approximate kinetic energy density functionals (KEDFs).\nSuch deorbitalization is motivated by the prospect of reducing computational\ncost while recovering a strictly Kohn-Sham local potential framework (rather\nthan the usual generalized Kohn-Sham treatment of mGGAs). A KEDF that has been\nrather successful in solid simulations proves to be inadequate for\ndeorbitalization but we produce other forms which, with parametrization to\nKohn-Sham results (not experimental data) on a small training set, yield rather\ngood results on standard molecular test sets when used to deorbitalize the\nmeta-GGA made very simple, TPSS, and SCAN functionals. We also study the\ndifference between high-fidelity and best-performing deorbitalizations and\ndiscuss possible implications for use in ab initio molecular dynamics\nsimulations of complicated condensed phase systems.\n", "title": "Deorbitalization strategies for meta-GGA exchange-correlation functionals" }
null
null
null
null
true
null
9959
null
Default
null
null
null
{ "abstract": " We prove that counting copies of any graph $F$ in another graph $G$ can be\nachieved using basic matrix operations on the adjacency matrix of $G$.\nMoreover, the resulting algorithm is competitive for medium-sized $F$: our\nalgorithm recovers the best known complexity for rooted 6-clique counting and\nimproves on the best known for 9-cycle counting. Underpinning our proofs is the\nnew result that, for a general class of graph operators, matrix operations are\nhomomorphisms for operations on rooted graphs.\n", "title": "Fast counting of medium-sized rooted subgraphs" }
null
null
[ "Computer Science", "Mathematics" ]
null
true
null
9960
null
Validated
null
null
null
{ "abstract": " Density-functional theory calculations with spin-polarized generalized\ngradient approximation and Hubbard $U$ correction is carried out to investigate\nthe mechanical, structural, electronic and magnetic properties of graphitic\nheptazine with embedded $\\mathrm{Fe}$ atom under bi-axial tensile strain and\napplied perpendicular electric field. It was found that the binding energy of\nheptazine with embedded $\\mathrm{Fe}$ atom system decreases as more tensile\nstrain is applied and increases as more electric field strength is applied. Our\ncalculations also predict a band gap at a peak value of 5 tensile strain but at\nexpense of the structural stability of the system. The band gap opening at 5\ntensile strain is due to distortion in the structure caused by the repulsive\neffect in the cavity between the lone pairs of edge nitrogen atoms and\n$\\mathrm{d}_{xy}/\\mathrm{d}_{x^2-y^2}$ orbital of Fe atom, hence the\nunoccupied $\\mathrm{p}_z$-orbital is forced to shift towards higher energy. The\nelectronic and magnetic properties of the heptazine with embedded $\\mathrm{Fe}$\nsystem under perpendicular electric field up to a peak value of 10\n$\\mathrm{V/nm}$ is also well preserved despite obvious buckled structure. Such\nproperties may be desirable for diluted magnetic semiconductors, spintronics,\nand sensing devices.\n", "title": "First-principles investigation of graphitic carbon nitride monolayer with embedded Fe atom" }
null
null
null
null
true
null
9961
null
Default
null
null
null
{ "abstract": " We introduce a notion of nodal domains for positivity preserving forms. This\nnotion generalizes the classical ones for Laplacians on domains and on graphs.\nWe prove the Courant nodal domain theorem in this generalized setting using\npurely analytical methods.\n", "title": "Courant's Nodal Domain Theorem for Positivity Preserving Forms" }
null
null
null
null
true
null
9962
null
Default
null
null
null
{ "abstract": " Notifications provide a unique mechanism for increasing the effectiveness of\nreal-time information delivery systems. However, notifications that demand\nusers' attention at inopportune moments are more likely to have adverse effects\nand might become a cause of potential disruption rather than proving beneficial\nto users. In order to address these challenges a variety of intelligent\nnotification mechanisms based on monitoring and learning users' behavior have\nbeen proposed. The goal of such mechanisms is maximizing users' receptivity to\nthe delivered information by automatically inferring the right time and the\nright context for sending a certain type of information.\nThis article provides an overview of the current state of the art in the area\nof intelligent notification mechanisms that relies on the awareness of users'\ncontext and preferences. More specifically, we first present a survey of\nstudies focusing on understanding and modeling users' interruptibility and\nreceptivity to notifications from desktops and mobile devices. Then, we discuss\nthe existing challenges and opportunities in developing mechanisms for\nintelligent notification systems in a variety of application scenarios.\n", "title": "Intelligent Notification Systems: A Survey of the State of the Art and Research Challenges" }
null
null
null
null
true
null
9963
null
Default
null
null
null
{ "abstract": " A novel matching based heuristic algorithm designed to detect specially\nformulated infeasible zero-one IPs is presented. The algorithm input is a set\nof nested doubly stochastic subsystems and a set E of instance defining\nvariables set at zero level. The algorithm deduces additional variables at zero\nlevel until either a constraint is violated (the IP is infeasible), or no more\nvariables can be deduced zero (the IP is undecided). All feasible IPs, and all\ninfeasible IPs not detected infeasible are undecided. We successfully apply the\nalgorithm to a small set of specially formulated infeasible zero-one IP\ninstances of the Hamilton cycle decision problem. We show how to model both the\ngraph and subgraph isomorphism decision problems for input to the algorithm.\nIncreased levels of nested doubly stochastic subsystems can be implemented\ndynamically. The algorithm is designed for parallel processing, and for\ninclusion of techniques in addition to matching.\n", "title": "Using Matching to Detect Infeasibility of Some Integer Programs" }
null
null
null
null
true
null
9964
null
Default
null
null
null
{ "abstract": " Neuroinflammation in utero may result in lifelong neurological disabilities.\nAstrocytes play a pivotal role, but the mechanisms are poorly understood. No\nearly postnatal treatment strategies exist to enhance neuroprotective potential\nof astrocytes. We hypothesized that agonism on {\\alpha}7 nicotinic\nacetylcholine receptor ({\\alpha}7nAChR) in fetal astrocytes will augment their\nneuroprotective transcriptome profile, while the antagonistic stimulation of\n{\\alpha}7nAChR will achieve the opposite. Using an in vivo - in vitro model of\ndevelopmental programming of neuroinflammation induced by lipopolysaccharide\n(LPS), we validated this hypothesis in primary fetal sheep astrocytes cultures\nre-exposed to LPS in presence of a selective {\\alpha}7nAChR agonist or\nantagonist. Our RNAseq findings show that a pro-inflammatory astrocyte\ntranscriptome phenotype acquired in vitro by LPS stimulation is reversed with\n{\\alpha}7nAChR agonistic stimulation. Conversely, antagonistic {\\alpha}7nAChR\nstimulation potentiates the pro-inflammatory astrocytic transcriptome\nphenotype. Furthermore, we conduct a secondary transcriptome analysis against\nthe identical {\\alpha}7nAChR experiments in fetal sheep primary microglia\ncultures and against the Simons Simplex Collection for autism spectrum disorder\nand discuss the implications.\n", "title": "α7 nicotinic acetylcholine receptor signaling modulates ovine fetal brain astrocytes transcriptome in response to endotoxin: comparison to microglia, implications for prenatal stress and development of autism spectrum disorder" }
null
null
null
null
true
null
9965
null
Default
null
null
null
{ "abstract": " Skyrmions are disk-like objects that typically form triangular crystals in\ntwo dimensional systems. This situation is analogous to the so-called \"pancake\nvortices\" of quasi-two dimensional superconductors. The way in which skyrmion\ndisks or pancake skyrmions pile up in layered centro-symmetric materials is\ndictated by the inter-layer exchange. Unbiased Monte Carlo simulations and\nsimple stabilization arguments reveal face centered cubic and hexagonal close\npacked skyrmion crystals for different choices of the inter-layer exchange, in\naddition to the conventional triangular crystal of skyrmion lines. Moreover, an\ninhomogeneous current induces sliding motion of pancake skyrmions, indicating\nthat they behave as effective mesoscale particles.\n", "title": "Face centered cubic and hexagonal close packed skyrmion crystals in centro-symmetric magnets" }
null
null
[ "Physics" ]
null
true
null
9966
null
Validated
null
null
null
{ "abstract": " Deep convolution neural networks demonstrate impressive results in the\nsuper-resolution domain. A series of studies concentrate on improving peak\nsignal noise ratio (PSNR) by using much deeper layers, which are not friendly\nto constrained resources. Pursuing a trade-off between the restoration capacity\nand the simplicity of models is still non-trivial. Recent contributions are\nstruggling to manually maximize this balance, while our work achieves the same\ngoal automatically with neural architecture search. Specifically, we handle\nsuper-resolution with a multi-objective approach. We also propose an elastic\nsearch tactic at both micro and macro level, based on a hybrid controller that\nprofits from evolutionary computation and reinforcement learning. Quantitative\nexperiments help us to draw a conclusion that our generated models dominate\nmost of the state-of-the-art methods with respect to the individual FLOPS.\n", "title": "Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search" }
null
null
null
null
true
null
9967
null
Default
null
null
null
{ "abstract": " We formulate simple assumptions, implying the Robbins-Monro conditions for\nthe $Q$-learning algorithm with the local learning rate, depending on the\nnumber of visits of a particular state-action pair (local clock) and the number\nof iteration (global clock). It is assumed that the Markov decision process is\ncommunicating and the learning policy ensures the persistent exploration. The\nrestrictions are imposed on the functional dependence of the learning rate on\nthe local and global clocks. The result partially confirms the conjecture of\nBradkte (1994).\n", "title": "Robbins-Monro conditions for persistent exploration learning strategies" }
null
null
null
null
true
null
9968
null
Default
null
null
null
{ "abstract": " The Holstein model describes the motion of a tight-binding tracer particle\ninteracting with a field of quantum harmonic oscillators. We consider this\nmodel with an on-site random potential. Provided the hopping amplitude for the\nparticle is small, we prove localization for matrix elements of the resolvent,\nin particle position and in the field Fock space. These bounds imply a form of\ndynamical localization for the particle position that leaves open the\npossibility of resonant tunneling in Fock space between equivalent field\nconfigurations.\n", "title": "Localization in the Disordered Holstein model" }
null
null
null
null
true
null
9969
null
Default
null
null
null
{ "abstract": " A key challenge in complex visuomotor control is learning abstract\nrepresentations that are effective for specifying goals, planning, and\ngeneralization. To this end, we introduce universal planning networks (UPN).\nUPNs embed differentiable planning within a goal-directed policy. This planning\ncomputation unrolls a forward model in a latent space and infers an optimal\naction plan through gradient descent trajectory optimization. The\nplan-by-gradient-descent process and its underlying representations are learned\nend-to-end to directly optimize a supervised imitation learning objective. We\nfind that the representations learned are not only effective for goal-directed\nvisual imitation via gradient-based trajectory optimization, but can also\nprovide a metric for specifying goals using images. The learned representations\ncan be leveraged to specify distance-based rewards to reach new target states\nfor model-free reinforcement learning, resulting in substantially more\neffective learning when solving new tasks described via image-based goals. We\nwere able to achieve successful transfer of visuomotor planning strategies\nacross robots with significantly different morphologies and actuation\ncapabilities.\n", "title": "Universal Planning Networks" }
null
null
[ "Computer Science", "Statistics" ]
null
true
null
9970
null
Validated
null
null
null
{ "abstract": " Recent advances in Representation Learning and Adversarial Training seem to\nsucceed in removing unwanted features from the learned representation. We show\nthat demographic information of authors is encoded in -- and can be recovered\nfrom -- the intermediate representations learned by text-based neural\nclassifiers. The implication is that decisions of classifiers trained on\ntextual data are not agnostic to -- and likely condition on -- demographic\nattributes. When attempting to remove such demographic information using\nadversarial training, we find that while the adversarial component achieves\nchance-level development-set accuracy during training, a post-hoc classifier,\ntrained on the encoded sentences from the first part, still manages to reach\nsubstantially higher classification accuracies on the same data. This behavior\nis consistent across several tasks, demographic properties and datasets. We\nexplore several techniques to improve the effectiveness of the adversarial\ncomponent. Our main conclusion is a cautionary one: do not rely on the\nadversarial training to achieve invariant representation to sensitive features.\n", "title": "Adversarial Removal of Demographic Attributes from Text Data" }
null
null
null
null
true
null
9971
null
Default
null
null
null
{ "abstract": " We theoretically analyze the effect of parameter fluctuations on the\nsuperradiance phase transition in a setup where a large number of\nsuperconducting qubits are coupled to a single cavity. We include parameter\nfluctuations that are typical of superconducting architectures, such as\nfluctuations in qubit gaps, bias points and qubit-cavity coupling strengths. We\nfind that the phase transition should occur in this case, although it manifests\nitself somewhat differently from the case with no fluctuations. We also find\nthat fluctuations in the qubit gaps and qubit-cavity coupling strengths do not\nnecessarily make it more difficult to reach the transition point. Fluctuations\nin the bias points, however, increase the coupling strength required to reach\nthe quantum phase transition point and enter the superradiant phase. Similarly,\nthese fluctuations lower the critical temperature for the thermal phase\ntransition.\n", "title": "Superradiance phase transition in the presence of parameter fluctuations" }
null
null
null
null
true
null
9972
null
Default
null
null
null
{ "abstract": " Exploratory testing is neither black nor white, but rather a continuum of\nexploration exists. In this research we propose an approach for decision\nsupport helping practitioners to distribute time between different degrees of\nexploratory testing on that continuum. To make the continuum manageable, five\nlevels have been defined: freestyle testing, high, medium and low degrees of\nexploration, and scripted testing. The decision support approach is based on\nthe repertory grid technique. The approach has been used in one company. The\nmethod for data collection was focus groups. The results showed that the\nproposed approach aids practitioners in the reflection of what exploratory\ntesting levels to use, and aligns their understanding for priorities of\ndecision criteria and the performance of exploratory testing levels in their\ncontexts. The findings also showed that the participating company, which is\ncurrently conducting mostly scripted testing, should spend more time on testing\nusing higher degrees of exploration in comparison to scripted testing.\n", "title": "A Decision Support Method for Recommending Degrees of Exploration in Exploratory Testing" }
null
null
null
null
true
null
9973
null
Default
null
null
null
{ "abstract": " In this paper we suggest that in the framework of the Category Theory it is\npossible to demonstrate the mathematical and logical \\textit{dual equivalence}\nbetween the category of the $q$-deformed Hopf Coalgebras and the category of\nthe $q$-deformed Hopf Algebras in QFT, interpreted as a thermal field theory.\nEach pair algebra-coalgebra characterizes, indeed, a QFT system and its\nmirroring thermal bath, respectively, so to model dissipative quantum systems\npersistently in far-from-equilibrium conditions, with an evident significance\nalso for biological sciences. The $q$-deformed Hopf Coalgebras and the\n$q$-deformed Hopf Algebras constitute two dual categories because characterized\nby the same functor $T$, related with the Bogoliubov transform, and by its\ncontravariant application $T^{op}$, respectively. The \\textit{q}-deformation\nparameter, indeed, is related to the Bogoliubov angle, and it is effectively a\nthermal parameter. Therefore, the different values of $q$ identify univocally,\nand then label, the vacua appearing in the foliation process of the quantum\nvacuum. This means that, in the framework of Universal Coalgebra, as general\ntheory of dynamic and computing systems (\"labelled state-transition systems\"),\nthe so labelled infinitely many quantum vacua can be interpreted as the Final\nCoalgebra of an \"Infinite State Black-Box Machine\". All this opens the way to\nthe possibility of designing a new class of universal quantum computing\narchitectures based on this coalgebraic formulation of QFT, as its ability of\nnaturally generating a Fibonacci progression demonstrates.\n", "title": "Quantum Field Theory and Coalgebraic Logic in Theoretical Computer Science" }
null
null
null
null
true
null
9974
null
Default
null
null
null
{ "abstract": " Accurately predicting when and where ambulance call-outs occur can reduce\nresponse times and ensure the patient receives urgent care sooner. Here we\npresent a novel method for ambulance demand prediction using Gaussian process\nregression (GPR) in time and geographic space. The method exhibits superior\naccuracy to MEDIC, a method which has been used in industry. The use of GPR has\nadditional benefits such as the quantification of uncertainty with each\nprediction, the choice of kernel functions to encode prior knowledge and the\nability to capture spatial correlation. Measures to increase the utility of GPR\nin the current context, with large training sets and a Poisson-distributed\noutput, are outlined.\n", "title": "Spatiotemporal Prediction of Ambulance Demand using Gaussian Process Regression" }
null
null
null
null
true
null
9975
null
Default
null
null
null
{ "abstract": " Curated web archive collections contain focused digital content which is\ncollected by archiving organizations, groups, and individuals to provide a\nrepresentative sample covering specific topics and events to preserve them for\nfuture exploration and analysis. In this paper, we discuss how to best support\ncollaborative construction and exploration of these collections through the\nArchiveWeb system. ArchiveWeb has been developed using an iterative\nevaluation-driven design-based research approach, with considerable user\nfeedback at all stages. The first part of this paper describes the important\ninsights we gained from our initial requirements engineering phase during the\nfirst year of the project and the main functionalities of the current\nArchiveWeb system for searching, constructing, exploring, and discussing web\narchive collections. The second part summarizes the feedback we received on\nthis version from archiving organizations and libraries, as well as our\ncorresponding plans for improving and extending the system for the next\nrelease.\n", "title": "ArchiveWeb: collaboratively extending and exploring web archive collections - How would you like to work with your collections?" }
null
null
[ "Computer Science" ]
null
true
null
9976
null
Validated
null
null
null
{ "abstract": " Learning network representations has a variety of applications, such as\nnetwork classification. Most existing work in this area focuses on static\nundirected networks and do not account for presence of directed edges or\ntemporarily changes. Furthermore, most work focuses on node representations\nthat do poorly on tasks like network classification. In this paper, we propose\na novel, flexible and scalable network embedding methodology, \\emph{gl2vec},\nfor network classification in both static and temporal directed networks.\n\\emph{gl2vec} constructs vectors for feature representation using static or\ntemporal network graphlet distributions and a null model for comparing them\nagainst random graphs. We argue that \\emph{gl2vec} can be used to classify and\ncompare networks of varying sizes and time period with high accuracy. We\ndemonstrate the efficacy and usability of \\emph{gl2vec} over existing\nstate-of-the-art methods on network classification tasks such as network type\nclassification and subgraph identification in several real-world static and\ntemporal directed networks. Experimental results further show that\n\\emph{gl2vec}, concatenated with a wide range of state-of-the-art methods,\nimproves classification accuracy by up to $10\\%$ in real-world applications\nsuch as detecting departments for subgraphs in an email network or identifying\nmobile users given their app switching behaviors represented as static or\ntemporal directed networks.\n", "title": "gl2vec: Learning Feature Representation Using Graphlets for Directed Networks" }
null
null
null
null
true
null
9977
null
Default
null
null
null
{ "abstract": " Superconductivity in noncentrosymmetric compounds has attracted sustained\ninterest in the last decades. Here we present a detailed study on the\ntransport, thermodynamic properties and the band structure of the\nnoncentrosymmetric superconductor La$_7$Ir$_3$ ($T_c$ $\\sim$2.3 K) that was\nrecently proposed to break the time-reversal symmetry. It is found that\nLa$_7$Ir$_3$ displays a moderately large electronic heat capacity (Sommerfeld\ncoefficient $\\gamma_n$ $\\sim$ 53.1 mJ/mol $\\text{K}^2$) and a significantly\nenhanced Kadowaki-Woods ratio (KWR $\\sim$ 32 $\\mu\\Omega$ cm mol$^2$ K$^2$\nJ$^{-2}$) that is greater than the typical value ($\\sim$ 10 $\\mu\\Omega$ cm\nmol$^2$ K$^2$ J$^{-2}$) for strongly correlated electron systems. The upper\ncritical field $H_{c2}$ was seen to be nicely described by the single-band\nWerthamer-Helfand-Hohenberg model down to very low temperatures. The\nhydrostatic pressure effects on the superconductivity were also investigated.\nThe heat capacity below $T_c$ reveals a dominant s-wave gap with the magnitude\nclose to the BCS value. The first-principles calculations yield the\nelectron-phonon coupling constant $\\lambda$ = 0.81 and the logarithmically\naveraged frequency $\\omega_{ln}$ = 78.5 K, resulting in a theoretical $T_c$ =\n2.5 K, close to the experimental value. Our calculations suggest that the\nenhanced electronic heat capacity is more likely due to electron-phonon\ncoupling, rather than the electron-electron correlation effects. Collectively,\nthese results place severe constraints on any theory of exotic\nsuperconductivity in this system.\n", "title": "Evidence of s-wave superconductivity in the noncentrosymmetric La$_7$Ir$_3$" }
null
null
null
null
true
null
9978
null
Default
null
null
null
{ "abstract": " In this article the $p$-essential dimension of generic symbols over fields of\ncharacteristic $p$ is studied. In particular, the $p$-essential dimension of\nthe length $\\ell$ generic $p$-symbol of degree $n+1$ is bounded below by\n$n+\\ell$ when the base field is algebraically closed of characteristic $p$. The\nproof uses new techniques for working with residues in Milne-Kato\n$p$-cohomology and builds on work of Babic and Chernousov in the Witt group in\ncharacteristic 2. Two corollaries on $p$-symbol algebras (i.e, degree 2\nsymbols) result from this work. The generic $p$-symbol algebra of length $\\ell$\nis shown to have $p$-essential dimension equal to $\\ell+1$ as a $p$-torsion\nBrauer class. The second is a lower bound of $\\ell+1$ on the $p$-essential\ndimension of the functor $\\mathrm{Alg}_{p^\\ell,p}$. Roughly speaking this says\nthat you will need at least $\\ell+1$ independent parameters to be able to\nspecify any given algebra of degree $p^{\\ell}$ and exponent $p$ over a field of\ncharacteristic $p$ and improves on the previously established lower bound of 3.\n", "title": "Essential Dimension of Generic Symbols in Characteristic p" }
null
null
null
null
true
null
9979
null
Default
null
null
null
{ "abstract": " Market research is generally performed by surveying a representative sample\nof customers with questions that includes contexts such as psycho-graphics,\ndemographics, attitude and product preferences. Survey responses are used to\nsegment the customers into various groups that are useful for targeted\nmarketing and communication. Reducing the number of questions asked to the\ncustomer has utility for businesses to scale the market research to a large\nnumber of customers. In this work, we model this task using Bayesian networks.\nWe demonstrate the effectiveness of our approach using an example market\nsegmentation of broadband customers.\n", "title": "Ask less - Scale Market Research without Annoying Your Customers" }
null
null
null
null
true
null
9980
null
Default
null
null
null
{ "abstract": " Optimization on Riemannian manifolds widely arises in eigenvalue computation,\ndensity functional theory, Bose-Einstein condensates, low rank nearest\ncorrelation, image registration, and signal processing, etc. We propose an\nadaptive regularized Newton method which approximates the original objective\nfunction by the second-order Taylor expansion in Euclidean space but keeps the\nRiemannian manifold constraints. The regularization term in the objective\nfunction of the subproblem enables us to establish a Cauchy-point like\ncondition as the standard trust-region method for proving global convergence.\nThe subproblem can be solved inexactly either by first-order methods or a\nmodified Riemannian Newton method. In the later case, it can further take\nadvantage of negative curvature directions. Both global convergence and\nsuperlinear local convergence are guaranteed under mild conditions. Extensive\ncomputational experiments and comparisons with other state-of-the-art methods\nindicate that the proposed algorithm is very promising.\n", "title": "Adaptive Regularized Newton Method for Riemannian Optimization" }
null
null
[ "Mathematics" ]
null
true
null
9981
null
Validated
null
null
null
{ "abstract": " In this paper, we formalize the notion of distributed sensitive social\nnetworks (DSSNs), which encompasses networks like enmity networks, financial\ntransaction networks, supply chain networks and sexual relationship networks.\nCompared to the well studied traditional social networks, DSSNs are often more\nchallenging to study, given the privacy concerns of the individuals on whom the\nnetwork is knit. In the current work, we envision the use of secure multiparty\ntools and techniques for performing privacy preserving social network analysis\nover DSSNs. As a step towards realizing this, we design efficient\ndata-oblivious algorithms for computing the K-shell decomposition and the\nPageRank centrality measure for a given DSSN. The designed data-oblivious\nalgorithms can be translated into equivalent secure computation protocols. We\nalso list a string of challenges that are needed to be addressed, for employing\nsecure computation protocols as a practical solution for studying DSSNs.\n", "title": "MPC meets SNA: A Privacy Preserving Analysis of Distributed Sensitive Social Networks" }
null
null
null
null
true
null
9982
null
Default
null
null
null
{ "abstract": " We propose a new approach to the problem of optimizing autoencoders for lossy\nimage compression. New media formats, changing hardware technology, as well as\ndiverse requirements and content types create a need for compression algorithms\nwhich are more flexible than existing codecs. Autoencoders have the potential\nto address this need, but are difficult to optimize directly due to the\ninherent non-differentiabilty of the compression loss. We here show that\nminimal changes to the loss are sufficient to train deep autoencoders\ncompetitive with JPEG 2000 and outperforming recently proposed approaches based\non RNNs. Our network is furthermore computationally efficient thanks to a\nsub-pixel architecture, which makes it suitable for high-resolution images.\nThis is in contrast to previous work on autoencoders for compression using\ncoarser approximations, shallower architectures, computationally expensive\nmethods, or focusing on small images.\n", "title": "Lossy Image Compression with Compressive Autoencoders" }
null
null
null
null
true
null
9983
null
Default
null
null
null
{ "abstract": " We present a slow control system to gather all relevant environment\ninformation necessary to effectively and reliably run an HPC (High Performance\nComputing) system at a high value over price ratio. The scalable and reliable\noverall concept is presented as well as a newly developed hardware device for\nsensor read out. This device incorporates a Raspberry Pi, an Arduino and PoE\n(Power over Ethernet) functionality in a compact form factor. The system is in\nuse at the 2 PFLOPS cluster of the Johannes Gutenberg-University and\nHelmholtz-Institute in Mainz.\n", "title": "A cost effective and reliable environment monitoring system for HPC applications" }
null
null
null
null
true
null
9984
null
Default
null
null
null
{ "abstract": " I studied the non-equilibrium response of an initial Néel state under\ntime evolution with the Kitaev honeycomb model. This time evolution can be\ncomputed using a random sampling over all relevant flux configurations. With\nisotropic interactions the system quickly equilibrates into a steady state\nvalence bond solid. Anisotropy induces an exponentially long prethermal regime\nwhose dynamics are governed by an effective toric code. Signatures of topology\nare absent, however, due to the high energy density nature of the initial\nstate.\n", "title": "Quenching the Kitaev honeycomb model" }
null
null
[ "Physics" ]
null
true
null
9985
null
Validated
null
null
null
{ "abstract": " We propose a data-driven framework for optimizing privacy-preserving data\nrelease mechanisms toward the information-theoretically optimal tradeoff\nbetween minimizing distortion of useful data and concealing sensitive\ninformation. Our approach employs adversarially-trained neural networks to\nimplement randomized mechanisms and to perform a variational approximation of\nmutual information privacy. We empirically validate our Privacy-Preserving\nAdversarial Networks (PPAN) framework with experiments conducted on discrete\nand continuous synthetic data, as well as the MNIST handwritten digits dataset.\nWith the synthetic data, we find that our model-agnostic PPAN approach achieves\ntradeoff points very close to the optimal tradeoffs that are\nanalytically-derived from model knowledge. In experiments with the MNIST data,\nwe visually demonstrate a learned tradeoff between minimizing the pixel-level\ndistortion versus concealing the written digit.\n", "title": "Privacy-Preserving Adversarial Networks" }
null
null
null
null
true
null
9986
null
Default
null
null
null
{ "abstract": " The chromium arsenides BaCr2As2 and BaCrFeAs2 with ThCr2Si2 type structure\n(space group I4/mmm; also adopted by '122' iron arsenide superconductors) have\nbeen suggested as mother compounds for possible new superconductors. DFT-based\ncalculations of the electronic structure evidence metallic antiferromagnetic\nground states for both compounds. By powder neutron diffraction we confirm for\nBaCr2As2 a robust ordering in the antiferromagnetic G-type structure at T_N =\n580 K with mu_Cr = 1.9 mu_B at T = 2K. Anomalies in the lattice parameters\npoint to magneto-structural coupling effects. In BaCrFeAs2 the Cr and Fe atoms\nrandomly occupy the transition-metal site and G-type order is found below 265 K\nwith mu_Cr/Fe = 1.1 mu_B. 57Fe Moessbauer spectroscopy demonstrates that only a\nsmall ordered moment is associated with the Fe atoms, in agreement with\nelectronic structure calculations with mu_Fe ~ 0. The temperature dependence of\nthe hyperfine field does not follow that of the total moments. Both compounds\nare metallic but show large enhancements of the linear specific heat\ncoefficient gamma with respect to the band structure values. The metallic state\nand the electrical transport in BaCrFeAs2 is dominated by the atomic disorder\nof Cr and Fe and partial magnetic disorder of Fe. Our results indicate that\nNeel-type order is unfavorable for the Fe moments and thus it is destabilized\nwith increasing iron content.\n", "title": "Antiferromagnetic structure and electronic properties of BaCr2As2 and BaCrFeAs2" }
null
null
null
null
true
null
9987
null
Default
null
null
null
{ "abstract": " Development of high strength carbon fibers (CFs) requires an understanding of\nthe relationship between the processing conditions, microstructure and\nresulting properties. We developed a molecular model that combines kinetic\nMonte Carlo (KMC) and molecular dynamics (MD) techniques to predict the\nmicrostructure evolution during the carbonization process of carbon fiber\nmanufacturing. The model accurately predicts the cross-sectional microstructure\nof carbon fibers, predicting features such as graphitic sheets and hairpin\nstructures that have been observed experimentally. We predict the transverse\nmodulus of the resulting fibers and find that the modulus is slightly lower\nthan experimental values, but is up to an order of magnitude lower than ideal\ngraphite. We attribute this to the perfect longitudinal texture of our\nsimulated structures, as well as the chain sliding mechanism that governs the\ndeformation of the fibers, rather than the van der Waals interaction that\ngoverns the modulus for graphite. We also observe that high reaction rates\nresult in porous structures that have lower moduli.\n", "title": "Molecular Modeling of the Microstructure Evolution during the Carbonization of PAN-Based Carbon Fibers" }
null
null
null
null
true
null
9988
null
Default
null
null
null
{ "abstract": " A model of cosmological inflation is proposed in which field space is a\nhyperbolic plane. The inflaton never slow-rolls, and instead orbits the bottom\nof the potential, buoyed by a centrifugal force. Though initial velocities\nredshift away during inflation, in negatively curved spaces angular momentum\nnaturally starts exponentially large and remains relevant throughout. Quantum\nfluctuations produce perturbations that are adiabatic and approximately scale\ninvariant; strikingly, in a certain parameter regime the perturbations can grow\ndouble-exponentially during horizon crossing.\n", "title": "Hyperinflation" }
null
null
[ "Physics" ]
null
true
null
9989
null
Validated
null
null
null
{ "abstract": " Often, more time is spent on finding a model that works well, rather than\ntuning the model and working directly with the dataset. Our research began as\nan attempt to improve upon a simple Recurrent Neural Network for answering\n\"simple\" first-order questions (QA-RNN), developed by Ferhan Ture and Oliver\nJojic, from Comcast Labs, using the SimpleQuestions dataset. Their baseline\nmodel, a bidirectional, 2-layer LSTM RNN and a GRU RNN, have accuracies of 0.94\nand 0.90, for entity detection and relation prediction, respectively. We fine\ntuned these models by doing substantial hyper-parameter tuning, getting\nresulting accuracies of 0.70 and 0.80, for entity detection and relation\nprediction, respectively. An accuracy of 0.984 was obtained on entity detection\nusing a 1-layer LSTM, where preprocessing was done by removing all words not\npart of a noun chunk from the question. 100% of the dataset was available for\nrelation prediction, but only 20% of the dataset, was available for entity\ndetection, which we believe to be much of the reason for our initial\ndifficulties in replicating their result, despite the fact we were able to\nimprove on their entity detection results.\n", "title": "Improving on Q & A Recurrent Neural Networks Using Noun-Tagging" }
null
null
null
null
true
null
9990
null
Default
null
null
null
{ "abstract": " We show that the convergence rate of $\\ell^1$-regularization for linear\nill-posed equations is always $O(\\delta)$ if the exact solution is sparse and\nif the considered operator is injective and weak*-to-weak continuous. Under the\nsame assumptions convergence rates in case of non-sparse solutions are proven.\nThe results base on the fact that certain source-type conditions used in the\nliterature for proving convergence rates are automatically satisfied.\n", "title": "Injectivity and weak*-to-weak continuity suffice for convergence rates in $\\ell^1$-regularization" }
null
null
null
null
true
null
9991
null
Default
null
null
null
{ "abstract": " The Mu2e experiment will search for coherent, neutrino-less conversion of\nmuons into electrons in the Coulomb field of an aluminum nucleus with a\nsensitivity of four orders of magnitude better than previous experiments. The\nsignature of this process is an electron with energy nearly equal to the muon\nmass. Mu2e relies on a precision (0.1%) measurement of the outgoing electron\nmomentum to separate signal from background. In order to achieve this goal,\nMu2e has chosen a very low-mass straw tracker, made of 20,736 5 mm diameter\nthin-walled (15 $\\mu$m) Mylar straws, held under tension to avoid the need for\nsupports within the active volume, and arranged in an approximately 3 m long by\n0.7 m radius cylinder, operated in vacuum and a 1 T magnetic field. Groups of\n96 straws are assembled into modules, called panels. We present the prototype\nand the assembly procedure for a Mu2e tracker panel built at Fermilab\n", "title": "A Panel Prototype for the Mu2e Straw Tube Tracker at Fermilab" }
null
null
null
null
true
null
9992
null
Default
null
null
null
{ "abstract": " The current prominence and future promises of the Internet of Things (IoT),\nInternet of Everything (IoE) and Internet of Nano Things (IoNT) are extensively\nreviewed and a summary survey report is presented. The analysis clearly\ndistinguishes between IoT and IoE which are wrongly considered to be the same\nby many people. Upon examining the current advancement in the fields of IoT,\nIoE and IoNT, the paper presents scenarios for the possible future expansion of\ntheir applications.\n", "title": "A Review on Internet of Things (IoT), Internet of Everything (IoE) and Internet of Nano Things (IoNT)" }
null
null
null
null
true
null
9993
null
Default
null
null
null
{ "abstract": " Adversarial examples are perturbed inputs designed to fool machine learning\nmodels. Adversarial training injects such examples into training data to\nincrease robustness. To scale this technique to large datasets, perturbations\nare crafted using fast single-step methods that maximize a linear approximation\nof the model's loss. We show that this form of adversarial training converges\nto a degenerate global minimum, wherein small curvature artifacts near the data\npoints obfuscate a linear approximation of the loss. The model thus learns to\ngenerate weak perturbations, rather than defend against strong ones. As a\nresult, we find that adversarial training remains vulnerable to black-box\nattacks, where we transfer perturbations computed on undefended models, as well\nas to a powerful novel single-step attack that escapes the non-smooth vicinity\nof the input data via a small random step. We further introduce Ensemble\nAdversarial Training, a technique that augments training data with\nperturbations transferred from other models. On ImageNet, Ensemble Adversarial\nTraining yields models with strong robustness to black-box attacks. In\nparticular, our most robust model won the first round of the NIPS 2017\ncompetition on Defenses against Adversarial Attacks.\n", "title": "Ensemble Adversarial Training: Attacks and Defenses" }
null
null
null
null
true
null
9994
null
Default
null
null
null
{ "abstract": " We construct a sample of X-ray bright optically faint active galactic nuclei\nby combining Subaru Hyper Suprime-Cam, XMM-Newton, and infrared source\ncatalogs. 53 X-ray sources satisfying i band magnitude fainter than 23.5 mag\nand X-ray counts with EPIC-PN detector larger than 70 are selected from 9.1\ndeg^2, and their spectral energy distributions (SEDs) and X-ray spectra are\nanalyzed. 44 objects with an X-ray to i-band flux ratio F_X/F_i>10 are\nclassified as extreme X-ray-to-optical flux sources. SEDs of 48 among 53 are\nrepresented by templates of type 2 AGNs or starforming galaxies and show\nsignature of stellar emission from host galaxies in the optical in the source\nrest frame. Infrared/optical SEDs indicate significant contribution of emission\nfrom dust to infrared fluxes and that the central AGN is dust obscured.\nPhotometric redshifts determined from the SEDs are in the range of 0.6-2.5.\nX-ray spectra are fitted by an absorbed power law model, and the intrinsic\nabsorption column densities are modest (best-fit log N_H = 20.5-23.5 cm^-2 in\nmost cases). The absorption corrected X-ray luminosities are in the range of\n6x10^42 - 2x10^45 erg s^-1. 20 objects are classified as type 2 quasars based\non X-ray luminsosity and N_H. The optical faintness is explained by a\ncombination of redshifts (mostly z>1.0), strong dust extinction, and in part a\nlarge ratio of dust/gas.\n", "title": "X-Ray bright optically faint active galactic nuclei in the Subaru Hyper Suprime-Cam wide survey" }
null
null
null
null
true
null
9995
null
Default
null
null
null
{ "abstract": " This article is the second of a pair of articles about the Goldman symplectic\nform on the PSL(V )-Hitchin component. We show that any ideal triangulation on\na closed connected surface of genus at least 2, and any compatible bridge\nsystem determine a symplectic trivialization of the tangent bundle to the\nHitchin component. Using this, we prove that a large class of flows defined in\nthe companion paper [SWZ17] are Hamiltonian. We also construct an explicit\ncollection of Hamiltonian vector fields on the Hitchin component that give a\nsymplectic basis at every point. These are used in the companion paper to\ncompute explicit global Darboux coordinates for the Hitchin component.\n", "title": "The Goldman symplectic form on the PSL(V)-Hitchin component" }
null
null
null
null
true
null
9996
null
Default
null
null
null
{ "abstract": " TUS is the world's first orbital detector of extreme energy cosmic rays\n(EECRs), which operates as a part of the scientific payload of the Lomonosov\nsatellite since May 19, 2016. TUS employs the nocturnal atmosphere of the Earth\nto register ultraviolet (UV) fluorescence and Cherenkov radiation from\nextensive air showers generated by EECRs as well as UV radiation from lightning\nstrikes and transient luminous events, micro-meteors and space debris. The\nfirst months of its operation in orbit have demonstrated an unexpectedly rich\nvariety of UV radiation in the atmosphere. We briefly review the design of TUS\nand present a few examples of events recorded in a mode dedicated to\nregistering EECRs.\n", "title": "Early Results from TUS, the First Orbital Detector of Extreme Energy Cosmic Rays" }
null
null
null
null
true
null
9997
null
Default
null
null
null
{ "abstract": " In this thesis we present the novel semi-supervised network-based algorithm\nP-Net, which is able to rank and classify patients with respect to a specific\nphenotype or clinical outcome under study. The peculiar and innovative\ncharacteristic of this method is that it builds a network of samples/patients,\nwhere the nodes represent the samples and the edges are functional or genetic\nrelationships between individuals (e.g. similarity of expression profiles), to\npredict the phenotype under study. In other words, it constructs the network in\nthe \"sample space\" and not in the \"biomarker space\" (where nodes represent\nbiomolecules (e.g. genes, proteins) and edges represent functional or genetic\nrelationships between nodes), as usual in state-of-the-art methods. To assess\nthe performances of P-Net, we apply it on three different publicly available\ndatasets from patients afflicted with a specific type of tumor: pancreatic\ncancer, melanoma and ovarian cancer dataset, by using the data and following\nthe experimental set-up proposed in two recently published papers [Barter et\nal., 2014, Winter et al., 2012]. We show that network-based methods in the\n\"sample space\" can achieve results competitive with classical supervised\ninductive systems. Moreover, the graph representation of the samples can be\neasily visualized through networks and can be used to gain visual clues about\nthe relationships between samples, taking into account the phenotype associated\nor predicted for each sample. To our knowledge this is one of the first works\nthat proposes graph-based algorithms working in the \"sample space\" of the\nbiomolecular profiles of the patients to predict their phenotype or outcome,\nthus contributing to a novel research line in the framework of the Network\nMedicine.\n", "title": "Network-based methods for outcome prediction in the \"sample space\"" }
null
null
null
null
true
null
9998
null
Default
null
null
null
{ "abstract": " We derive out naturally some important distributions such as high order\nnormal distributions and high order exponent distributions and the Gamma\ndistribution from a geometrical way. Further, we obtain the exact mean-values\nof integral form functionals in the balls of continuous functions space with\n$p-$norm, and show the complete concentration of measure phenomenon which means\nthat a functional takes its average on a ball with probability 1, from which we\nhave nonlinear exchange formula of expectation.\n", "title": "The geometrical origins of some distributions and the complete concentration of measure phenomenon for mean-values of functionals" }
null
null
[ "Mathematics", "Statistics" ]
null
true
null
9999
null
Validated
null
null
null
{ "abstract": " We give a generalization of a theorem of Silverman and Stephens regarding the\nsigns in an elliptic divisibility sequence to the case of an elliptic net. We\nalso describe applications of this theorem in the study of the distribution of\nthe signs in elliptic nets and generating elliptic nets using the denominators\nof the linear combination of points on elliptic curves.\n", "title": "The Signs in Elliptic Nets" }
null
null
null
null
true
null
10000
null
Default
null
null