Dataset schema (one record per paper: title, abstract, six binary category labels):
- title: string, length 7 to 239
- abstract: string, length 7 to 2.76k
- cs: int64, 0 or 1
- phy: int64, 0 or 1
- math: int64, 0 or 1
- stat: int64, 0 or 1
- quantitative biology (q-bio): int64, 0 or 1
- quantitative finance (q-fin): int64, 0 or 1
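For orientation, here is a minimal sketch of how records with this schema can be loaded and used as a multi-label classification target. The file name is hypothetical, since the source of this dump is not specified.

```python
import pandas as pd

# Hypothetical file name: the actual source of this dump is not specified.
df = pd.read_csv("arxiv_multilabel.csv")

label_cols = ["cs", "phy", "math", "stat",
              "quantitative biology", "quantitative finance"]
text = df["title"] + ". " + df["abstract"]  # text input for a classifier
Y = df[label_cols].values                   # multi-hot targets; labels may overlap
print(Y.sum(axis=0))                        # number of papers per category
```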
Finite temperature disordered bosons in two dimensions
We study phase transitions in a two-dimensional weakly interacting Bose gas in a random potential at finite temperatures. We identify superfluid, normal fluid, and insulator phases and construct the phase diagram. At T=0 there is a tricritical point where the three phases coexist. The truncation of the energy distribution at the trap barrier, a generic phenomenon in cold-atom systems, limits the growth of the localization length, and, in contrast to the thermodynamic limit, the insulator phase is present at any temperature.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Morphology and properties evolution upon ring-opening polymerization during extrusion of cyclic butylene terephthalate and graphene-related materials into thermally conductive nanocomposites
In this work, we address the thermal conductivity before and after in-situ ring-opening polymerization of cyclic butylene terephthalate (CBT) into poly(butylene terephthalate) (pCBT) in the presence of graphene-related materials (GRM), to gain insight into the modification of nanocomposite morphology upon polymerization. Five types of GRM were used: one type of graphite nanoplatelets, two different grades of reduced graphene oxide (rGO), and the same rGO grades after thermal annealing for 1 hour at 1700°C under vacuum to reduce their defectiveness. Polymerization of CBT into pCBT, morphology, and nanoparticle organization were investigated by means of differential scanning calorimetry, electron microscopy and rheology. Electrical and thermal properties were investigated by means of volumetric resistivity and bulk thermal conductivity measurements. In particular, the reduction of nanoflake aspect ratio during ring-opening polymerization was found to have a detrimental effect on both the electrical and thermal conductivities of the nanocomposites.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Financial density forecasts: A comprehensive comparison of risk-neutral and historical schemes
We investigate the forecasting ability of the most commonly used benchmarks in financial economics. We address the usual caveats of probabilistic forecasting studies (small samples, limited models, and non-holistic validations) by performing a comprehensive comparison of 15 predictive schemes over a period of more than 21 years. All densities are evaluated in terms of their statistical consistency, local accuracy, and forecasting errors. Using a new composite indicator, the Integrated Forecast Score (IFS), we show that risk-neutral densities outperform historical-based predictions in terms of information content. We find that the Variance Gamma model generates the highest out-of-sample likelihood of observed prices and the lowest predictive errors, whereas the ARCH-based GJR-FHS delivers the most consistent forecasts across the entire density range. In contrast, lognormal densities, the Heston model, and the Breeden-Litzenberger formula yield biased predictions and are rejected in statistical tests.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=1
Deep Room Recognition Using Inaudible Echos
Recent years have seen an increasing need for location awareness in mobile applications. This paper presents a room-level indoor localization approach based on the measured room echoes in response to a two-millisecond single-tone inaudible chirp emitted by a smartphone's loudspeaker. Unlike other acoustics-based room recognition systems that record full-spectrum audio for up to ten seconds, our approach records audio in a narrow inaudible band for 0.1 seconds only, to preserve the user's privacy. However, the short-time, narrowband audio signal carries limited information about the room's characteristics, presenting challenges to accurate room recognition. This paper applies deep learning to effectively capture the subtle fingerprints in the rooms' acoustic responses. Our extensive experiments show that a two-layer convolutional neural network fed with the spectrogram of the inaudible echoes achieves the best performance, compared with alternative designs using other raw data formats and deep models. Based on this result, we design a RoomRecognize cloud service and its mobile client library, which enable mobile application developers to readily implement room recognition functionality without resorting to any existing infrastructure or add-on hardware. Extensive evaluation shows that RoomRecognize achieves 99.7%, 97.7%, 99%, and 89% accuracy in differentiating 22 and 50 residential/office rooms, 19 spots in a quiet museum, and 15 spots in a crowded museum, respectively. Compared with state-of-the-art approaches based on support vector machines, RoomRecognize significantly improves the Pareto frontier of recognition accuracy versus robustness against interfering sounds (e.g., ambient music).
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
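A minimal sketch of the acoustic front end described in the abstract above: emit a short single-tone inaudible probe, record for 0.1 s, and keep only a narrow high-frequency band of the log-spectrogram as input features for a CNN. The 48 kHz sample rate, the 20 kHz tone, and the noise stand-in for the room response are assumptions, not values from the paper.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 48_000                              # assumed sample rate
f0 = 20_000                              # assumed inaudible probe frequency
t = np.arange(int(0.002 * fs)) / fs
probe = np.sin(2 * np.pi * f0 * t)       # 2 ms single-tone chirp

# Stand-in for the recorded 0.1 s room response: probe followed by noise.
echo = np.concatenate([probe, np.zeros(int(0.098 * fs))])
echo += 0.01 * np.random.randn(echo.size)

f, tt, S = spectrogram(echo, fs=fs, nperseg=256, noverlap=128)
band = (f > 18_000) & (f < 22_000)       # keep only the narrow inaudible band
features = np.log(S[band] + 1e-12)       # log-spectrogram patch fed to the CNN
```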
Linguistic Diversities of Demographic Groups in Twitter
The massive popularity of online social media provides a unique opportunity for researchers to study the linguistic characteristics and patterns of users' interactions. In this paper, we provide an in-depth characterization of language usage across demographic groups in Twitter. In particular, we extract the gender and race of Twitter users located in the U.S. using advanced image processing algorithms from Face++. Then, we investigate how demographic groups (i.e., male/female, Asian/Black/White) differ in terms of linguistic styles and interests. We extract linguistic features from six categories (affective attributes, cognitive attributes, lexical density and awareness, temporal references, social and personal concerns, and interpersonal focus) in order to identify the similarities and differences between groups on each set of writing attributes. In addition, we compute the absolute ranking difference of top phrases between demographic groups. As a further dimension of diversity, we also use the topics of interest that we retrieve from each user. Our analysis unveils clear differences in the writing styles (and topics of interest) of the different demographic groups, with variation seen across both gender and race lines. We hope our effort can stimulate the development of new studies related to demographic information in the online space.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
And That's A Fact: Distinguishing Factual and Emotional Argumentation in Online Dialogue
We investigate the characteristics of factual and emotional argumentation styles observed in online debates. Using an annotated set of "factual" and "feeling" debate forum posts, we extract patterns that are highly correlated with factual and emotional arguments, and then apply a bootstrapping methodology to find new patterns in a larger pool of unannotated forum posts. This process automatically produces a large set of patterns representing linguistic expressions that are highly correlated with factual and emotional language. Finally, we analyze the most discriminating patterns to better understand the defining characteristics of factual and emotional arguments.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
A conservative sharp-interface method for compressible multi-material flows
In this paper we develop a conservative sharp-interface method dedicated to simulating multiple compressible fluids. Numerical treatments for a cut cell shared by more than two materials are proposed. First, we simplify the interface interaction inside such a cell with a reduced model to avoid explicit interface reconstruction and complex flux calculation. Second, conservation is strictly preserved by an efficient conservation correction procedure for the cut cell. To improve the robustness, a multi-material scale separation model is developed to consistently remove non-resolved interface scales. In addition, the multi-resolution method and local time-stepping scheme are incorporated into the proposed multi-material method to speed up the high-resolution simulations. Various numerical test cases, including the multi-material shock tube problem, inertial confinement fusion implosion, triple-point shock interaction and shock interaction with multi-material bubbles, show that the method is suitable for a wide range of complex compressible multi-material flows.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Complexity of products: the effect of data regularisation
Among several developments, the field of Economic Complexity (EC) has notably seen the introduction of two new techniques. One is the Bootstrapped Selective Predictability Scheme (SPSb), which can provide quantitative forecasts of the Gross Domestic Product of countries. The other, Hidden Markov Model (HMM) regularisation, denoises the datasets typically employed in the literature. We contribute to EC along three different directions. First, we prove the convergence of the SPSb algorithm to a well-known statistical learning technique, Nadaraya-Watson kernel regression. The latter has significantly lower time complexity, produces deterministic results, and is interchangeable with SPSb for the purpose of making predictions. Second, we study the effects of HMM regularisation on the Product Complexity and logPRODY metrics, for which a model of time evolution has recently been proposed. We confirm the original interpretation of the logPRODY model as describing changes in the global market structure of products, and we obtain new insights that allow a new interpretation of the Complexity measure, for which we propose a modification. Third, we explore new effects of regularisation on the data. We find that it reduces noise, and we observe for the first time that it increases nestedness in the export-network adjacency matrix.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=0, q-fin=1
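Since the abstract above reduces SPSb to Nadaraya-Watson kernel regression, a minimal NumPy sketch of that estimator (Gaussian kernel, one-dimensional inputs) may help; the bandwidth is a free parameter left to the user.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson kernel regression with a Gaussian kernel."""
    d = x_query[:, None] - x_train[None, :]   # pairwise differences
    w = np.exp(-0.5 * (d / bandwidth) ** 2)   # kernel weights
    return (w @ y_train) / w.sum(axis=1)      # locally weighted average

x = np.random.rand(200)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.randn(200)
print(nadaraya_watson(x, y, np.array([0.25, 0.5]), bandwidth=0.05))
```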
A review on applications of two-dimensional materials in surface enhanced Raman spectroscopy
Two-dimensional (2D) materials, such as graphene and MoS2, have been attracting wide interest in surface-enhanced Raman spectroscopy (SERS). This perspective gives an overview of recent developments in the application of 2D materials in SERS, focusing on the use of bare 2D materials and metal/2D-material hybrid substrates for Raman enhancement. The Raman enhancement mechanisms of 2D materials are also discussed. The progress covered herein shows great promise for the widespread adoption of 2D materials in SERS applications.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A variety of elastic anomalies in orbital-active nearly-itinerant cobalt vanadate spinel
We perform ultrasound velocity measurements on a single crystal of nearly-metallic spinel Co$_{1.21}$V$_{1.79}$O$_4$ which exhibits a ferrimagnetic phase transition at $T_C \sim$ 165 K. The experiments reveal a variety of elastic anomalies in not only the paramagnetic phase above $T_C$ but also the ferrimagnetic phase below $T_C$, which should be driven by the nearly-itinerant character of the orbitally-degenerate V 3$d$ electrons. In the paramagnetic phase above $T_C$, the elastic moduli exhibit elastic-mode-dependent unusual temperature variations, suggesting the existence of a dynamic spin-cluster state. Furthermore, above $T_C$, the sensitive magnetic-field response of the elastic moduli suggests that, with the negative magnetoresistance, the magnetic-field-enhanced nearly-itinerant character of the V 3$d$ electrons emerges from the spin-cluster state. This should be triggered by the inter-V-site interactions acting on the orbitally-degenerate 3$d$ electrons. In the ferrimagnetic phase below $T_C$, the elastic moduli exhibit distinct anomalies at $T_1\sim$ 95 K and $T_2\sim$ 50 K, with a sign change of the magnetoresistance at $T_1$ (positive below $T_1$) and an enhancement of the positive magnetoresistance below $T_2$, respectively. These observations below $T_C$ suggest the successive occurrence of an orbital glassy order at $T_1$ and a structural phase transition at $T_2$, where the rather localized character of the V 3$d$ electrons evolves below $T_1$ and is further enhanced below $T_2$.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Anomaly Detection using One-Class Neural Networks
We propose a one-class neural network (OC-NN) model to detect anomalies in complex data sets. OC-NN combines the ability of deep networks to extract a progressively richer representation of the data with the one-class objective of creating a tight envelope around normal data. The OC-NN approach breaks new ground for the following crucial reason: the data representation in the hidden layer is driven by the OC-NN objective and is thus customized for anomaly detection. This is a departure from other approaches, which use a hybrid strategy of learning deep features with an autoencoder and then feeding the features into a separate anomaly detection method such as the one-class SVM (OC-SVM). The hybrid OC-SVM approach is sub-optimal because it is unable to influence representation learning in the hidden layers. A comprehensive set of experiments demonstrates that on complex data sets (such as CIFAR and GTSRB), OC-NN performs on par with state-of-the-art methods and outperforms conventional shallow methods in some scenarios.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
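A minimal NumPy sketch of a one-class objective in the spirit of OC-NN: a hinge penalty pushes the scores of normal points above a learned threshold r, while weight decay keeps the envelope tight. This illustrates the idea only and is not necessarily the paper's exact formulation; in practice the loss would be minimized with SGD, alternating with an update of r (e.g., to a quantile of the scores).

```python
import numpy as np

def ocnn_loss(W, V, r, X, nu=0.1):
    """One-class objective (sketch): envelope around normal data only."""
    h = np.maximum(0.0, X @ V)           # hidden layer with ReLU activation
    scores = h @ W                       # scalar score per sample
    hinge = np.maximum(0.0, r - scores)  # penalize points falling below r
    reg = 0.5 * (np.sum(W ** 2) + np.sum(V ** 2))
    return reg + hinge.mean() / nu - r   # small loss => tight envelope
```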
Low Cost, Open-Source Testbed to Enable Full-Sized Automated Vehicle Research
An open-source vehicle testbed to enable the exploration of automation technologies for road vehicles is presented. The platform hardware and software, based on the Robot Operating System (ROS), are detailed. Two methods are discussed for enabling the remote control of a vehicle (in this case, an electric 2013 Ford Focus). The first approach used digital filtering of Controller Area Network (CAN) messages. In the test vehicle, this approach allowed for the control of acceleration from a tap-point on the CAN bus and the OBD-II port. The second approach, based on the emulation of the analog output(s) of a vehicle's accelerator pedal, brake pedal, and steering torque sensors, is more generally applicable and, in the test vehicle, allowed for full control of vehicle acceleration, braking, and steering. To demonstrate the utility of the testbed for vehicle automation research, system identification was performed on the test vehicle, and speed and steering controllers were designed to allow the vehicle to follow a predetermined path. The resulting system was shown to be differentially flat, and a high-level path-following algorithm was developed using the differential flatness properties and state feedback. The path-following algorithm is experimentally validated on the automation testbed developed in the paper.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Sub-sampled Cubic Regularization for Non-convex Optimization
We consider the minimization of non-convex functions that typically arise in machine learning. Specifically, we focus our attention on a variant of trust region methods known as cubic regularization. This approach is particularly attractive because it escapes strict saddle points and provides stronger convergence guarantees than first- and second-order methods as well as classical trust region methods. However, it suffers from a high computational complexity that makes it impractical for large-scale learning. Here, we propose a novel method that uses sub-sampling to lower this computational cost. By the use of concentration inequalities we provide a sampling scheme that gives sufficiently accurate gradient and Hessian approximations to retain the strong global and local convergence guarantees of cubically regularized methods. To the best of our knowledge, this is the first work to give global convergence guarantees for a sub-sampled variant of cubic regularization on non-convex functions. Furthermore, we provide experimental results supporting our theory.
Labels: cs=1, phy=0, math=1, stat=1, q-bio=0, q-fin=0
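A sketch of one sub-sampled cubic-regularization step under stated assumptions: `data` is an array of samples, `grad_f` and `hess_f` return per-sample gradients and Hessians, and the cubic model is minimized by plain gradient descent rather than the more careful subproblem solvers used in practice (the gradient of (sigma/3)||s||^3 is sigma ||s|| s).

```python
import numpy as np

def subsampled_cubic_step(grad_f, hess_f, x, data, batch, sigma,
                          inner=100, lr=0.01):
    """One cubic-regularization step with sub-sampled derivatives (sketch)."""
    idx = np.random.choice(len(data), batch, replace=False)
    g = np.mean([grad_f(x, d) for d in data[idx]], axis=0)  # sampled gradient
    H = np.mean([hess_f(x, d) for d in data[idx]], axis=0)  # sampled Hessian
    s = np.zeros_like(x)
    for _ in range(inner):  # gradient descent on the cubic model m(s)
        m_grad = g + H @ s + sigma * np.linalg.norm(s) * s
        s -= lr * m_grad
    return x + s
```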
Emergence of Seismic Metamaterials: Current State and Future Perspectives
Following the advent of electromagnetic metamaterials at the turn of the century, researchers working in other areas of wave physics have translated concepts of electromagnetic metamaterials to acoustics, elastodynamics, and heat, mass and light diffusion processes. In elastodynamics, seismic metamaterials have emerged in the last decade for soft soils structured at the meter scale, and were tested in full-scale experiments on holey soils five years ago. Born in the soil, seismic metamaterials have grown simultaneously in the field of tuned resonators buried in the soil, around building foundations or near the soil-structure interface, and in the field of above-surface resonators. In this perspective article, we briefly recall the research advances made in all these types of seismic metamaterials, and we further draw up an inventory of which material parameters can be achieved and which cannot, notably from the effective-medium-theory perspective. We finally envision perspectives on future developments: large-scale auxetic metamaterials for building foundations, forests of trees for seismic protection, and metamaterial-like transformed urbanism at the city scale.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Case studies in network community detection
Community structure describes the organization of a network into subgraphs that contain a prevalence of edges within each subgraph and relatively few edges across boundaries between subgraphs. The development of community-detection methods has occurred across disciplines, with numerous and varied algorithms proposed to find communities. As we present in this Chapter via several case studies, community detection is not just an "end game" unto itself, but rather a step in the analysis of network data which is then useful for furthering research in the disciplinary domain of interest. These case-study examples arise from diverse applications, ranging from social and political science to neuroscience and genetics, and we have chosen them to demonstrate key aspects of community detection and to highlight that community detection, in practice, should be directed by the application at hand.
Labels: cs=1, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Multi-particle instability in a spin-imbalanced Fermi gas
Weak attractive interactions in a spin-imbalanced Fermi gas induce a multi-particle instability, binding multiple fermions together. The maximum binding energy per particle is achieved when the ratio of the number of up- and down-spin particles in the instability is equal to the ratio of the up- and down-spin densities of states in momentum at the Fermi surfaces, to utilize the variational freedom of all available momentum states. We derive this result using an analytical approach, and verify it using exact diagonalization. The multi-particle instability extends the Cooper pairing instability of balanced Fermi gases to the imbalanced case, and could form the basis of a many-body state, analogously to the construction of the Bardeen-Cooper-Schrieffer theory of superconductivity out of Cooper pairs.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Weighted boundedness of maximal functions and fractional Bergman operators
The aim of this paper is to study two-weight norm inequalities for fractional maximal functions and fractional Bergman operators defined on the upper half-space. Namely, we characterize those pairs of weights for which these maximal operators satisfy strong- and weak-type inequalities. Our characterizations are in terms of Sawyer- and Békollé-Bonami-type conditions. We also obtain a $\Phi$-bump characterization for these maximal functions, where $\Phi$ is an Orlicz function. As a consequence, we obtain two-weight norm inequalities for fractional Bergman operators. Finally, we provide some sharp weighted inequalities for the fractional maximal functions.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
A Framework for Algorithm Stability
We say that an algorithm is stable if small changes in the input result in small changes in the output. This kind of algorithm stability is particularly relevant when analyzing and visualizing time-varying data. Stability in general plays an important role in a wide variety of areas, such as numerical analysis, machine learning, and topology, but is poorly understood in the context of (combinatorial) algorithms. In this paper we present a framework for analyzing the stability of algorithms. We focus in particular on the tradeoff between the stability of an algorithm and the quality of the solution it computes. Our framework allows for three types of stability analysis with increasing degrees of complexity: event stability, topological stability, and Lipschitz stability. We demonstrate the use of our stability framework by applying it to kinetic Euclidean minimum spanning trees.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
A Survey of Question Answering for Math and Science Problems
The Turing test was long considered the measure of artificial intelligence. But with the advances in AI, it has proved to be an insufficient measure. We can now aim to measure machine intelligence the way we measure human intelligence. One widely accepted measure of intelligence is the standardized math and science test. In this paper, we explore the progress made towards the goal of making a machine smart enough to pass such standardized tests. We examine the challenges and opportunities posed by the domain, and note that we are quite some way from actually making a system even as smart as a middle school student.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Thermal and structural properties of iron at high pressure by molecular dynamics
We investigate the basic thermal, mechanical and structural properties of body-centred cubic iron ($\alpha$-Fe) at several temperatures and under positive loading by means of molecular dynamics simulations in conjunction with the embedded-atom method potential and its modified counterpart. Its thermal properties, such as the average energy and density of atoms and the transport sound velocities at finite temperatures and pressures, are studied in detail as well. Moreover, we suggest how to obtain the hexagonal close-packed structure ($\varepsilon$-phase) of this metal under positive loading: one can sufficiently increase the pressure of the simulated system over several temperature ranges, and these structural changes depend only on the potential type used. The ensuing structures are studied via pair radial distribution functions (PRDF) and the common-neighbour analysis (CNA) method.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Interacting Fields and Flows: Magnetic Hot Jupiters
We present magnetohydrodynamic (MHD) simulations of the magnetic interactions between a solar-type star and short-period hot Jupiter exoplanets, using the publicly available MHD code PLUTO. It has been predicted that emission due to magnetic interactions, such as the electron cyclotron maser instability (ECMI), will be observable. In our simulations, a planetary outflow, due to UV evaporation of the exoplanet's atmosphere, results in the build-up of circumplanetary material. We predict the ECMI emission and determine that it is prevented from escaping from the system: the evaporated material leads to a high plasma frequency in the vicinity of the planet, which inhibits the ECMI process.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
From which world is your graph?
Discovering statistical structure from links is a fundamental problem in the analysis of social networks. Choosing a misspecified model, or equivalently, an incorrect inference algorithm will result in an invalid analysis or even falsely uncover patterns that are in fact artifacts of the model. This work focuses on unifying two of the most widely used link-formation models: the stochastic blockmodel (SBM) and the small world (or latent space) model (SWM). Integrating techniques from kernel learning, spectral graph theory, and nonlinear dimensionality reduction, we develop the first statistically sound polynomial-time algorithm to discover latent patterns in sparse graphs for both models. When the network comes from an SBM, the algorithm outputs a block structure. When it is from an SWM, the algorithm outputs estimates of each node's latent position.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Topological and Hodge L-Classes of Singular Covering Spaces and Varieties with Trivial Canonical Class
The signature of closed oriented manifolds is well-known to be multiplicative under finite covers. This fails for Poincaré complexes as examples of C. T. C. Wall show. We establish the multiplicativity of the signature, and more generally, the topological L-class, for closed oriented stratified pseudomanifolds that can be equipped with a middle-perverse Verdier self-dual complex of sheaves, determined by Lagrangian sheaves along strata of odd codimension (so-called L-pseudomanifolds). This class of spaces contains all Witt spaces and thus all pure-dimensional complex algebraic varieties. We apply this result in proving the Brasselet-Schürmann-Yokura conjecture for normal complex projective 3-folds with at most canonical singularities, trivial canonical class and positive irregularity. The conjecture asserts the equality of topological and Hodge L-class for compact complex algebraic rational homology manifolds.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Identities for the shifted harmonic numbers and binomial coefficients
We develop new closed-form representations of sums of $(n+\alpha)$th shifted harmonic numbers and reciprocal binomial coefficients in terms of $\alpha$th shifted harmonic numbers. Some interesting new consequences and illustrative examples are considered.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
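The shifted harmonic numbers in the abstract above can be evaluated through the digamma function via the standard identity H_x = psi(x+1) + gamma; a small mpmath check against the defining recurrence H_{n+alpha} = H_alpha + sum_{k=1}^n 1/(alpha+k):

```python
from mpmath import digamma, euler, mpf

def shifted_harmonic(n, alpha):
    """H_{n+alpha} = psi(n + alpha + 1) + gamma."""
    return digamma(n + alpha + 1) + euler

alpha, n = mpf('0.5'), 10
direct = shifted_harmonic(0, alpha) + sum(1 / (alpha + k) for k in range(1, n + 1))
print(shifted_harmonic(n, alpha), direct)  # the two values agree
```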
Mechanism of light energy transport in the avian retina
We studied intermediate filaments (IFs) in the retina of the pied flycatcher (Ficedula hypoleuca) in the foveolar zone. Single IFs span Müller cells (MC) lengthwise; cylindrical bundles of IFs (IFBs) appear inside the cone inner segment (CIS) at the level of the outer limiting membrane (OLM). IFBs adjoin the cone cytoplasmic membrane, following it lengthwise at regular spacing and forming a skeleton of the CIS located above the OLM. IFBs continue along the cone outer segment (COS), with single IFs separating from the IFB, touching and entering in between the light-sensitive disks of the cone membrane. We propose a mechanism of exciton transfer from the inner retinal surface to the visual pigments in the photoreceptor cells. It involves excitation transfer in donor-acceptor systems, from the IF donors to the rhodopsin acceptors, with a theoretical efficiency over 80%. This explains the high image contrast in the fovea and foveola in daylight, while the classical mechanism, which describes Müller cells as optical lightguides, operates in night vision, where resolution is traded for sensitivity. Our theory receives strong confirmation from the morphology and function of the cones and pigment cells. In daylight, the lateral surface of the photosensor disks is blocked from scattered or oblique light by the pigment cells. Thus, light energy can reach the cone only via intermediate filaments, which absorb photons in the Müller cell endfeet and conduct excitons to the cone. Accordingly, the disks are consumed at their lateral surfaces, moving to the apex of the cone, with new disks produced below. The alternative hypothesis of light passing directly through the cone with its organelles and hitting the lowest disk contradicts the morphological evidence, as all the other disks would then have no useful function in daylight vision.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
The Molecular Structures of Local Arm and Perseus Arm in the Galactic Region of l=[139.75,149.75]$^\circ$, b=[-5.25,5.25]$^\circ$
Using the Purple Mountain Observatory Delingha (PMODLH) 13.7 m telescope, we report a 96-square-degree $^{12}$CO/$^{13}$CO/C$^{18}$O mapping observation toward the Galactic region of l = [139.75, 149.75]$^\circ$, b = [-5.25, 5.25]$^\circ$. The molecular structures of the Local Arm and the Perseus Arm are presented. Combining HI data and part of the Outer Arm results, we find that the warp structure is evident in both atomic and molecular gas, while the flare structure exists only in atomic gas in this observing region. In addition, five filamentary giant molecular clouds on the Perseus Arm are identified, four of which are newly identified. Their relations to the large-scale structure of the Milky Way are discussed.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
The Theory is Predictive, but is it Complete? An Application to Human Perception of Randomness
When we test a theory using data, it is common to focus on correctness: do the predictions of the theory match what we see in the data? But we also care about completeness: how much of the predictable variation in the data is captured by the theory? This question is difficult to answer, because in general we do not know how much "predictable variation" there is in the problem. In this paper, we consider approaches motivated by machine learning algorithms as a means of constructing a benchmark for the best attainable level of prediction. We illustrate our methods on the task of predicting human-generated random sequences. Relative to an atheoretical machine learning algorithm benchmark, we find that existing behavioral models explain roughly 15 percent of the predictable variation in this problem. This fraction is robust across several variations on the problem. We also consider a version of this approach for analyzing field data from domains in which human perception and generation of randomness has been used as a conceptual framework; these include sequential decision-making and repeated zero-sum games. In these domains, our framework for testing the completeness of theories provides a way of assessing their effectiveness over different contexts; we find that despite some differences, the existing theories are fairly stable across our field domains in their performance relative to the benchmark. Overall, our results indicate that (i) there is a significant amount of structure in this problem that existing models have yet to capture and (ii) there are rich domains in which machine learning may provide a viable approach to testing completeness.
Labels: cs=1, phy=0, math=0, stat=1, q-bio=0, q-fin=0
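A sketch of the benchmark idea under stated assumptions: an off-the-shelf learner predicts the next symbol of human-generated binary sequences from the last k symbols, and its cross-validated accuracy serves as the attainable ceiling against which a theory's accuracy is scored. The function names and choice of learner are illustrative, not from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def ml_benchmark(seqs, k=3):
    """Cross-validated accuracy of an atheoretical next-symbol predictor."""
    X, y = [], []
    for s in seqs:
        for i in range(k, len(s)):
            X.append(s[i - k:i])   # last k symbols as features
            y.append(s[i])         # next symbol as target
    return cross_val_score(GradientBoostingClassifier(),
                           np.array(X), np.array(y), cv=5).mean()

def completeness(acc_theory, acc_naive, acc_ml):
    # 0 = no better than the naive baseline, 1 = matches the ML benchmark.
    return (acc_theory - acc_naive) / (acc_ml - acc_naive)
```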
Clustering with Statistical Error Control
This paper presents a clustering approach that allows for rigorous statistical error control similar to a statistical test. We develop estimators for both the unknown number of clusters and the clusters themselves. The estimators depend on a tuning parameter alpha which is similar to the significance level of a statistical hypothesis test. By choosing alpha, one can control the probability of overestimating the true number of clusters, while the probability of underestimation is asymptotically negligible. In addition, the probability that the estimated clusters differ from the true ones is controlled. In the theoretical part of the paper, formal versions of these statements on statistical error control are derived in a standard model setting with convex clusters. A simulation study and two applications to temperature and gene expression microarray data complement the theoretical analysis.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
DNA Base Pair Mismatches Induce Structural Changes and Alter the Free Energy Landscape of Base Flip
Double-stranded DNA may contain mismatched base pairs beyond the Watson-Crick pairs guanine-cytosine and adenine-thymine. Such mismatches bear adverse consequences for human health. We utilize molecular dynamics and metadynamics computer simulations to study the equilibrium structure and dynamics for both matched and mismatched base pairs. We discover significant differences between matched and mismatched pairs in structure, hydrogen bonding, and base flip work profiles. Mismatched pairs shift further in the plane normal to the DNA strand and are more likely to exhibit non-canonical structures, including the e-motif. We discuss potential implications on mismatch repair enzymes' detection of DNA mismatches.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=1, q-fin=0
Average sampling and average splines on combinatorial graphs
In the setting of a weighted combinatorial finite or infinite countable graph $G$, we introduce functional Paley-Wiener spaces $PW_{\omega}(L),\>\omega>0,$ defined in terms of the spectral resolution of the combinatorial Laplace operator $L$ in the space $L_{2}(G)$. It is shown that functions in certain $PW_{\omega}(L),\>\omega>0,$ are uniquely determined by their averages over some families of "small" subgraphs which form a cover of $G$. Methods for the reconstruction of an $f\in PW_{\omega}(L)$ from an appropriate set of its averages are introduced. One method uses the language of Hilbert frames. Another uses average variational interpolating splines, which are constructed in the setting of combinatorial graphs.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Hilbert isometries and maximal deviation preserving maps on JB-algebras
In this paper we characterize the surjective linear variation-norm isometries on JB-algebras. Variation-norm isometries are precisely the maps that preserve the maximal deviation, the quantum analogue of the standard deviation, which plays an important role in quantum statistics. As a consequence, we characterize the isometries of Hilbert's metric on cones in JB-algebras.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Visual Entailment: A Novel Task for Fine-Grained Image Understanding
Existing visual reasoning datasets, such as Visual Question Answering (VQA), often suffer from biases conditioned on the question, image, or answer distributions. The recently proposed CLEVR dataset addresses these limitations and requires fine-grained reasoning, but the dataset is synthetic and consists of similar objects and sentence structures throughout. In this paper, we introduce a new inference task, Visual Entailment (VE), consisting of image-sentence pairs in which the premise is defined by an image, rather than by a natural language sentence as in traditional Textual Entailment tasks. The goal of a trained VE model is to predict whether the image semantically entails the text. To realize this task, we build a dataset, SNLI-VE, based on the Stanford Natural Language Inference corpus and the Flickr30k dataset. We evaluate various existing VQA baselines and build a model called the Explainable Visual Entailment (EVE) system to address the VE task. EVE achieves up to 71% accuracy and outperforms several other state-of-the-art VQA-based models. Finally, we demonstrate the explainability of EVE through cross-modal attention visualizations. The SNLI-VE dataset is publicly available at this https URL necla-ml/SNLI-VE.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
On relations between weak and strong type inequalities for maximal operators on non-doubling metric measure spaces
In this article we characterize all possible cases that may occur in the relations between the sets of $p$ for which weak type $(p,p)$ and strong type $(p,p)$ inequalities for the Hardy--Littlewood maximal operators, both centered and non-centered, hold in the context of general metric measure spaces.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Random Caching Based Cooperative Transmission in Heterogeneous Wireless Networks
Base station cooperation in heterogeneous wireless networks (HetNets) is a promising approach to improving network performance, but it also imposes a significant challenge on the backhaul. On the other hand, caching at small base stations (SBSs) is considered an efficient way to reduce backhaul load in HetNets. In this paper, we jointly consider SBS caching and cooperation in a downlink large-scale HetNet. We propose two SBS cooperative transmission schemes under random caching at SBSs, with the caching distribution as a design parameter. Using tools from stochastic geometry and adopting appropriate integral transformations, we first derive a tractable expression for the successful transmission probability under each scheme. Then, under each scheme, we maximize the successful transmission probability by optimizing the caching distribution, which is a challenging optimization problem with a non-convex objective function. By exploring optimality properties and using optimization techniques, we obtain, under each scheme, a locally optimal solution in the general case and globally optimal solutions in some special cases. Compared with existing caching designs in the literature, e.g., the most-popular caching, the i.i.d. caching, and the uniform caching, the optimal random caching under each scheme achieves a better successful transmission probability. The analysis and optimization results provide valuable design insights for practical HetNets.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Electronic characteristics of ultrathin SrRuO$_3$ films and their relationship with the metal$-$insulator transition
SrRuO$_3$ (SRO) films are known to exhibit insulating behavior as their thickness approaches four unit cells. We employ electron energy-loss (EEL) spectroscopy to probe the spatially resolved electronic structures of both insulating and conducting SRO and to correlate them with the metal-insulator transition (MIT). Importantly, the central layer of the ultrathin insulating film exhibits features distinct from those of metallic SRO. Moreover, EEL near-edge spectra adjacent to the SrTiO$_3$ (STO) substrate or to the capping layer are remarkably similar to those of STO. The site-projected density of states based on density functional theory (DFT) partially reflects the characteristics of the spectra of these layers. These results may provide important information on the possible influence of STO on the electronic states of ultrathin SRO.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Solution properties of a 3D stochastic Euler fluid equation
We prove local well-posedness in regular spaces and a Beale-Kato-Majda blow-up criterion for a recently derived stochastic model of the 3D Euler fluid equation for incompressible flow. This model describes incompressible fluid motions whose Lagrangian particle paths follow a stochastic process with cylindrical noise and also satisfy Newton's 2nd Law in every Lagrangian domain.
Labels: cs=0, phy=1, math=1, stat=0, q-bio=0, q-fin=0
Existence of locally maximally entangled quantum states via geometric invariant theory
We study a question which has natural interpretations in both quantum mechanics and geometry. Let $V_1,..., V_n$ be complex vector spaces of dimension $d_1,...,d_n$ and let $G= SL_{d_1} \times \dots \times SL_{d_n}$. Geometrically, we ask: given $(d_1,...,d_n)$, when is the geometric invariant theory quotient $\mathbb{P}(V_1 \otimes \dots \otimes V_n)// G$ non-empty? This is equivalent to the quantum mechanical question of whether the multipartite quantum system with Hilbert space $V_1\otimes \dots \otimes V_n$ has a locally maximally entangled state, i.e. a state such that the density matrix for each elementary subsystem is a multiple of the identity. We show that the answer to this question is yes if and only if $R(d_1,...,d_n)\geqslant 0$ where \[ R(d_1,...,d_n) = \prod_i d_i +\sum_{k=1}^n (-1)^k \sum_{1\leq i_1<\dotsb <i_k\leq n} (\gcd(d_{i_1},\dotsc ,d_{i_k}) )^{2}. \] We also provide a simple recursive algorithm which determines the answer to the question, and we compute the dimension of the resulting quotient in the non-empty cases.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
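The criterion stated in the abstract is directly computable; a small Python implementation of R(d_1,...,d_n) via inclusion-exclusion over gcds:

```python
from itertools import combinations
from functools import reduce
from math import gcd, prod

def R(dims):
    """R(d_1,...,d_n) from the abstract: an LME state exists iff R >= 0."""
    total = prod(dims)
    for k in range(1, len(dims) + 1):
        for sub in combinations(dims, k):           # subsets of subsystems
            total += (-1) ** k * reduce(gcd, sub) ** 2
    return total

print(R((2, 2)))  # 4 - 4 - 4 + 4 = 0 >= 0: two qubits admit Bell states
print(R((2, 3)))  # 6 - 4 - 9 + 1 = -6 < 0: no LME state for qubit x qutrit
```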
Convergence of the Expectation-Maximization Algorithm Through Discrete-Time Lyapunov Stability Theory
In this paper, we propose a dynamical systems perspective of the Expectation-Maximization (EM) algorithm. More precisely, we can analyze the EM algorithm as a nonlinear state-space dynamical system. The EM algorithm is widely adopted for data clustering and density estimation in statistics, control systems, and machine learning. This algorithm belongs to a large class of iterative algorithms known as proximal point methods. In particular, we re-interpret limit points of the EM algorithm and other local maximizers of the likelihood function it seeks to optimize as equilibria in its dynamical system representation. Furthermore, we propose to assess its convergence as asymptotic stability in the sense of Lyapunov. As a consequence, we proceed by leveraging recent results regarding discrete-time Lyapunov stability theory in order to establish asymptotic stability (and thus, convergence) in the dynamical system representation of the EM algorithm.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
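To make the fixed-point reading concrete, a minimal sketch: EM for a two-component Gaussian mixture with unit variances, written as a map theta -> F(theta) whose limit points are exactly the equilibria discussed above. The model choice is illustrative.

```python
import numpy as np

def em_step(theta, x):
    """One EM update for a 2-component 1D Gaussian mixture (unit variances)."""
    pi, mu1, mu2 = theta
    p1 = pi * np.exp(-0.5 * (x - mu1) ** 2)
    p2 = (1 - pi) * np.exp(-0.5 * (x - mu2) ** 2)
    w = p1 / (p1 + p2)  # E-step: responsibilities
    return w.mean(), (w @ x) / w.sum(), ((1 - w) @ x) / (1 - w).sum()  # M-step

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
theta = (0.3, -1.0, 1.0)
for _ in range(50):
    new = em_step(theta, x)
    # ||F(theta) - theta|| shrinking to 0 signals approach to an equilibrium
    theta = new
print(theta)
```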
A Non-monotone Alternating Updating Method for A Class of Matrix Factorization Problems
In this paper we consider a general matrix factorization model which covers a large class of existing models with many applications in areas such as machine learning and imaging sciences. To solve this possibly nonconvex, nonsmooth and non-Lipschitz problem, we develop a non-monotone alternating updating method based on a potential function. Our method essentially updates two blocks of variables in turn by inexactly minimizing this potential function, and updates another auxiliary block of variables using an explicit formula. The special structure of our potential function allows us to take advantage of efficient computational strategies for non-negative matrix factorization to perform the alternating minimization over the two blocks of variables. A suitable line search criterion is also incorporated to improve the numerical performance. Under some mild conditions, we show that the line search criterion is well defined, and establish that the sequence generated is bounded and any cluster point of the sequence is a stationary point. Finally, we conduct some numerical experiments using real datasets to compare our method with some existing efficient methods for non-negative matrix factorization and matrix completion. The numerical results show that our method can outperform these methods for these specific applications.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Fast Rates of ERM and Stochastic Approximation: Adaptive to Error Bound Conditions
Error bound conditions (EBC) are properties that characterize the growth of an objective function as a point moves away from the optimal set. They have recently received increasing attention in the field of optimization for developing optimization algorithms with fast convergence. However, studies of EBC in statistical learning are hitherto still limited. The main contributions of this paper are two-fold. First, we develop fast and intermediate rates of empirical risk minimization (ERM) under EBC for risk minimization with Lipschitz continuous and smooth convex random functions. Second, we establish fast and intermediate rates of an efficient stochastic approximation (SA) algorithm for risk minimization with Lipschitz continuous random functions, which requires only one pass over $n$ samples and adapts to EBC. For both approaches, the convergence rates span a full spectrum between $\widetilde O(1/\sqrt{n})$ and $\widetilde O(1/n)$ depending on the power constant in EBC, and could be even faster than $O(1/n)$ in special cases for ERM. Moreover, these convergence rates are automatically adaptive without using any knowledge of EBC. Overall, this work not only strengthens the understanding of ERM for statistical learning but also brings new fast stochastic algorithms for solving a broad range of statistical learning problems.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
Combining the $k$-CNF and XOR Phase-Transitions
The runtime performance of modern SAT solvers on random $k$-CNF formulas is deeply connected with the 'phase-transition' phenomenon seen empirically in the satisfiability of random $k$-CNF formulas. Recent universal hashing-based approaches to sampling and counting crucially depend on the runtime performance of SAT solvers on formulas expressed as the conjunction of both $k$-CNF and XOR constraints (known as $k$-CNF-XOR formulas), but the behavior of random $k$-CNF-XOR formulas is unexplored in prior work. In this paper, we present the first study of the satisfiability of random $k$-CNF-XOR formulas. We show empirical evidence of a surprising phase-transition that follows a linear trade-off between $k$-CNF and XOR constraints. Furthermore, we prove that a phase-transition for $k$-CNF-XOR formulas exists for $k = 2$ and (when the number of $k$-CNF constraints is small) for $k > 2$.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
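A small sketch for exploring the empirical phase transition: draw random k-CNF-XOR formulas and brute-force their satisfiability (feasible only for small n); sweeping the clause and XOR densities and averaging `satisfiable` over many draws traces the transition. Details such as the XOR length distribution are assumptions.

```python
import itertools
import random

def random_kcnf_xor(n, k, n_cnf, n_xor, rng=random.Random(0)):
    """Random k-CNF clauses plus random XOR constraints over n variables."""
    cnf = [[rng.choice([v, -v]) for v in rng.sample(range(1, n + 1), k)]
           for _ in range(n_cnf)]
    xor = [(rng.sample(range(1, n + 1), rng.randint(1, n)), rng.randint(0, 1))
           for _ in range(n_xor)]
    return cnf, xor

def satisfiable(n, cnf, xor):
    """Brute-force satisfiability check, feasible only for small n."""
    for bits in itertools.product([0, 1], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in cnf) and \
           all(sum(bits[v - 1] for v in vs) % 2 == b for vs, b in xor):
            return True
    return False
```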
Advertising and Brand Attitudes: Evidence from 575 Brands over Five Years
Little is known about how different types of advertising affect brand attitudes. We investigate the relationships between three brand attitude variables (perceived quality, perceived value and recent satisfaction) and three types of advertising (national traditional, local traditional and digital). The data represent ten million brand attitude surveys and $264 billion spent on ads by 575 regular advertisers over a five-year period, approximately 37% of all ad spend measured between 2008 and 2012. Inclusion of brand/quarter fixed effects and industry/week fixed effects brings parameter estimates closer to expectations without major reductions in estimation precision. The findings indicate that (i) national traditional ads increase perceived quality, perceived value, and recent satisfaction; (ii) local traditional ads increase perceived quality and perceived value; (iii) digital ads increase perceived value; and (iv) competitor ad effects are generally negative.
Labels: cs=0, phy=0, math=0, stat=0, q-bio=0, q-fin=1
NMR studies of the topological insulator Bi2Te3
Te NMR studies were carried out on the bismuth telluride topological insulator over a wide temperature range, from room temperature down to 12.5 K. The measurements were made on a Bruker Avance 400 pulse spectrometer. NMR spectra were collected for a mortar-and-pestle powder sample and for single-crystalline stacks oriented with c parallel and perpendicular to the field. An activation energy responsible for the thermally activated behavior was estimated. The spectra for the stack with c parallel to the field showed peculiar behavior below 91 K.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
The Gibbons-Hawking ansatz over a wedge
We discuss the Ricci-flat `model metrics' on $\mathbb{C}^2$ with cone singularities along the conic $\{zw=1\}$ constructed by Donaldson using the Gibbons-Hawking ansatz over wedges in $\mathbb{R}^3$. In particular we describe their asymptotic behavior at infinity and compute their energies.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Beacon-referenced Mutual Pursuit in Three Dimensions
Motivated by station-keeping applications in various unmanned settings, this paper introduces a steering control law for a pair of agents operating in the vicinity of a fixed beacon in a three-dimensional environment. This feedback law is a modification of the previously studied three-dimensional constant bearing (CB) pursuit law, in the sense that it incorporates an additional term to allocate attention to the beacon. We investigate the behavior of the closed-loop dynamics for a two agent mutual pursuit system in which each agent employs the beacon-referenced CB pursuit law with regards to the other agent and a stationary beacon. Under certain assumptions on the associated control parameters, we demonstrate that this problem admits circling equilibria wherein the agents move on circular orbits with a common radius, in planes perpendicular to a common axis passing through the beacon. As the common radius and distances from the beacon are determined by choice of parameters in the feedback law, this approach provides a means to engineer desired formations in a three-dimensional setting.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Increased stability of CuZrAl metallic glasses prepared by physical vapor deposition
We carried out molecular dynamics (MD) simulations using realistic empirical potentials for the vapor deposition (VD) of CuZrAl glasses. VD glasses have higher densities and lower potential and inherent-structure energies than melt-quenched glasses of the same alloys. The optimal substrate temperature for the deposition process is 0.625$\times T_\mathrm{g}$. In VD metallic glasses (MGs), the total number of icosahedral-like clusters is higher than in melt-quenched MGs. Surprisingly, the VD glasses have a lower degree of chemical mixing than the melt-quenched glasses. The reason is that melt-quenched MGs can be viewed as frozen liquids, which means that their chemical order is the same as in the liquid state. In contrast, during the formation of VD MGs, the absence of the liquid state results in a different chemical order, with more Zr-Zr homonuclear bonds than in the melt-quenched MGs. In order to obtain MGs from the melt-quench technique with energies as low as in the VD process, the cooling rate during quenching would have to be many orders of magnitude lower than currently accessible to MD simulations. The method proposed in this manuscript is thus a more efficient way to create such MGs in MD simulations.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
A geometric realization of the $m$-cluster categories of type $\tilde{D_n}$
We show that a subcategory of the $m$-cluster category of type $\tilde{D_n}$ is isomorphic to a category consisting of arcs in an $(n-2)m$-gon with two central $(m-1)$-gons inside of it. We show that the mutation of colored quivers and $m$-cluster-tilting objects is compatible with the flip of an $(m+2)$-angulation. In the final part of this paper, we detail an example of a quiver of type $\tilde{D_7}$.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
The SoLid anti-neutrino detector's readout system
The SoLid collaboration have developed an intelligent readout system that reduces the data rate of their 3200-channel silicon photomultiplier detector by a factor of 10000 whilst maintaining high efficiency for storing data from anti-neutrino interactions. The system employs FPGA-level waveform characterisation to trigger on neutron signals. Following a trigger, data from a space-time region of interest around the neutron are read out using the IPbus protocol. In these proceedings the design of the readout system is explained, and results showing the performance of a prototype version of the system are presented.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
Data hiding in Fingerprint Minutiae Template for Privacy Protection
In this paper, we propose a novel scheme for hiding data in the fingerprint minutiae template, the most popular template format in fingerprint recognition systems. Various embedding strategies are proposed in order to maintain the accuracy of fingerprint recognition as well as the undetectability of the hidden data. In bits-replacement based embedding, we replace the last few bits of each element of the original minutiae template with the data to be hidden. This strategy can be further improved with an optimized bits-replacement based embedding, which minimizes the impact of data hiding on the performance of fingerprint recognition. The third strategy is an order-preserving mechanism proposed to reduce the detectability of the hidden data: with it, it is difficult for an attacker to distinguish a minutiae template carrying hidden data from an original one. The experimental results show that the proposed scheme achieves sufficient capacity for hiding common personal data, while the accuracy of fingerprint recognition remains acceptable after data hiding.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
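A minimal sketch of the bits-replacement strategy described above, assuming integer-valued minutiae triplets (x, y, angle): the nbits least significant bits of each field are overwritten with payload bits, padding with zeros when the payload is exhausted. The optimized and order-preserving variants are not sketched here.

```python
def embed_bits(minutiae, payload_bits, nbits=2):
    """Replace the nbits least significant bits of each minutia field."""
    mask = (1 << nbits) - 1
    bits = iter(payload_bits)
    stego = []
    for x, y, theta in minutiae:
        fields = []
        for v in (x, y, theta):
            val = 0
            for _ in range(nbits):
                val = (val << 1) | next(bits, 0)  # pad with zeros at the end
            fields.append((v & ~mask) | val)      # clear low bits, write data
        stego.append(tuple(fields))
    return stego

print(embed_bits([(120, 87, 45), (30, 200, 310)], [1, 0, 1, 1, 0, 1]))
```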
On asymptotic normality of certain linear rank statistics
We consider asymptotic normality of linear rank statistics under various randomization rules used in clinical trials for allocating patients into treatment and placebo arms. The exposition relies on a general limit theorem due to McLeish (1974), which appears to be well suited to the problem considered and may be employed for other, similar rules not discussed in the paper. Examples of applications include well-known results as well as several new ones.
Labels: cs=0, phy=0, math=1, stat=1, q-bio=0, q-fin=0
Computations with p-adic numbers
This document contains the notes of a lecture I gave at the "Journées Nationales du Calcul Formel" (JNCF) in January 2017. The aim of the lecture was to discuss low-level algorithmics for p-adic numbers. It is divided into two main parts: first, we present various implementations of p-adic numbers and compare them; second, we introduce a general framework for studying precision issues and apply it in several concrete situations.
Labels: cs=1, phy=0, math=1, stat=0, q-bio=0, q-fin=0
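A toy sketch of the fixed-precision representation commonly used when implementing p-adic numbers (sometimes called zealous arithmetic): a p-adic integer is stored as a representative modulo p^N, and ring operations simply reduce modulo p^N. This illustrates the datatype only; the precision analysis in the notes goes much further.

```python
class Padic:
    """p-adic integer known at absolute precision O(p^N) (sketch)."""

    def __init__(self, value, p=5, N=10):
        self.p, self.N = p, N
        self.v = value % p ** N  # canonical representative mod p^N

    def __add__(self, o):
        return Padic(self.v + o.v, self.p, self.N)

    def __mul__(self, o):
        return Padic(self.v * o.v, self.p, self.N)

    def __repr__(self):
        return f"{self.v} + O({self.p}^{self.N})"

x = Padic(-1)        # -1 is represented by p^N - 1, digits (p-1, p-1, ...)
print(x, (x * x).v)  # (-1) * (-1) == 1 modulo 5^10
```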
A novel procedure for the identification of chaos in complex biological systems
We demonstrate the presence of chaos in stochastic simulations that are widely used to study biodiversity in nature. The investigation deals with a set of three distinct species that evolve according to the standard rules of mobility, reproduction and predation, with predation following the cyclic rules of the popular rock-paper-scissors game. The study uncovers the possibility of distinguishing between time evolutions that start from slightly different initial states, guided by the Hamming distance, which heuristically unveils the chaotic behavior. The finding opens up a quantitative approach that relates the correlation length to the average density of maxima of a typical species, and an ensemble of stochastic simulations is implemented to support the procedure. The main result of the work shows how a single, simple experimental realization that counts the density of maxima associated with the chaotic evolution of the species serves to infer its correlation length. We use the result to investigate other distinct complex systems: one dealing with a set of differential equations that can be used to model a diversity of natural and artificial chaotic systems, and another focusing on ocean water levels.
Labels: cs=0, phy=1, math=0, stat=0, q-bio=0, q-fin=0
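The Hamming-distance diagnostic above is easy to illustrate on a textbook chaotic system (the paper itself works with stochastic rock-paper-scissors lattice simulations): symbol-code two trajectories started a tiny distance apart and count their disagreements over time.

```python
import numpy as np

def hamming_divergence(x0, eps=1e-9, r=4.0, steps=200):
    """Cumulative Hamming distance between symbol sequences of two
    logistic-map runs started eps apart: a heuristic chaos indicator."""
    a, b = x0, x0 + eps
    disagreements = []
    for _ in range(steps):
        a, b = r * a * (1 - a), r * b * (1 - b)
        disagreements.append(int((a > 0.5) != (b > 0.5)))  # symbols differ?
    return np.cumsum(disagreements)

print(hamming_divergence(0.2)[-1])  # grows steadily for chaotic dynamics
```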
DeepStory: Video Story QA by Deep Embedded Memory Networks
Question-answering (QA) on video contents is a significant challenge for achieving human-level intelligence, as it involves both vision and language in real-world settings. Here we demonstrate the possibility of an AI agent performing video story QA by learning from a large amount of cartoon videos. We develop a video-story learning model, i.e., Deep Embedded Memory Networks (DEMN), to reconstruct stories from a joint scene-dialogue video stream using a latent embedding space of observed data. The video stories are stored in a long-term memory component. For a given question, an LSTM-based attention model uses the long-term memory to recall the best question-story-answer triplet by focusing on specific words containing key information. We trained the DEMN on a novel QA dataset of children's cartoon video series, Pororo. The dataset contains 16,066 scene-dialogue pairs of 20.5-hour videos, 27,328 fine-grained sentences for scene description, and 8,913 story-related QA pairs. Our experimental results show that the DEMN outperforms other QA models. This is mainly due to 1) the reconstruction of video stories in a combined scene-dialogue form that utilizes the latent embedding and 2) attention. DEMN also achieved state-of-the-art results on the MovieQA benchmark.
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
QoS optimization in a cognitive radio network using the SFLA metaheuristic (Shuffled Frog Leaping Algorithm)
This work proposes a study of quality of service (QoS) in cognitive radio networks. The study is based on a stochastic optimization method called the shuffled frog leaping algorithm (SFLA). The interest of the SFLA algorithm is that it guarantees a better solution in a multi-carrier context, in order to satisfy the requirements of the secondary user (SU).
Labels: cs=1, phy=0, math=0, stat=0, q-bio=0, q-fin=0
Some remarks on Kuratowski partitions
We introduce the notion of $K$-ideals associated with Kuratowski partitions and we prove that each $\kappa$-complete ideal on a measurable cardinal $\kappa$ can be represented as a $K$-ideal. Moreover, we show some results concerning precipitous and Fréchet ideals.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Boosted nonparametric hazards with time-dependent covariates
Given functional data samples from a survival process with time dependent covariates, we propose a practical boosting procedure for estimating its hazard function nonparametrically. The estimator is consistent if the model is correctly specified; alternatively an oracle inequality can be demonstrated for tree-based models. To avoid overfitting, boosting employs several regularization devices. One of them is step-size restriction, but the rationale for this is somewhat mysterious from the viewpoint of consistency. Our convergence bounds bring some clarity to this issue by revealing that step-size restriction is a mechanism for preventing the curvature of the risk from derailing convergence. We use our boosting procedure to shed new light on a question from the operations literature concerning the effect of workload on service rates in an emergency department.
Labels: cs=0, phy=0, math=0, stat=1, q-bio=0, q-fin=0
$J^+$-like invariants of periodic orbits of the second kind in the restricted three body problem
We determine three invariants: Arnold's $J^+$-invariant as well as $\mathcal{J}_1$ and $\mathcal{J}_2$ invariants, which were introduced by Cieliebak-Frauenfelder-van Koert, of periodic orbits of the second kind near the heavier primary in the restricted three-body problem, provided that the mass ratio is sufficiently small.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Nonlinear Calderón-Zygmund inequalities for maps
Being motivated by the problem of deducing $L^p$-bounds on the second fundamental form of an isometric immersion from $L^p$-bounds on its mean curvature vector field, we prove a (nonlinear) Calderón-Zygmund inequality for maps between complete (possibly noncompact) Riemannian manifolds.
Labels: cs=0, phy=0, math=1, stat=0, q-bio=0, q-fin=0
Comparing multiple networks using the Co-expression Differential Network Analysis (CoDiNA)
The biomedical sciences increasingly recognise the relevance of gene co-expression networks for analysing complex systems, phenotypes, or diseases. When the goal is to investigate complex phenotypes under varying conditions, it is natural to employ comparative network methods. While approaches for comparing two networks exist, this is not the case for multiple networks. Here we present a method for the systematic comparison of an unlimited number of networks: Co-expression Differential Network Analysis (CoDiNA), which detects links and nodes that are common, specific, or different across the networks. Applying CoDiNA to a neurogenesis study identified genes for neuron differentiation; experimentally overexpressing one candidate resulted in a significant disturbance in the underlying gene regulatory network of neurogenesis. We compared data from adults and children with active tuberculosis to test for signatures of HIV, and we identified common and distinct network features for particular cancer types with CoDiNA. These studies show that CoDiNA successfully detects genes associated with the diseases.
0
0
0
1
1
0
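The link classification described in the CoDiNA abstract above can be made concrete with a small sketch. The following Python snippet is an illustrative assumption, not the published CoDiNA implementation: each network is represented as a dict from edges to link signs, and a link is called common, different, or specific accordingly.

```python
# Minimal sketch of classifying links across several co-expression networks,
# in the spirit of CoDiNA: a link is "common" if present with the same sign
# in every network, "different" if present everywhere with conflicting signs,
# and "specific" otherwise. Illustrative only, not the CoDiNA API.

def classify_links(networks):
    """networks: list of dicts mapping an edge (u, v) to a sign (+1 or -1)."""
    all_edges = set().union(*(net.keys() for net in networks))
    categories = {}
    for edge in all_edges:
        signs = [net[edge] for net in networks if edge in net]
        if len(signs) == len(networks):
            categories[edge] = "common" if len(set(signs)) == 1 else "different"
        else:
            categories[edge] = "specific"
    return categories

nets = [{("a", "b"): 1, ("b", "c"): 1},
        {("a", "b"): 1, ("b", "c"): -1},
        {("a", "b"): 1, ("b", "c"): 1, ("c", "d"): 1}]
print(classify_links(nets))
# {('a', 'b'): 'common', ('b', 'c'): 'different', ('c', 'd'): 'specific'}
```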
Classical and Quantum Factors of Channels
Given a classical channel, a stochastic map from inputs to outputs, can we replace the input with a simple intermediate variable that still yields the correct conditional output distribution? We examine two cases: first, when the intermediate variable is classical; second, when the intermediate variable is quantum. We show that the quantum variable's size is generically smaller than the classical, according to two different measures---cardinality and entropy. We demonstrate optimality conditions for a special case. We end with several related results: a proposal for extending the special case, a demonstration of the impact of quantum phases, and a case study concerning pure versus mixed states.
0
0
0
1
0
0
Learning Intrinsic Sparse Structures within Long Short-Term Memory
Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS simultaneously decreases the sizes of all basic structures by one and thereby always maintains dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves a 10.59x speedup without any perplexity loss on language modeling of the Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine question answering on the SQuAD dataset. Our approach is successfully extended to non-LSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is publicly available at this https URL
1
0
0
0
0
0
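The ISS abstract above relies on group Lasso regularization to remove whole components at once. A minimal sketch of such a penalty follows; the row-wise grouping and names are illustrative assumptions, since the paper's ISS groups span several LSTM weight matrices at once.

```python
import numpy as np

# Minimal sketch of a group Lasso penalty for structured sparsity: each row
# of W stands for one "ISS-like" group (all weights tied to a single hidden
# component), and the penalty sums the groups' L2 norms, pushing entire
# rows, i.e. whole components, to exactly zero.

def group_lasso_penalty(W):
    """Sum of L2 norms of the rows of W (one row = one group)."""
    return np.sum(np.linalg.norm(W, axis=1))

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))          # 8 hidden components, 16 weights each
W[3] = 0.0                            # an already-removed component
print(group_lasso_penalty(W))         # the zero row adds nothing

# In training, the total loss would be task_loss + lam * group_lasso_penalty(W).
```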
Electron Acceleration Mechanisms in Thunderstorms
Thunderstorms produce strong electric fields over regions on the order of a kilometer. The corresponding electric potential differences are on the order of 100 MV. Secondary cosmic rays reaching these regions may be significantly accelerated and even amplified in relativistic runaway avalanche processes. These phenomena lead to enhancements of the high-energy background radiation observed by detectors on the ground and on board aircraft. Moreover, intense submillisecond gamma-ray bursts named terrestrial gamma-ray flashes (TGFs) produced in thunderstorms are detected from low Earth orbit satellites. When passing through the atmosphere, these gamma-rays are known to produce secondary relativistic electrons and positrons, rapidly trapped in the geomagnetic field and injected into the near-Earth space environment. In the present work, we attempt to give an overview of the current state of research on high-energy phenomena associated with thunderstorms.
0
1
0
0
0
0
Consistent estimation in Cox proportional hazards model with measurement errors and unbounded parameter set
Cox proportional hazards model with measurement error is investigated. In Kukush et al. (2011) [Journal of Statistical Research 45, 77-94] and Chimisov and Kukush (2014) [Modern Stochastics: Theory and Applications 1, 13-32] asymptotic properties of the simultaneous estimator $\lambda_n(\cdot)$, $\beta_n$ were studied for the baseline hazard rate $\lambda(\cdot)$ and regression parameter $\beta$, where the parameter set $\Theta=\Theta_{\lambda}\times \Theta_{\beta}$ was assumed bounded. In the present paper, the set $\Theta_{\lambda}$ is unbounded from above and not separated away from $0$. We construct the estimator in two steps: first we derive a strongly consistent estimator and then modify it to provide its asymptotic normality.
0
0
1
1
0
0
Comparative Efficiency of Altruism and Egoism as Voting Strategies in Stochastic Environment
In this paper, we study the efficiency of egoistic and altruistic strategies within the model of social dynamics determined by voting in a stochastic environment (the ViSE model) using two criteria: maximizing the average capital increment and minimizing the number of bankrupt participants. The proposals are generated stochastically; three families of the corresponding distributions are considered: normal distributions, symmetrized Pareto distributions, and Student's $t$-distributions. It is found that the "pit of losses" paradox described earlier does not occur in the case of heavy-tailed distributions. The egoistic strategy protects agents from extinction in aggressive environments better than the altruistic ones do; however, the efficiency of altruism is higher in more favorable environments. A comparison of altruistic strategies with each other shows that in aggressive environments everyone should be supported to minimize extinction, while under more favorable conditions it is more efficient to support the weakest participants. Studying the dynamics of participants' capitals, we identify situations where the two criteria contradict each other. At the next stage of the study, combined voting strategies and societies involving participants with both selfish and altruistic strategies will be explored.
1
0
0
0
0
0
Learning End-to-end Autonomous Driving using Guided Auxiliary Supervision
Learning to drive faithfully in highly stochastic urban settings remains an open problem. To that end, we propose a Multi-task Learning from Demonstration (MT-LfD) framework which uses supervised auxiliary task prediction to guide the main task of predicting the driving commands. Our framework involves an end-to-end trainable network for imitating the expert demonstrator's driving commands. The network intermediately predicts visual affordances and action primitives through direct supervision, which provides the aforementioned auxiliary supervised guidance. We demonstrate that such joint learning and supervised guidance facilitate hierarchical task decomposition, assisting the agent to learn faster, achieve better driving performance, and increase the transparency of the otherwise black-box end-to-end network. We run our experiments to validate the MT-LfD framework in CARLA, an open-source urban driving simulator. We introduce multiple non-player agents in CARLA and induce temporal noise in them for realistic stochasticity.
1
0
0
1
0
0
Active Anomaly Detection via Ensembles: Insights, Algorithms, and Interpretability
The anomaly detection (AD) task is to identify the true anomalies in a given set of data instances. AD algorithms score the data instances and produce a ranked list of candidate anomalies, which are then analyzed by a human to discover the true anomalies. However, this process can be laborious for the human analyst when the number of false positives is very high. Therefore, in many real-world AD applications, including computer security and fraud prevention, the anomaly detector must be configurable by the human analyst to minimize the effort spent on false positives. In this paper, we study the problem of active learning to automatically tune an ensemble of anomaly detectors to maximize the number of true anomalies discovered. We make four main contributions towards this goal. First, we present an important insight that explains the practical successes of AD ensembles and how ensembles are naturally suited for active learning. Second, we present several algorithms for active learning with tree-based AD ensembles. These algorithms help us to improve the diversity of discovered anomalies, generate rule sets for improved interpretability of anomalous instances, and adapt to streaming data settings in a principled manner. Third, we present a novel algorithm called GLocalized Anomaly Detection (GLAD) for active learning with generic AD ensembles. GLAD allows end-users to retain the use of simple and understandable global anomaly detectors by automatically learning their local relevance to specific data instances using label feedback. Fourth, we present extensive experiments to evaluate our insights and algorithms. Our results show that in addition to discovering significantly more anomalies than state-of-the-art unsupervised baselines, our active learning algorithms under the streaming-data setup are competitive with the batch setup.
1
0
0
1
0
0
Accuracy of parameterized proton range models; a comparison
An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four different parameterization models: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
0
1
0
0
0
0
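The Bragg-Kleeman parameterization mentioned above takes the form $R(E) = \alpha E^p$ for the range, with the stopping power obtained by differentiation. A sketch follows; the constants are typical textbook values for protons in water and are assumptions here, not the fitted values from the study.

```python
# Sketch of the Bragg-Kleeman range-energy rule R(E) = alpha * E**p and its
# differentiated form for the stopping power. The constants are typical
# values for protons in water and are illustrative only; the paper compares
# fitted parameterizations against range-energy tables.

ALPHA = 0.0022   # cm / MeV**p (assumed value for water)
P = 1.77         # dimensionless exponent (assumed value)

def proton_range(energy_mev):
    """Range in cm of water for a proton of the given kinetic energy."""
    return ALPHA * energy_mev ** P

def stopping_power(energy_mev):
    """-dE/dx in MeV/cm, from inverting dR/dE = ALPHA * P * E**(P - 1)."""
    return 1.0 / (ALPHA * P * energy_mev ** (P - 1.0))

print(proton_range(150.0))      # ~15.7 cm, roughly a clinical depth
print(stopping_power(150.0))
```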
On the connectivity of level sets of automorphisms of free groups, with applications to decision problems
We show that the level sets of automorphisms of free groups with respect to the Lipschitz metric are connected as subsets of Culler-Vogtmann space. In fact we prove our result in the more general setting of deformation spaces. As applications, we give metric solutions of the conjugacy problem for irreducible automorphisms and the detection of reducibility. We additionally prove technical results that may be of independent interest, such as the fact that the set of displacements is well ordered.
0
0
1
0
0
0
Learning a Code: Machine Learning for Approximate Non-Linear Coded Computation
Machine learning algorithms are typically run on large scale, distributed compute infrastructure that routinely faces a number of unavailabilities such as failures and temporary slowdowns. Adding redundant computations using coding-theoretic tools called "codes" is an emerging technique to alleviate the adverse effects of such unavailabilities. A code consists of an encoding function that proactively introduces redundant computation and a decoding function that reconstructs unavailable outputs using the available ones. Past work focuses on using codes to provide resilience for linear computations and specific iterative optimization algorithms. However, computations performed for a variety of applications including inference on state-of-the-art machine learning algorithms, such as neural networks, typically fall outside this realm. In this paper, we propose taking a learning-based approach to designing codes that can handle non-linear computations. We present carefully designed neural network architectures and a training methodology for learning encoding and decoding functions that produce approximate reconstructions of unavailable computation results. We present extensive experimental results demonstrating the effectiveness of the proposed approach: we show that our learned codes can accurately reconstruct $64 - 98\%$ of the unavailable predictions from neural-network based image classifiers on the MNIST, Fashion-MNIST, and CIFAR-10 datasets. To the best of our knowledge, this work proposes the first learning-based approach for designing codes, and also presents the first coding-theoretic solution that can provide resilience for any non-linear (differentiable) computation. Our results show that learning can be an effective technique for designing codes, and that learned codes are a highly promising approach for bringing the benefits of coding to non-linear computations.
0
0
0
1
0
0
Detection via simultaneous trajectory estimation and long time integration
In this work, we consider the detection of manoeuvring small objects with radars. Such objects induce low signal to noise ratio (SNR) reflections in the received signal. We consider both co-located and separated transmitter/receiver pairs, i.e., mono-static and bi-static configurations, respectively, as well as multi-static settings involving both types. We propose a detection approach which is capable of coherently integrating these reflections within a coherent processing interval (CPI) in all these configurations and continuing integration for an arbitrarily long time across consecutive CPIs. We estimate the complex value of the reflection coefficients for integration while simultaneously estimating the object trajectory. Compounded with this is the estimation of the unknown time reference shift of the separated transmitters necessary for coherent processing. Detection is made by using the resulting integration value in a Neyman-Pearson test against a constant false alarm rate threshold. We demonstrate the efficacy of our approach in a simulation example with a very low SNR object which cannot be detected with conventional techniques.
1
0
0
1
0
0
Dust Growth and Magnetic Fields: from Cores to Disks (even down to Planets)
The recent rapid progress in observations of circumstellar disks and extrasolar planets has reinforced the importance of understanding the intimate coupling between star and planet formation. Under these circumstances, it may be invaluable to attempt to specify when and how planet formation begins in star-forming regions and to identify which physical processes/quantities are the most significant for linking star and planet formation. To this end, we have recently developed two projects: an observational project on dust growth in Class 0 YSOs and a theoretical modeling project of the HL Tauri disk. For the first project, we utilize archival data of radio interferometric observations and examine whether dust growth, a first step of planet formation, occurs in Class 0 YSOs. We find that while our observational results can be reproduced by the presence of large ($\sim$ mm) dust grains for some of the YSOs under the single-component modified blackbody formalism, an interpretation of no dust growth would be possible when a more detailed model is used. For the second project, we consider the origin of the disk configuration around HL Tauri, focusing on magnetic fields. We find that magnetically induced disk winds may play an important role in the HL Tauri disk. The combination of these attempts may enable us to move towards a comprehensive understanding of how star and planet formation are intimately coupled with each other.
0
1
0
0
0
0
Economic Design of Memory-Type Control Charts: The Fallacy of the Formula Proposed by Lorenzen and Vance (1986)
The memory-type control charts, such as EWMA and CUSUM, are powerful tools for detecting small quality changes in univariate and multivariate processes. Many papers on economic design of these control charts use the formula proposed by Lorenzen and Vance (1986) [Lorenzen, T. J., & Vance, L. C. (1986). The economic design of control charts: A unified approach. Technometrics, 28(1), 3-10, DOI: 10.2307/1269598]. This paper shows that this formula is not correct for memory-type control charts and its values can significantly deviate from the original values even if the ARL values used in this formula are accurately computed. Consequently, the use of this formula can result in charts that are not economically optimal. The formula is corrected for memory-type control charts, but unfortunately the modified formula is not a helpful tool from a computational perspective. We show that simulation-based optimization is a possible alternative method.
1
0
0
1
0
0
Error estimates for Riemann sums of some singular functions
In this short note, we obtain error estimates for Riemann sums of some singular functions.
0
0
1
0
0
0
The Relation Between Fundamental Constants and Particle Physics Parameters
The observed constraints on the variability of the proton to electron mass ratio $\mu$ and the fine structure constant $\alpha$ are used to establish constraints on the variability of the Quantum Chromodynamic Scale and a combination of the Higgs Vacuum Expectation Value and the Yukawa couplings. Further model dependent assumptions provide constraints on the Higgs VEV and the Yukawa couplings separately. A primary conclusion is that limits on the variability of dimensionless fundamental constants such as $\mu$ and $\alpha$ provide important constraints on the parameter space of new physics and cosmologies.
0
1
0
0
0
0
Real-Time Object Pose Estimation with Pose Interpreter Networks
In this work, we introduce pose interpreter networks for 6-DoF object pose estimation. In contrast to other CNN-based approaches to pose estimation that require expensively annotated object pose data, our pose interpreter network is trained entirely on synthetic pose data. We use object masks as an intermediate representation to bridge the real and synthetic domains. We show that when combined with a segmentation model trained on RGB images, our synthetically trained pose interpreter network is able to generalize to real data. Our end-to-end system for object pose estimation runs in real time (20 Hz) on live RGB data, without using depth information or ICP refinement.
1
0
0
0
0
0
Regularization, sparse recovery, and median-of-means tournaments
A regularized risk minimization procedure for regression function estimation is introduced that achieves near optimal accuracy and confidence under general conditions, including heavy-tailed predictor and response variables. The procedure is based on median-of-means tournaments, introduced by the authors in [8]. It is shown that the new procedure outperforms standard regularized empirical risk minimization procedures such as lasso or slope in heavy-tailed problems.
0
0
1
1
0
0
A Restaurant Process Mixture Model for Connectivity Based Parcellation of the Cortex
One of the primary objectives of human brain mapping is the division of the cortical surface into functionally distinct regions, i.e. parcellation. While it is generally agreed that at macro-scale different regions of the cortex have different functions, the exact number and configuration of these regions are not known. Methods for the discovery of these regions are thus important, particularly as the volume of available information grows. Towards this end, we present a parcellation method based on a Bayesian non-parametric mixture model of cortical connectivity.
1
0
0
1
0
0
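The Bayesian non-parametric mixture above rests on a restaurant-process prior, under which the number of clusters is learned rather than fixed. A minimal sketch of sampling a partition from the standard Chinese restaurant process, a simplification of the model in the paper, follows.

```python
import random

# Minimal sketch of drawing a partition from a Chinese restaurant process,
# the prior underlying restaurant-process mixture models: customer n joins
# an existing table with probability proportional to its size, or opens a
# new table with probability proportional to the concentration parameter.

def crp_partition(n_customers, alpha, seed=0):
    rng = random.Random(seed)
    tables = []                            # tables[i] = number of customers
    assignments = []
    for _ in range(n_customers):
        weights = tables + [alpha]         # existing sizes, then a new table
        choice = rng.choices(range(len(weights)), weights=weights)[0]
        if choice == len(tables):
            tables.append(1)
        else:
            tables[choice] += 1
        assignments.append(choice)
    return assignments, tables

assignments, tables = crp_partition(20, alpha=1.0)
print(tables)   # cluster sizes; the number of clusters is not fixed a priori
```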
On Unifying Deep Generative Models
Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as emerging families for generative model learning, have largely been considered as two distinct paradigms and have received extensive independent study. This paper aims to establish formal connections between GANs and VAEs through a new formulation of both. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of the classic wake-sleep algorithm. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables transferring techniques across research lines in a principled way. For example, we apply the importance weighting method from the VAE literature to improve GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show the generality and effectiveness of the transferred techniques.
1
0
0
1
0
0
DeepVisage: Making face recognition simple yet with powerful generalization skills
Face recognition (FR) methods report significant performance by adopting convolutional neural network (CNN) based learning methods. Although CNNs are mostly trained by optimizing the softmax loss, the recent trend shows an improvement of accuracy with different strategies, such as task-specific CNN learning with different loss functions, fine-tuning on the target dataset, metric learning and concatenating features from multiple CNNs. Incorporating these tasks obviously requires additional effort. Moreover, it demotivates the discovery of efficient CNN models for FR which are trained only with identity labels. We focus on this fact and propose an easily trainable, single-CNN-based FR method. Our CNN model exploits the residual learning framework. Additionally, it uses normalized features to compute the loss. Our extensive experiments show excellent generalization on different datasets. We obtain very competitive and state-of-the-art results on the LFW, IJB-A, YouTube Faces and CACD datasets.
1
0
0
0
0
0
The Erdős-Ginzburg-Ziv constant and progression-free subsets
Ellenberg and Gijswijt gave recently a new exponential upper bound for the size of three-term arithmetic progression free sets in $({\mathbb Z_p})^n$, where $p$ is a prime. Petrov summarized their method and generalized their result to linear forms. In this short note we use Petrov's result to give new exponential upper bounds for the Erdős-Ginzburg-Ziv constant of finite Abelian groups of high rank. Our main results depend on a conjecture about Property D.
0
0
1
0
0
0
The Differing Relationships Between Size, Mass, Metallicity and Core Velocity Dispersion of Central and Satellite Galaxies
We study the role of environment in the evolution of central and satellite galaxies with the Sloan Digital Sky Survey. We begin by studying the size-mass relation, replicating previous studies, which showed no difference between the sizes of centrals and satellites at fixed stellar mass, before turning our attention to the size-core velocity dispersion ($\sigma_0$) and mass-$\sigma_0$ relations. By comparing the median size and mass of the galaxies at fixed velocity dispersion we find that the central galaxies are consistently larger and more massive than their satellite counterparts in the quiescent population. In the star forming population we find there is no difference in size and only a small difference in mass. To analyse why these differences may be present, we investigate the radial mass profiles and stellar metallicity of the galaxies. We find that in the cores of the galaxies there is no difference in mass surface density between centrals and satellites, but there is a large difference at larger radii. We also find almost no difference between the stellar metallicity of centrals and satellites when they are separated into star forming and quiescent groups. Under the assumption that $\sigma_0$ is invariant to environmental processes, our results imply that central galaxies are likely being increased in mass and size by processes such as minor mergers, particularly at high $\sigma_0$, while satellites are being slightly reduced in mass and size by tidal stripping and harassment, particularly at low $\sigma_0$, all of which predominantly affect the outer regions of the galaxies.
0
1
0
0
0
0
Configuration Space Singularities of The Delta Manipulator
We investigate the configuration space of the Delta-Manipulator, identify 24 points in the configuration space where the Jacobian of the constraint equations loses rank, and show that these are not manifold points of the real algebraic set defined by the constraint equations.
1
0
0
0
0
0
Constrained Optimisation of Rational Functions for Accelerating Subspace Iteration
Earlier this decade, the so-called FEAST algorithm was released for computing the eigenvalues of a matrix in a given interval. Previously, rational filter functions have been examined as a parameter of FEAST. In this thesis, we expand on existing work with the following contributions: (i) obtaining well-performing rational filter functions via standard minimisation algorithms, (ii) obtaining constrained rational filter functions efficiently, and (iii) improving existing rational filter functions algorithmically. Using our new rational filter functions, FEAST requires up to one quarter fewer iterations on average compared to state-of-the-art rational filter functions.
1
0
0
0
0
0
Sets of lengths in atomic unit-cancellative finitely presented monoids
For an element $a$ of a monoid $H$, its set of lengths $\mathsf L (a) \subset \mathbb N$ is the set of all positive integers $k$ for which there is a factorization $a=u_1 \cdot \ldots \cdot u_k$ into $k$ atoms. We study the system $\mathcal L (H) = \{\mathsf L (a) \mid a \in H \}$ with a focus on the unions $\mathcal U_k (H) \subset \mathbb N$ which are the unions of all sets of lengths containing a given $k \in \mathbb N$. The Structure Theorem for Unions -- stating that for all sufficiently large $k$, the sets $\mathcal U_k (H)$ are almost arithmetical progressions with the same difference and global bound -- has found much attention for commutative monoids and domains. We show that it holds true for the not necessarily commutative monoids in the title satisfying suitable algebraic finiteness conditions. Furthermore, we give an explicit description of the system of sets of lengths of the monoids $B_{n} = \langle a,b \mid ba=b^{n} \rangle$ for $n \in \mathbb N_{\ge 2}$. Based on this description, we show that the monoids $B_n$ are not transfer Krull, which implies that their systems $\mathcal L (B_n)$ are distinct from systems of sets of lengths of commutative Krull monoids and others.
0
0
1
0
0
0
Spin-orbit effective fields in Pt/GdFeCo bilayers
Amid the increasing interest in spin-orbit torque (SOT) in various magnetic materials, we investigated SOT in rare earth-transition metal ferrimagnetic alloys. Harmonic Hall measurements were performed in Pt/GdFeCo bilayers to quantify the effective fields resulting from the SOT. It is found that the damping-like torque rapidly increases near the magnetization compensation temperature $T_M$ of the GdFeCo, which is attributed to the reduction of the net magnetic moment.
0
1
0
0
0
0
Bridging Static and Dynamic Program Analysis using Fuzzy Logic
Static program analysis is used to summarize properties over all dynamic executions. In a unifying approach based on 3-valued logic, properties are either assigned a definite value or unknown. But in summarizing a set of executions, a property is more accurately represented as being biased towards true or towards false. Compilers use program analysis to determine the benefit of an optimization. Since benefit (e.g., performance) is justified based on the common case, understanding bias is essential in guiding the compiler. Furthermore, successful optimization also relies on understanding the quality of the information, i.e., the plausibility of the bias. If the quality of the static information is too low to form a decision, we would like a mechanism that improves dynamically. We consider the problem of building such a reasoning framework and present the fuzzy data-flow analysis. Our approach generalizes previous work that uses 3-valued logic. We derive fuzzy extensions of data-flow analyses used by the lazy code motion optimization and unveil opportunities previous work would not detect due to limited expressiveness. Furthermore, we show how the results of our analysis can be used in an adaptive classifier that improves as the application executes.
1
0
0
0
0
0
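The shift from 3-valued to fuzzy data-flow facts described above can be illustrated in a few lines; the min/max merge rule below is one standard fuzzy-logic choice and is an illustrative assumption, not the paper's full framework.

```python
# Minimal sketch of a fuzzy data-flow lattice: instead of a 3-valued fact
# (true / false / unknown), each fact carries a bias in [0, 1], and the
# usual meet/join at control-flow merges become fuzzy conjunction and
# disjunction (min/max here, a standard fuzzy-logic choice).

def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

# A fact "x is available" entering a merge point from two predecessors,
# biased towards true on the hot path and towards false on the cold one:
hot_path, cold_path = 0.9, 0.2
must_hold = fuzzy_and(hot_path, cold_path)   # conservative view: 0.2
may_hold = fuzzy_or(hot_path, cold_path)     # optimistic view: 0.9
print(must_hold, may_hold)
```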
The Formation of Heliospheric Arcs of Slow Solar Wind
A major challenge in solar and heliospheric physics is understanding how highly localized regions, far smaller than 1 degree at the Sun, are the source of solar-wind structures spanning more than 20 degrees near Earth. The Sun's atmosphere is divided into magnetically open regions, coronal holes, where solar-wind plasma streams out freely and fills the solar system, and closed regions, where the plasma is confined to coronal loops. The boundary between these regions extends outward as the heliospheric current sheet (HCS). Measurements of plasma composition imply that the solar wind near the HCS, the so-called slow solar wind, originates in closed regions, presumably by the processes of field-line opening or interchange reconnection. Mysteriously, however, slow wind is also often seen far from the HCS. We use high-resolution, three-dimensional magnetohydrodynamic simulations to calculate the dynamics of a coronal hole whose geometry includes a narrow corridor flanked by closed field and which is driven by supergranule-like flows at the coronal-hole boundary. We find that these dynamics result in the formation of giant arcs of closed-field plasma that extend far from the HCS and span tens of degrees in latitude and longitude at Earth, accounting for the slow solar wind observations.
0
1
0
0
0
0
A Construction of Infinitely Many Solutions to the Strominger System
In this paper we construct explicit smooth solutions to the Strominger system on generalized Calabi-Gray manifolds, which are compact non-Kähler Calabi-Yau 3-folds with infinitely many distinct topological types and sets of Hodge numbers.
0
0
1
0
0
0
A clustering algorithm for multivariate data streams with correlated components
Common clustering algorithms require multiple scans of all the data to achieve convergence, which is prohibitive when large databases, with data arriving in streams, must be processed. Algorithms extending the popular K-means method to the analysis of streaming data have been present in the literature since 1998 (Bradley et al. in Scaling clustering algorithms to large databases. In: KDD. p. 9-15, 1998; O'Callaghan et al. in Streaming-data algorithms for high-quality clustering. In: Proceedings of IEEE international conference on data engineering. p. 685, 2001), based on the memorization and recursive update of a small number of summary statistics, but they either do not take into account the specific variability of the clusters, or assume that the random vectors which are processed and grouped have uncorrelated components. Unfortunately, this is not the case in many practical situations. We here propose a new algorithm to process data streams whose data have correlated components and come from clusters with different covariance matrices. Such covariance matrices are estimated via an optimal double shrinkage method, which provides positive definite estimates even in the presence of a few data points, or of data having components with small variance. This is needed to invert the matrices and compute the Mahalanobis distances that we use for the assignment of data to clusters. We also estimate the total number of clusters from the data.
0
0
1
1
0
0
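The assignment step of the streaming algorithm above, shrinkage covariance estimation followed by Mahalanobis distances, can be sketched as follows. Ledoit-Wolf shrinkage is used here as a stand-in for the paper's optimal double shrinkage estimator, so the snippet is illustrative only.

```python
import numpy as np
from sklearn.covariance import LedoitWolf

# Sketch of the assignment step: estimate each cluster's covariance with a
# shrinkage estimator (Ledoit-Wolf here, standing in for the paper's optimal
# double shrinkage), then assign a new point to the cluster with the
# smallest Mahalanobis distance.

def mahalanobis_assign(point, cluster_samples):
    """cluster_samples: list of (n_i, d) arrays, one per cluster."""
    best, best_dist = None, np.inf
    for k, samples in enumerate(cluster_samples):
        mean = samples.mean(axis=0)
        # shrinkage keeps the estimate positive definite even for few points
        cov = LedoitWolf().fit(samples).covariance_
        diff = point - mean
        dist = float(diff @ np.linalg.solve(cov, diff))
        if dist < best_dist:
            best, best_dist = k, dist
    return best, best_dist

rng = np.random.default_rng(1)
clusters = [rng.normal(0, 1, size=(10, 3)), rng.normal(3, 2, size=(10, 3))]
print(mahalanobis_assign(np.array([2.5, 3.0, 3.5]), clusters))
```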
Semantic Interpolation in Implicit Models
In implicit models, one often interpolates between sampled points in latent space. As we show in this paper, care needs to be taken to match up the distributional assumptions on code vectors with the geometry of the interpolating paths. Otherwise, typical assumptions about the quality and semantics of in-between points may not be justified. Based on our analysis, we propose to modify the prior code distribution to put significantly more probability mass closer to the origin. As a result, linear interpolation paths are not only shortest paths, but they are also guaranteed to pass through high-density regions, irrespective of the dimensionality of the latent space. Experiments on standard benchmark image datasets demonstrate clear visual improvements in the quality of the generated samples and exhibit more meaningful interpolation paths.
1
0
0
1
0
0
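The distribution/geometry mismatch motivating the paper above is easy to reproduce numerically: under a standard Gaussian prior in high dimension, the midpoint of a linear interpolation has an atypically small norm. The sketch below contrasts it with spherical interpolation; the paper's actual remedy is to reshape the prior itself.

```python
import numpy as np

# Under a standard Gaussian prior in dimension d, samples concentrate near
# norm sqrt(d), so the midpoint of a linear interpolation between two
# samples falls in a low-density region. Spherical interpolation (slerp)
# keeps the norm near the typical shell.

d = 512
rng = np.random.default_rng(0)
z0, z1 = rng.normal(size=d), rng.normal(size=d)

lerp_mid = 0.5 * (z0 + z1)

omega = np.arccos(np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1)))
slerp_mid = (np.sin(0.5 * omega) * z0 + np.sin(0.5 * omega) * z1) / np.sin(omega)

print(np.sqrt(d))                  # typical sample norm, ~22.6
print(np.linalg.norm(lerp_mid))    # noticeably smaller, ~16
print(np.linalg.norm(slerp_mid))   # stays near the typical norm
```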
Error analysis for global minima of semilinear optimal control problems
In [1] we consider an optimal control problem subject to a semilinear elliptic PDE together with its variational discretization, where we provide a condition which allows one to decide whether a solution of the necessary first-order conditions is a global minimum. This condition can be explicitly evaluated at the discrete level. Furthermore, we prove that if the above condition holds uniformly with respect to the discretization parameter, the sequence of discrete solutions converges to a global solution of the corresponding limit problem. With the present work we complement our investigations of [1] by proving an error estimate for these discrete global solutions. Numerical experiments confirm our analytical findings.
0
0
1
0
0
0
Random dynamics of two-dimensional stochastic second grade fluids
In this paper, we consider a stochastic model of incompressible non-Newtonian fluids of second grade on a bounded domain of $\mathbb{R}^2$ with multiplicative noise. We first show that the solutions to the stochastic equations of second grade fluids generate a continuous random dynamical system. Second, we investigate the Fréchet differentiability of the random dynamical system. Finally, we establish the asymptotic compactness of the random dynamical system, and the existence of random attractors for the random dynamical system, we also obtain the upper semi-continuity of the perturbed random attractors when the noise intensity approaches zero.
0
0
1
0
0
0
Lago Distributed Network Of Data Repositories
We describe a set of tools, services and strategies of the Latin American Giant Observatory (LAGO) data repository network, to implement Data Accessibility, Reproducibility and Trustworthiness.
1
0
0
0
0
0
Hyperfunctions, the Duistermaat-Heckman theorem, and Loop Groups
In this article we investigate the Duistermaat-Heckman theorem using the theory of hyperfunctions. In applications involving Hamiltonian torus actions on infinite dimensional manifolds, this more general theory seems to be necessary in order to accommodate the existence of the infinite order differential operators which arise from the isotropy representations on the tangent spaces to fixed points. We quickly review the theory of hyperfunctions and their Fourier transforms, and then apply it to construct a hyperfunction analogue of the Duistermaat-Heckman distribution. Our main goal is to study the Duistermaat-Heckman hyperfunction of $\Omega SU(2)$, but in getting to this goal we also characterize the singular locus of the moment map for the Hamiltonian action of $T\times S^1$ on $\Omega G$. In sum, this paper presents a Duistermaat-Heckman hyperfunction arising from a Hamiltonian action on an infinite dimensional manifold.
0
0
1
0
0
0
When is selfish routing bad? The price of anarchy in light and heavy traffic
This paper examines the behavior of the price of anarchy as a function of the traffic inflow in nonatomic congestion games with multiple origin-destination (O/D) pairs. Empirical studies in real-world networks show that the price of anarchy is close to 1 in both light and heavy traffic, thus raising the question: can these observations be justified theoretically? We first show that this is not always the case: the price of anarchy may remain a positive distance away from 1 for all values of the traffic inflow, even in simple three-link networks with a single O/D pair and smooth, convex costs. On the other hand, for a large class of cost functions (including all polynomials), the price of anarchy does converge to 1 in both heavy and light traffic, irrespective of the network topology and the number of O/D pairs in the network. We also examine the rate of convergence of the price of anarchy, and we show that it follows a power law whose degree can be computed explicitly when the network's cost functions are polynomials.
1
0
1
0
0
0
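The light- and heavy-traffic behaviour claimed above can be checked in closed form on the classic two-link Pigou network with costs $c_1(x)=1$ and $c_2(x)=x$; this worked example is ours for illustration, not taken from the paper.

```python
# Light/heavy-traffic behaviour on the two-link Pigou network
# (costs c1(x) = 1 and c2(x) = x, total inflow r). Both flows have closed
# forms: the price of anarchy peaks at r = 1 (the familiar 4/3) and tends
# to 1 as r -> 0 or r -> infinity, matching the result described above.

def equilibrium_cost(r):
    x2 = min(r, 1.0)               # selfish traffic fills the variable link
    return x2 * x2 + (r - x2) * 1.0

def optimal_cost(r):
    x2 = min(r, 0.5)               # marginal costs equalise at x2 = 1/2
    return x2 * x2 + (r - x2) * 1.0

for r in [0.1, 0.5, 1.0, 2.0, 10.0, 100.0]:
    print(r, equilibrium_cost(r) / optimal_cost(r))
# PoA: 1.0, 1.0, 1.333..., 1.142..., 1.025..., 1.0025...
```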
Roadmap for the international, accelerator-based neutrino programme
In line with its terms of reference, the ICFA Neutrino Panel has developed a roadmap for the international, accelerator-based neutrino programme. A "roadmap discussion document" was presented in May 2016, taking into account the peer-group consultation described in the Panel's initial report. The "roadmap discussion document" was used to solicit feedback from the neutrino community---and more broadly, the particle- and astroparticle-physics communities---and the various stakeholders in the programme. The roadmap, the conclusions and the recommendations presented in this document take into account the comments received following the publication of the roadmap discussion document. With its roadmap the Panel documents the approved objectives and milestones of the experiments that are presently in operation or under construction. Approval, construction and exploitation milestones are presented for experiments that are being considered for approval. The timetable proposed by the proponents is presented for experiments that are not yet being considered formally for approval. Based on this information, the evolution of the precision with which the critical parameters governing neutrino oscillations are known has been evaluated. Branch or decision points have been identified based on the anticipated evolution in precision. The branch or decision points have in turn been used to identify desirable timelines for the neutrino-nucleus cross section and hadro-production measurements that are required to maximise the integrated scientific output of the programme. The branch points have also been used to identify the timeline for the R&D required to take the programme beyond the horizon of the next generation of experiments. The theory and phenomenology programme, including nuclear theory, required to ensure that maximum benefit is derived from the experimental programme is also discussed.
0
1
0
0
0
0
Parallel Implementation of Lossy Data Compression for Temporal Data Sets
Many scientific data sets contain temporal dimensions, i.e., data storing information at the same spatial location but at different time stamps. Some of the biggest temporal datasets are produced by parallel computing applications such as simulations of climate change and fluid dynamics. Temporal datasets can be very large and take a huge amount of time to transfer among storage locations. Using data compression techniques, files can be transferred faster and storage space saved. NUMARCK is a lossy data compression algorithm for temporal data sets that learns emerging distributions of element-wise change ratios along the temporal dimension and encodes them into an index table to be concisely represented. This paper presents a parallel implementation of NUMARCK. Evaluated with six data sets obtained from climate and astrophysics simulations, parallel NUMARCK achieved scalable speedups of up to 8788 when running 12800 MPI processes on a parallel computer. We also compare its compression ratios against two lossy data compression algorithms, ISABELA and ZFP. The results show that NUMARCK achieves higher compression ratios than ISABELA and ZFP.
1
0
0
0
0
0
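The core NUMARCK encoding sketched in the abstract above, change ratios quantized through an index table, can be illustrated as follows; quantile binning stands in for the paper's learned distribution of change ratios, so treat the snippet as a sketch.

```python
import numpy as np

# Sketch of the core idea: compute element-wise change ratios between
# consecutive time steps, build a small table of representative ratio values
# (quantile bins here, for illustration), and store only per-element bin
# indices plus the table. Decompression multiplies the previous snapshot by
# the looked-up ratios, so the scheme is lossy.

def compress_step(prev, curr, n_bins=256):
    ratios = curr / prev                               # element-wise change
    table = np.quantile(ratios, np.linspace(0, 1, n_bins))
    indices = np.abs(ratios[:, None] - table[None, :]).argmin(axis=1)
    return table, indices.astype(np.uint8)             # 256 bins fit a byte

def decompress_step(prev, table, indices):
    return prev * table[indices]

rng = np.random.default_rng(0)
prev = rng.uniform(1.0, 2.0, size=10_000)
curr = prev * rng.normal(1.0, 0.01, size=10_000)       # small temporal change
table, idx = compress_step(prev, curr)
approx = decompress_step(prev, table, idx)
print(np.max(np.abs(approx - curr) / np.abs(curr)))    # small relative error
```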
What we really want to find by Sentiment Analysis: The Relationship between Computational Models and Psychological State
As a first step towards modeling the emotional state of a person, we build sentiment analysis models with existing deep neural network algorithms and compare them with psychological measurements to illuminate the relationship. In the experiments, we first examined the psychological state of 64 participants and asked them to summarize the story of a book, Chronicle of a Death Foretold (Marquez, 1981). Second, we trained models using 365,802 crawled movie reviews; then we evaluated participants' summaries using the pretrained model, a form of transfer learning. Given that emotion affects memories, we investigated the relationship between the evaluation score of the summaries from the computational models and the examined psychological measurements. The results show that although CNN performed the best among the other deep neural network algorithms (LSTM, GRU), its results are not related to the psychological state. Rather, GRU shows more explainable results depending on the psychological state. The contribution of this paper can be summarized as follows: (1) we illuminate the relationship between computational models and psychological measurements; (2) we suggest this framework as an objective method to evaluate emotion, i.e., the real sentiment of a person.
1
0
0
0
0
0
Survey of multifidelity methods in uncertainty propagation, inference, and optimization
In many situations across computational science and engineering, multiple computational models are available that describe a system of interest. These different models have varying evaluation costs and varying fidelities. Typically, a computationally expensive high-fidelity model describes the system with the accuracy required by the current application at hand, while lower-fidelity models are less accurate but computationally cheaper than the high-fidelity model. Outer-loop applications, such as optimization, inference, and uncertainty quantification, require multiple model evaluations at many different inputs, which often leads to computational demands that exceed available resources if only the high-fidelity model is used. This work surveys multifidelity methods that accelerate the solution of outer-loop applications by combining high-fidelity and low-fidelity model evaluations, where the low-fidelity evaluations arise from an explicit low-fidelity model (e.g., a simplified physics approximation, a reduced model, a data-fit surrogate, etc.) that approximates the same output quantity as the high-fidelity model. The overall premise of these multifidelity methods is that low-fidelity models are leveraged for speedup while the high-fidelity model is kept in the loop to establish accuracy and/or convergence guarantees. We categorize multifidelity methods according to three classes of strategies: adaptation, fusion, and filtering. The paper reviews multifidelity methods in the outer-loop contexts of uncertainty propagation, inference, and optimization.
1
0
0
1
0
0
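Of the three strategy classes above, fusion is the easiest to illustrate: a control-variate Monte Carlo estimator that combines a few high-fidelity evaluations with many low-fidelity ones. The toy models and the sample split below are assumptions for illustration; the survey discusses how to choose the coefficient and the sample allocation in a principled way.

```python
import numpy as np

# Sketch of a fusion-type multifidelity estimator: spend a few evaluations
# on the high-fidelity model and many on a cheap low-fidelity surrogate,
# using the surrogate as a control variate for E[f_hi(X)].

rng = np.random.default_rng(0)

def f_hi(x):                       # "expensive" high-fidelity model (toy)
    return np.sin(np.pi * x) + 0.05 * x ** 2

def f_lo(x):                       # cheap low-fidelity surrogate (toy)
    return np.sin(np.pi * x)

x_hi = rng.uniform(0, 1, 50)       # few paired high-/low-fidelity samples
x_lo = rng.uniform(0, 1, 50_000)   # many extra low-fidelity samples

y_hi, y_lo = f_hi(x_hi), f_lo(x_hi)
cov = np.cov(y_hi, y_lo)
alpha = cov[0, 1] / cov[1, 1]      # control-variate coefficient

estimate = y_hi.mean() + alpha * (f_lo(x_lo).mean() - y_lo.mean())
print(estimate)                    # multifidelity estimate of E[f_hi(X)]
```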
Nebular spectroscopy: A guide on H II regions and planetary nebulae
We present a tutorial on the determination of the physical conditions and chemical abundances in gaseous nebulae. We also include a brief review of recent results on the study of gaseous nebulae, their relevance for the study of stellar evolution, galactic chemical evolution, and the evolution of the universe. One of the most important problems in abundance determinations is the existence of a discrepancy between the abundances determined with collisionally excited lines and those determined by recombination lines, this is called the ADF (abundance discrepancy factor) problem; we review results related to this problem. Finally, we discuss possible reasons for the large t$^2$ values observed in gaseous nebulae.
0
1
0
0
0
0