Dataset columns: title (string, length 7-239); abstract (string, length 7-2.76k); cs, phy, math, stat, quantitative biology, quantitative finance (int64 topic labels, each 0 or 1).
The Urban Last Mile Problem: Autonomous Drone Delivery to Your Balcony
Drone delivery has been a hot topic in the industry in the past few years. However, existing approaches either focus on rural areas or rely on centralized drop-off locations from which the last mile delivery is performed. In this paper we tackle the problem of autonomous last mile delivery in urban environments using an off-the-shelf drone. We build a prototype system that is able to fly to the approximate delivery location using GPS and then find the exact drop-off location using visual navigation. The drop-off location could, e.g., be on a balcony or porch, and simply needs to be indicated by a visual marker on the wall or window. We test our system components in simulated environments, including the visual navigation and collision avoidance. Finally, we deploy our drone in a real-world environment and show how it can find the drop-off point on a balcony. To stimulate future research on this topic, we open-source our code.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Extragalactic VLBI surveys in the MeerKAT era
The past decade has seen significant advances in cm-wave VLBI extragalactic observations due to a wide range of technical successes, including the increase in processed field-of-view and bandwidth. The future inclusion of MeerKAT into global VLBI networks would provide further enhancement, particularly the dramatic sensitivity boost to >7000 km baselines. This will not be without its limitations, however, considering incomplete MeerKAT band overlap with current VLBI arrays and the small (real-time) field-of-view afforded by the phased up MeerKAT array. We provide a brief overview of the significant contributions MeerKAT-VLBI could make, with an emphasis on the scientific output of several MeerKAT extragalactic Large Survey Projects.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Equicontinuity, orbit closures and invariant compact open sets for group actions on zero-dimensional spaces
Let $X$ be a locally compact zero-dimensional space, let $S$ be an equicontinuous set of homeomorphisms such that $1 \in S = S^{-1}$, and suppose that $\overline{Gx}$ is compact for each $x \in X$, where $G = \langle S \rangle$. We show in this setting that a number of conditions are equivalent: (a) $G$ acts minimally on the closure of each orbit; (b) the orbit closure relation is closed; (c) for every compact open subset $U$ of $X$, there is $F \subseteq G$ finite such that $\bigcap_{g \in F}g(U)$ is $G$-invariant. All of these are equivalent to a notion of recurrence, which is a variation on a concept of Auslander-Glasner-Weiss. It follows in particular that the action is distal if and only if it is equicontinuous.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
The Ramsey property for Banach spaces, Choquet simplices, and their noncommutative analogs
We show that the Gurarij space $\mathbb{G}$ and its noncommutative analog $\mathbb{NG}$ both have extremely amenable automorphism group. We also compute the universal minimal flows of the automorphism groups of the Poulsen simplex $\mathbb{P}$ and its noncommutative analogue $\mathbb{NP}$. The former is $\mathbb{P}$ itself, and the latter is the state space of the operator system associated with $\mathbb{NP}$. This answers a question of Conley and Törnquist. We also show that the pointwise stabilizer of any closed proper face of $\mathbb{P}$ is extremely amenable. Similarly, the pointwise stabilizer of any closed proper biface of the unit ball of the dual of the Gurarij space (the Lusky simplex) is extremely amenable. These results are obtained via the Kechris--Pestov--Todorcevic correspondence, by establishing the approximate Ramsey property for several classes of finite-dimensional operator spaces and operator systems (with distinguished linear functionals), including: Banach spaces, exact operator spaces, function systems with a distinguished state, and exact operator systems with a distinguished state. This is the first direct application of the Kechris--Pestov--Todorcevic correspondence in the setting of metric structures. The fundamental combinatorial principle that underpins the proofs is the Dual Ramsey Theorem of Graham and Rothschild. In the second part of the paper, we obtain factorization theorems for colorings of matrices and Grassmannians over $\mathbb{R}$ and ${\mathbb{C}}$, which can be considered as continuous versions of the Dual Ramsey Theorem for Boolean matrices and of the Graham-Leeb-Rothschild Theorem for Grassmannians over a finite field.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Evasion Attacks against Machine Learning at Test Time
In security-sensitive applications, the success of machine learning depends on a thorough vetting of its resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier's performance under evasion attacks, and allows the designer to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
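As a hedged illustration of the gradient-based idea in the abstract above: for a differentiable discriminant function, evasion amounts to descending its gradient from a malicious sample until the classifier scores it benign. The linear model, step size, and budget below are illustrative assumptions, not the paper's exact algorithm (which also covers nonlinear discriminants and constrains the total manipulation).

```python
import numpy as np

def evade(x, w, b, step=0.1, budget=50):
    """Gradient-descent evasion against a linear discriminant g(x) = w.x + b.

    Samples with g(x) > 0 are flagged as malicious; we move x along -grad g
    until it is scored benign or the manipulation budget runs out.
    """
    x = x.astype(float).copy()
    direction = w / np.linalg.norm(w)  # grad of g is constant for a linear g
    for _ in range(budget):
        if w @ x + b <= 0:             # scored benign: evasion succeeded
            break
        x -= step * direction
    return x
```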
Evaluation of Lightweight Block Ciphers in Hardware Implementation: A Comprehensive Survey
Conventional cryptography solutions are ill-suited to the strict memory, size, and power limitations of resource-constrained devices, so lightweight cryptography solutions have been developed specifically for this type of application. In this domain of cryptography, the term lightweight never refers to inadequately low security, but rather to establishing the best balance needed to maintain sufficient security. This paper presents the first comprehensive survey evaluation of lightweight block ciphers in terms of their speed, cost, performance, and balanced efficiency in hardware implementation, and facilitates the comparison of the studied ciphers in these respects. The cost of lightweight block ciphers is evaluated with the metric of Gate Equivalents (Fig. 1), their speed with the metric of clock cycles per block (Fig. 2), their performance with the metric of throughput (Fig. 3), and their balanced efficiency with the metric of Figure of Merit (Fig. 4). The results of these evaluations show that SIMON, SPECK, and Piccolo are the best lightweight block ciphers in hardware implementation.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
The Sample Complexity of Online One-Class Collaborative Filtering
We consider the online one-class collaborative filtering (CF) problem that consists of recommending items to users over time in an online fashion based on positive ratings only. This problem arises when users respond only occasionally to a recommendation with a positive rating, and never with a negative one. We study the impact of the probability of a user responding to a recommendation, p_f, on the sample complexity, i.e., the number of ratings required to make `good' recommendations, and ask whether receiving positive and negative ratings, instead of positive ratings only, improves the sample complexity. Both questions arise in the design of recommender systems. We introduce a simple probabilistic user model, and analyze the performance of an online user-based CF algorithm. We prove that after an initial cold start phase, where recommendations are invested in exploring the user's preferences, this algorithm makes---up to a fraction of the recommendations required for updating the user's preferences---perfect recommendations. The number of ratings required for the cold start phase is nearly proportional to 1/p_f, and that for updating the user's preferences is essentially independent of p_f. As a consequence we find that receiving positive and negative ratings instead of only positive ones improves the number of ratings required for initial exploration by a factor of 1/p_f, which can be significant.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Coherent long-distance displacement of individual electron spins
Controlling nanocircuits at the single electron spin level is a possible route for large-scale quantum information processing. In this context, individual electron spins have been identified as versatile quantum information carriers to interconnect different nodes of a spin-based semiconductor quantum circuit. Despite important experimental efforts to control the electron displacement over long distances, keeping the electron spin coherence after transfer remained up to now elusive. Here we demonstrate that individual electron spins can be displaced coherently over a distance of 5 micrometers. This displacement is realized on a closed path made of three tunnel-coupled lateral quantum dots. Using fast quantum dot control, the electrons tunnel from one dot to another at a speed approaching 100 m/s. We find that the spin coherence length is 8 times longer than expected from the electron spin coherence without displacement. Such an enhanced spin coherence points to a process similar to the motional narrowing observed in nuclear magnetic resonance experiments. The demonstrated coherent displacement will enable long-range interaction between distant spin-qubits and will open the route towards non-abelian and holonomic manipulation of a single electron spin.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Transparency and Explanation in Deep Reinforcement Learning Neural Networks
Autonomous AI systems will be entering human society in the near future to provide services and work alongside humans. For those systems to be accepted and trusted, the users should be able to understand the reasoning process of the system, i.e. the system should be transparent. System transparency enables humans to form coherent explanations of the system's decisions and actions. Transparency is important not only for user trust, but also for software debugging and certification. In recent years, Deep Neural Networks have made great advances in multiple application areas. However, deep neural networks are opaque. In this paper, we report on work in transparency in Deep Reinforcement Learning Networks (DRLN). Such networks have been extremely successful in accurately learning action control in image input domains, such as Atari games. In this paper, we propose a novel and general method that (a) incorporates explicit object recognition processing into deep reinforcement learning models, (b) forms the basis for the development of "object saliency maps", to provide visualization of internal states of DRLNs, thus enabling the formation of explanations and (c) can be incorporated in any existing deep reinforcement learning framework. We present computational results and human experiments to evaluate our approach.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Refactoring Software Packages via Community Detection from Stability Point of View
As the complexity and size of software projects increase in real-world environments, creating and maintaining dependable, maintainable code becomes harder and more costly. Refactoring is considered a method for enhancing the internal structure of code and thereby improving many software properties, such as maintainability. In this thesis, the subject of refactoring software packages using community detection algorithms is discussed, with a focus on the notion of package stability. The proposed algorithm starts by extracting a package dependency network from Java byte code, and a community detection algorithm is used to find possible changes in package structures. In this work, the reasons why dependency directions must be considered when modeling package dependencies with graphs are also discussed, and a proof of the relationship between package stability and the modularity of package dependency graphs is presented that shows how modularity is in favor of package stability. For evaluating the proposed algorithm, a tool for live analysis of software packages is implemented, and two software systems are tested. Results show that modeling package dependencies with directed graphs and applying the presented refactoring method leads to a higher increase in package stability than the undirected graph modeling approaches that have been studied in the literature.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Mechanism Design in Social Networks
This paper studies an auction design problem for a seller to sell a commodity in a social network, where each individual (the seller or a buyer) can only communicate with her neighbors. The challenge to the seller is to design a mechanism to incentivize the buyers, who are aware of the auction, to further propagate the information to their neighbors so that more buyers will participate in the auction and hence, the seller will be able to make a higher revenue. We propose a novel auction mechanism, called information diffusion mechanism (IDM), which incentivizes the buyers to not only truthfully report their valuations on the commodity to the seller, but also further propagate the auction information to all their neighbors. In comparison, the direct extension of the well-known Vickrey-Clarke-Groves (VCG) mechanism to social networks can also incentivize information diffusion, but it will decrease the seller's revenue or even sometimes lead to a deficit. This formalization of the problem has not previously been addressed in the mechanism design literature, and our solution is significant in the presence of large-scale online social networks.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Prototype Matching Networks for Large-Scale Multi-label Genomic Sequence Classification
One of the fundamental tasks in understanding genomics is the problem of predicting Transcription Factor Binding Sites (TFBSs). With hundreds of Transcription Factors (TFs) as labels, genomic-sequence based TFBS prediction is a challenging multi-label classification task. There are two major biological mechanisms for TF binding: (1) sequence-specific binding patterns on genomes known as "motifs" and (2) interactions among TFs known as co-binding effects. In this paper, we propose a novel deep architecture, the Prototype Matching Network (PMN), to mimic the TF binding mechanisms. Our PMN model automatically extracts prototypes ("motif"-like features) for each TF through a novel prototype-matching loss. Borrowing ideas from few-shot matching models, we use the notion of a support set of prototypes and an LSTM to learn how TFs interact and bind to genomic sequences. On a reference TFBS dataset with $2.1$ million genomic sequences, PMN significantly outperforms baselines and validates our design choices empirically. To our knowledge, this is the first deep learning architecture that introduces prototype learning and considers TF-TF interactions for large-scale TFBS prediction. Not only is the proposed architecture accurate, but it also models the underlying biology.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Quantum Multicriticality near the Dirac-Semimetal to Band-Insulator Critical Point in Two Dimensions: A Controlled Ascent from One Dimension
We compute the effects of generic short-range interactions on gapless electrons residing at the quantum critical point separating a two-dimensional Dirac semimetal (DSM) and a symmetry-preserving band insulator (BI). The electronic dispersion at this critical point is anisotropic ($E_{\mathbf k}=\pm \sqrt{v^2 k^2_x + b^2 k^{2n}_y}$ with $n=2$), which results in unconventional scaling of physical observables. Due to the vanishing density of states ($\varrho(E) \sim |E|^{1/n}$), this anisotropic semimetal (ASM) is stable against weak short-range interactions. However, for stronger interactions the direct DSM-BI transition can either $(i)$ become a first-order transition, or $(ii)$ get avoided by an intervening broken-symmetry phase (BSP). We perform a renormalization group analysis by perturbing away from the one-dimensional limit with the small parameter $\epsilon = 1/n$, augmented with a $1/n$ expansion (parametrically suppressing quantum fluctuations in higher dimension). We identify charge density wave (CDW), antiferromagnet (AFM) and singlet s-wave superconductor as the three dominant candidates for the BSP. The onset of any such order at strong coupling $(\sim \epsilon)$ takes place through a continuous quantum phase transition across a multicritical point. We also present the phase diagram of an extended Hubbard model for the ASM, obtained via the controlled deformation of its counterpart in one dimension. The latter displays spin-charge separation and instabilities to CDW, spin density wave, and Luther-Emery liquid phases at arbitrarily weak coupling. The spin density wave and Luther-Emery liquid phases deform into pseudospin SU(2)-symmetric quantum critical points separating the ASM from the AFM and superconducting orders, respectively. Our results may be germane to a uniaxially strained honeycomb lattice or the organic compound $\alpha$-(BEDT-TTF)$_2\text{I}_3$.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Wider frequency domain for negative refraction index in a quantized composite right-left handed transmission line
The refraction index of the quantized lossy composite right-left handed transmission line (CRLH-TL) is deduced in the thermal coherence state. The results show that the negative refraction index (herein the left-handedness) can be implemented via the electric circuit dissipative factors (i.e., the resistances $R$ and conductances $G$) in a higher frequency band (1.446 GHz $\leq \omega \leq$ 15 GHz), and flexibly adjusted by the left-handed circuit components ($C_l$, $L_l$) and the right-handed circuit components ($C_r$, $L_r$) at a lower frequency ($\omega = 0.995$ GHz). The flexible adjustment of left-handedness in a wider bandwidth will be significant for microscale circuit design of the CRLH-TL and may lay the theoretical groundwork for its compact applications.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Model Averaging and its Use in Economics
The method of model averaging has become an important tool for dealing with model uncertainty, for example in situations where a large number of different theories exist, as is common in economics. Model averaging is a natural and formal response to model uncertainty in a Bayesian framework, and most of the paper deals with Bayesian model averaging. The important role of the prior assumptions in these Bayesian procedures is highlighted. In addition, frequentist model averaging methods are also discussed. Numerical methods to implement these methods are explained, and I point the reader to some freely available computational resources. The main focus is on uncertainty regarding the choice of covariates in normal linear regression models, but the paper also covers other, more challenging, settings, with particular emphasis on sampling models commonly used in economics. Applications of model averaging in economics are reviewed and discussed across a wide range of areas, including growth economics, production modelling, finance, and the forecasting of macroeconomic quantities.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
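For reference, the Bayesian model averaging machinery the abstract above refers to can be summarized in its standard textbook form (this is the generic formulation, not anything specific to this paper): posterior model probabilities weight each model's predictive distribution,

```latex
P(M_j \mid y) = \frac{p(y \mid M_j)\, P(M_j)}{\sum_k p(y \mid M_k)\, P(M_k)},
\qquad
p(y \mid M_j) = \int p(y \mid \theta_j, M_j)\, p(\theta_j \mid M_j)\, d\theta_j,

p(\Delta \mid y) = \sum_j p(\Delta \mid y, M_j)\, P(M_j \mid y),
```

where $\Delta$ is the quantity of interest and the marginal likelihoods $p(y \mid M_j)$ are where the prior assumptions emphasized in the abstract enter.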
Dropout Feature Ranking for Deep Learning Models
Deep neural networks (DNNs) achieve state-of-the-art results in a variety of domains. Unfortunately, DNNs are notorious for their non-interpretability, which limits their applicability in hypothesis-driven domains such as biology and healthcare. Moreover, in resource-constrained settings, it is critical to design tests that rely on fewer, more informative features while achieving high accuracy within a reasonable budget. We aim to close this gap by proposing a new general feature ranking method for deep learning. We show that our simple yet effective method performs on par with or compares favorably to eight strawman, classical, and deep-learning feature ranking methods in two simulations and five very different datasets, on tasks ranging from classification to regression, in both static and time series scenarios. We also illustrate the use of our method on a drug response dataset and show that it identifies genes relevant to the drug response.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Per-instance Differential Privacy
We consider a refinement of differential privacy --- per-instance differential privacy (pDP), which captures the privacy of a specific individual with respect to a fixed data set. We show that this is a strict generalization of standard DP that inherits all of its desirable properties, e.g., composition, invariance to side information and closure under post-processing, except that they all hold for every instance separately. When the data is drawn from a distribution, we show that per-instance DP implies generalization. Moreover, we provide explicit calculations of the per-instance DP for output perturbation on a class of smooth learning problems. The result reveals an interesting and intuitive fact: an individual has stronger privacy if he/she has a small "leverage score" with respect to the data set and if he/she can be predicted more accurately using the leave-one-out data set. Our simulation shows several orders-of-magnitude more favorable privacy and utility trade-offs when we consider the privacy of only the users in the data set. In a case study on differentially private linear regression, we provide a novel analysis of the One-Posterior-Sample (OPS) estimator and show that when the data set is well-conditioned it provides $(\epsilon,\delta)$-pDP for any target individual and matches the exact lower bound up to a $1+\tilde{O}(n^{-1}\epsilon^{-2})$ multiplicative factor. We also demonstrate how we can use a "pDP to DP conversion" step to design AdaOPS, which uses adaptive regularization to achieve the same results with $(\epsilon,\delta)$-DP.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
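One way to state the per-instance notion sketched above (a hedged paraphrase; see the paper for the precise form): fix a data set $D$ and an individual $z$, and let $D' = D \setminus \{z\}$. A mechanism $M$ is $(\epsilon,\delta)$-pDP for the pair $(D, z)$ if, for every measurable set $S$,

```latex
\Pr[M(D) \in S] \le e^{\epsilon}\, \Pr[M(D') \in S] + \delta
\quad\text{and}\quad
\Pr[M(D') \in S] \le e^{\epsilon}\, \Pr[M(D) \in S] + \delta .
```

Standard DP is then recovered by requiring this to hold for all pairs $(D, z)$ simultaneously, which is why pDP is a strict generalization.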
Dual quadratic differentials and entire minimal graphs in Heisenberg space
We define holomorphic quadratic differentials for spacelike surfaces with constant mean curvature in the Lorentzian homogeneous spaces $\mathbb{L}(\kappa,\tau)$ with isometry group of dimension 4, which are dual to the Abresch-Rosenberg differentials in the Riemannian counterparts $\mathbb{E}(\kappa,\tau)$, and obtain some consequences. On the one hand, we give a very short proof of the Bernstein problem in Heisenberg space, and provide a geometric description of the family of entire graphs sharing the same differential in terms of a 2-parameter conformal deformation. On the other hand, we prove that entire minimal graphs in Heisenberg space have negative Gauss curvature.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Adaptive Modular Exponentiation Methods vs. Python's Power Function
In this paper we use Python to implement two efficient modular exponentiation methods: the adaptive m-ary method and the adaptive sliding-window method of window size k, where both parameters are chosen adaptively based on the length of the exponent. We also benchmark both methods. Evaluation results show that, compared to the industry-standard efficient implementations of the modular power function in CPython and PyPy, our algorithms can reduce computing time by 1-5% for exponents with more than 3072 bits.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
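A minimal sketch of the (non-adaptive) left-to-right m-ary method the abstract builds on, with m fixed to a power of two; the paper's contribution is choosing m (or the window size k) adaptively from the exponent length, which is not shown here.

```python
def m_ary_pow(base, exponent, modulus, m=16):
    """Left-to-right m-ary modular exponentiation (m must be a power of two)."""
    table = [1] * m                      # precompute base^0 .. base^(m-1) mod modulus
    for i in range(1, m):
        table[i] = table[i - 1] * base % modulus
    digits = []                          # base-m digits of the exponent
    e = exponent
    while e:
        digits.append(e % m)
        e //= m
    digits.reverse()                     # most significant digit first
    result = 1
    for d in digits:
        for _ in range(m.bit_length() - 1):   # log2(m) squarings per digit
            result = result * result % modulus
        if d:
            result = result * table[d] % modulus
    return result

assert m_ary_pow(7, 560, 561) == pow(7, 560, 561)
```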
Bayesian radiocarbon modelling for beginners
Due to freely available, tailored software, Bayesian statistics is fast becoming the dominant paradigm in archaeological chronology construction. Such software provides users with powerful tools for Bayesian inference on chronological models with little need for formal study of statistical modelling or computer programming. This runs the risk that the software is reduced to the status of a black box, which is not sensible given the power and complexity of the modelling tools it implements. In this paper we seek to offer intuitive insight to ensure that readers from the archaeological research community who use Bayesian chronological modelling software will be better able to make well-educated choices about the tools and techniques they adopt. Our hope is that they will then be both better informed about their own research designs and better prepared to offer constructively critical assessments of the modelling undertaken by others.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Suspended Load Path Tracking Control Using a Tilt-rotor UAV Based on Zonotopic State Estimation
This work addresses the problem of path tracking control of a suspended load using a tilt-rotor UAV. The main challenge in controlling this kind of system arises from the dynamic behavior imposed by the load, which is usually coupled to the UAV by means of a rope, adding unactuated degrees of freedom to the whole system. Furthermore, knowledge of the load's position is often needed to accomplish the transportation task. Since the available sensors are commonly embedded in the mobile platform, information on the load position may not be directly available. To solve this problem, in this work the kinematics of the multi-body mechanical system are first formulated from the load's perspective, from which a detailed dynamic model is derived using the Euler-Lagrange approach, yielding a highly coupled, nonlinear state-space representation of the system, affine in the inputs, with the load's position and orientation directly represented by state variables. A zonotopic state estimator is proposed to solve the problem of estimating the load position and orientation, which is formulated based on sensors located at the aircraft, with different sampling times, and unknown-but-bounded measurement noise. To solve the path tracking problem, a discrete-time mixed $\mathcal{H}_2/\mathcal{H}_\infty$ controller with pole-placement constraints is designed with guaranteed time-response properties and robust to unmodeled dynamics, parametric uncertainties, and external disturbances. Results from numerical experiments, performed in a platform based on the Gazebo simulator and on a Computer Aided Design (CAD) model of the system, are presented to corroborate the performance of the zonotopic state estimator along with the designed controller.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
Decay Estimates and Strichartz Estimates of Fourth-order Schrödinger Operator
We study time decay estimates of the fourth-order Schrödinger operator $H=(-\Delta)^{2}+V(x)$ in $\mathbb{R}^{d}$ for $d=3$ and $d\geq5$. We analyze the low energy and high energy behaviour of the resolvent $R(H; z)$, and then derive the Jensen-Kato dispersion decay estimate and local decay estimate for $e^{-itH}P_{ac}$ under suitable spectral assumptions on $H$. Based on the Jensen-Kato decay estimate and the local decay estimate, we obtain the $L^1\rightarrow L^{\infty}$ estimate of $e^{-itH}P_{ac}$ in $3$ dimensions by a Ginibre argument, and also establish the endpoint global Strichartz estimates of $e^{-itH}P_{ac}$ for $d\geq5$. Furthermore, using the local decay estimate and the Georgescu-Larenas-Soffer conjugate operator method, we prove Jensen-Kato type decay estimates for some functions of $H$.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
On the Uniqueness of FROG Methods
The problem of recovering a signal from its power spectrum, called phase retrieval, arises in many scientific fields. One of many examples is ultra-short laser pulse characterization, in which the electromagnetic field oscillates at ~10^15 Hz and phase information cannot be measured directly due to limitations of the electronic sensors. Phase retrieval is ill-posed in most cases, as there are many different signals with the same Fourier transform magnitude. To overcome this fundamental ill-posedness, several measurement techniques are used in practice. One of the most popular methods for complete characterization of ultra-short laser pulses is the Frequency-Resolved Optical Gating (FROG). In FROG, the acquired data is the power spectrum of the product of the unknown pulse with its delayed replica. Therefore the measured signal is a quartic function of the unknown pulse. A generalized version of FROG, where the delayed replica is replaced by a second unknown pulse, is called blind FROG. In this case, the measured signal is quadratic with respect to both pulses. In this letter we introduce and formulate FROG-type techniques. We then show that almost all band-limited signals are determined uniquely, up to trivial ambiguities, by blind FROG measurements (and thus also by FROG), if in addition we have access to the signals' power spectra.
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
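For concreteness, the FROG trace described above (the power spectrum of the pulse multiplied by its delayed replica) is commonly written as follows, with the blind variant replacing the replica $x(t-\tau)$ by a second unknown pulse $y(t-\tau)$:

```latex
I_{\mathrm{FROG}}(\omega, \tau)
  = \left| \int_{-\infty}^{\infty} x(t)\, x(t-\tau)\, e^{-i\omega t}\, dt \right|^{2},
\qquad
I_{\mathrm{blind}}(\omega, \tau)
  = \left| \int_{-\infty}^{\infty} x(t)\, y(t-\tau)\, e^{-i\omega t}\, dt \right|^{2}.
```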
Riesz sequences and generalized arithmetic progressions
The purpose of this note is to verify that the results attained in [6] admit an extension to the multidimensional setting. Namely, for subsets of the two-dimensional torus, we find the sharp growth rate of the step(s) of a generalized arithmetic progression, in terms of its size, that may be found in exponential systems satisfying the Riesz sequence property.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Positive Scalar Curvature and Minimal Hypersurface Singularities
In this paper we develop methods to extend the minimal hypersurface approach to positive scalar curvature problems to all dimensions. This includes a proof of the positive mass theorem in all dimensions without a spin assumption. It also includes statements about the structure of compact manifolds of positive scalar curvature extending the work of \cite{sy1} to all dimensions. The technical work in this paper is to construct minimal slicings and associated weight functions in the presence of small singular sets and to show that the singular sets do not become too large in the lower dimensional slices. It is shown that the singular set in any slice is a closed set with Hausdorff codimension at least three. In particular for arguments which involve slicing down to dimension $1$ or $2$ the method is successful. The arguments can be viewed as an extension of the minimal hypersurface regularity theory to this setting of minimal slicings.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Moving Beyond Sub-Gaussianity in High-Dimensional Statistics: Applications in Covariance Estimation and Linear Regression
Concentration inequalities form an essential toolkit in the study of high-dimensional statistical methods. Most of the relevant statistics literature is based on the assumptions of sub-Gaussian/sub-exponential random vectors. In this paper, we bring together various probability inequalities for sums of independent random variables under much weaker exponential type (sub-Weibull) tail assumptions. These results extract a part sub-Gaussian tail behavior in finite samples, matching the asymptotics governed by the central limit theorem, and are compactly represented in terms of a new Orlicz quasi-norm - the Generalized Bernstein-Orlicz norm - that typifies such tail behaviors. We illustrate the usefulness of these inequalities through the analysis of four fundamental problems in high-dimensional statistics. In the first two problems, we study the rate of convergence of the sample covariance matrix in terms of the maximum elementwise norm and the maximum k-sub-matrix operator norm which are key quantities of interest in bootstrap procedures and high-dimensional structured covariance matrix estimation. The third example concerns the restricted eigenvalue condition, required in high dimensional linear regression, which we verify for all sub-Weibull random vectors under only marginal (not joint) tail assumptions on the covariates. To our knowledge, this is the first unified result obtained in such generality. In the final example, we consider the Lasso estimator for linear regression and establish its rate of convergence under much weaker tail assumptions (on the errors as well as the covariates) than those in the existing literature. The common feature in all our results is that the convergence rates under most exponential tails match the usual ones under sub-Gaussian assumptions. Finally, we also establish a high-dimensional CLT and tail bounds for empirical processes for sub-Weibulls.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
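For orientation, the sub-Weibull tail condition the abstract above relies on is usually defined through an Orlicz-type norm (this is the standard formulation; the paper's Generalized Bernstein-Orlicz norm refines it):

```latex
\|X\|_{\psi_\alpha} := \inf\left\{ \eta > 0 : \mathbb{E}\exp\!\left( |X|^{\alpha} / \eta^{\alpha} \right) \le 2 \right\},
\qquad
\mathbb{P}\left( |X| \ge t \right) \le 2 \exp\!\left( - t^{\alpha} / \|X\|_{\psi_\alpha}^{\alpha} \right),
```

so that $\alpha = 2$ recovers sub-Gaussian and $\alpha = 1$ sub-exponential tails, while $\alpha < 1$ allows the heavier tails the paper targets.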
The additive groups of $\mathbb{Z}$ and $\mathbb{Q}$ with predicates for being square-free
We consider the four structures $(\mathbb{Z}; \mathrm{Sqf}^\mathbb{Z})$, $(\mathbb{Z}; <, \mathrm{Sqf}^\mathbb{Z})$, $(\mathbb{Q}; \mathrm{Sqf}^\mathbb{Q})$, and $(\mathbb{Q}; <, \mathrm{Sqf}^\mathbb{Q})$ where $\mathbb{Z}$ is the additive group of integers, $\mathrm{Sqf}^\mathbb{Z}$ is the set of $a \in \mathbb{Z}$ such that $v_{p}(a) < 2$ for every prime $p$ and corresponding $p$-adic valuation $v_{p}$, $\mathbb{Q}$ and $\mathrm{Sqf}^\mathbb{Q}$ are defined likewise for rational numbers, and $<$ denotes the natural ordering on each of these domains. We prove that the second structure is model-theoretically wild while the other three structures are model-theoretically tame. Moreover, all these results can be seen as examples where number-theoretic randomness yields model-theoretic consequences.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Particle-hole symmetry of charge excitation spectra in the paramagnetic phase of the Hubbard model
The Kotliar and Ruckenstein slave-boson representation of the Hubbard model allows one to obtain, in analytical form, an approximation of the charge dynamical response function resulting from the Gaussian fluctuations around the paramagnetic saddle point. Numerical evaluation in the thermodynamic limit yields charge excitation spectra consisting of a continuum, a gapless collective mode with anisotropic zero-sound velocity, and a correlation-induced high-frequency mode at $\omega\approx U$. In this work we show that this analytical expression obeys the particle-hole symmetry of the model on any bipartite lattice with one atom in the unit cell. Other formal aspects of the approach are also addressed.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
OLÉ: Orthogonal Low-rank Embedding, A Plug and Play Geometric Loss for Deep Learning
Deep neural networks trained using a softmax layer at the top and the cross-entropy loss are ubiquitous tools for image classification. Yet, this does not naturally enforce intra-class similarity nor inter-class margin of the learned deep representations. To simultaneously achieve these two goals, different solutions have been proposed in the literature, such as the pairwise or triplet losses. However, such solutions carry the extra task of selecting pairs or triplets, and the extra computational burden of computing and learning for many combinations of them. In this paper, we propose a plug-and-play loss term for deep networks that explicitly reduces intra-class variance and enforces inter-class margin simultaneously, in a simple and elegant geometric manner. For each class, the deep features are collapsed into a learned linear subspace, or union of them, and inter-class subspaces are pushed to be as orthogonal as possible. Our proposed Orthogonal Low-rank Embedding (OLÉ) does not require carefully crafting pairs or triplets of samples for training, and works standalone as a classification loss, being the first reported deep metric learning framework of its kind. Because of the improved margin between features of different classes, the resulting deep networks generalize better, are more discriminative, and more robust. We demonstrate improved classification performance in general object recognition, plugging the proposed loss term into existing off-the-shelf architectures. In particular, we show the advantage of the proposed loss in the small data/model scenario, and we significantly advance the state-of-the-art on the Stanford STL-10 benchmark.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
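A hedged PyTorch sketch of the geometric idea above: collapse each class's feature matrix (small nuclear norm per class) while keeping the overall feature matrix high-rank (large total nuclear norm), so inter-class subspaces are pushed apart. This captures the intuition only; consult the paper for the exact OLÉ loss.

```python
import torch

def ole_like_loss(features, labels, num_classes):
    """Sum of per-class nuclear norms minus the nuclear norm of all features.

    features: (N, d) tensor of deep features; labels: (N,) integer tensor.
    """
    per_class = sum(
        torch.linalg.matrix_norm(features[labels == c], ord="nuc")
        for c in range(num_classes)
        if (labels == c).any()
    )
    return per_class - torch.linalg.matrix_norm(features, ord="nuc")
```

Used as a plug-and-play term, this would be added to the cross-entropy loss with a small weight, matching the abstract's claim that it works alongside existing architectures.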
Decidability problems in automaton semigroups
We consider decidability problems in self-similar semigroups, and in particular in semigroups of automatic transformations of $X^*$. We describe algorithms answering the word problem, and bound its complexity under some additional assumptions. We give a partial algorithm that decides in a group generated by an automaton, given $x,y$, whether an Engel identity ($[\cdots[[x,y],y],\dots,y]=1$ for a long enough commutator sequence) is satisfied. This algorithm succeeds, importantly, in proving that Grigorchuk's $2$-group is not Engel. We consider next the problem of recognizing Engel elements, namely elements $y$ such that the map $x\mapsto[x,y]$ attracts to $\{1\}$. Although this problem seems intractable in general, we prove that it is decidable for Grigorchuk's group: Engel elements are precisely those of order at most $2$. We include, in the text, a large number of open problems. Our computations were implemented using the package "Fr" within the computer algebra system "Gap".
Labels: cs=1, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Migration of a Carbon Adatom on a Charged Single-Walled Carbon Nanotube
We find that negative charges on an armchair single-walled carbon nanotube (SWCNT) can significantly enhance the migration of a carbon adatom on the external surfaces of SWCNTs, along the direction of the tube axis. Nanotube charging results in stronger binding of adatoms to SWCNTs and consequent longer lifetimes of adatoms before desorption, which in turn increases their migration distance several orders of magnitude. These results support the hypothesis of diffusion enhanced SWCNT growth in the volume of arc plasma. This process could enhance effective carbon flux to the metal catalyst.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Function Norms and Regularization in Deep Networks
Deep neural networks (DNNs) have become increasingly important due to their excellent empirical performance on a wide range of problems. However, regularization is generally achieved by indirect means, largely due to the complex set of functions defined by a network and the difficulty of measuring function complexity. There exists no method in the literature for additive regularization based on a norm of the function, as is classically considered in statistical learning theory. In this work, we propose sampling-based approximations to weighted function norms as regularizers for deep neural networks. We provide, to the best of our knowledge, the first proof in the literature of the NP-hardness of computing function norms of DNNs, motivating the necessity of an approximate approach. We then derive a generalization bound for functions trained with weighted norms and prove that a natural stochastic optimization strategy minimizes the bound. Finally, we empirically validate the proposed regularization strategies for both convex function sets and DNNs on real-world classification and image segmentation tasks, where they outperform weight decay, dropout, and batch normalization. Source code will be released at the time of publication.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Efficiently Clustering Very Large Attributed Graphs
Attributed graphs model real networks by enriching their nodes with attributes accounting for properties. Several techniques have been proposed for partitioning these graphs into clusters that are homogeneous with respect to both the semantic attributes and the structure of the graph. However, the time and space complexities of state-of-the-art algorithms limit their scalability to medium-sized graphs. We propose SToC (for Semantic-Topological Clustering), a fast and scalable algorithm for partitioning large attributed graphs. The approach is robust, being compatible both with categorical and with quantitative attributes, and it is tailorable, allowing the user to weight the semantic and topological components. Further, the approach does not require the user to guess the number of clusters in advance. SToC relies on well-known approximation techniques such as bottom-k sketches and traditional graph-theoretic concepts, together with a new perspective on the composition of heterogeneous distance measures. Experimental results demonstrate its ability to efficiently compute high-quality partitions of large-scale attributed graphs.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
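Of the approximation techniques the abstract names, bottom-k sketches are the easiest to illustrate: keep only the k smallest hash values of a set, and estimate the Jaccard similarity of two sets from their sketches alone. A minimal sketch of that primitive follows (SToC's composed semantic-topological distance is more involved; the hash function here is an arbitrary choice):

```python
import hashlib

def bottom_k(items, k=64):
    """Bottom-k sketch: the k smallest 64-bit hash values of a set."""
    hashed = sorted(
        int.from_bytes(hashlib.blake2b(str(x).encode(), digest_size=8).digest(), "big")
        for x in set(items)
    )
    return hashed[:k]

def jaccard_estimate(sk_a, sk_b, k=64):
    """Estimate |A & B| / |A | B| from the bottom-k sketches of A and B."""
    smallest_union = sorted(set(sk_a) | set(sk_b))[:k]  # k smallest of the union
    in_a, in_b = set(sk_a), set(sk_b)
    hits = sum(1 for h in smallest_union if h in in_a and h in in_b)
    return hits / len(smallest_union)
```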
Thermochemistry and vertical mixing in the tropospheres of Uranus and Neptune: How convection inhibition can affect the derivation of deep oxygen abundances
Thermochemical models have been used in the past to constrain the deep oxygen abundance in the gas and ice giant planets from tropospheric CO spectroscopic measurements. Knowing the oxygen abundance of these planets is key to better understanding their formation. These models have widely used dry and/or moist adiabats to extrapolate temperatures from the measured values in the upper troposphere down to the level where the thermochemical equilibrium between H$_2$O and CO is established. The mean molecular mass gradient produced by the condensation of H$_2$O stabilizes the atmosphere against convection and results in a vertical thermal profile and H$_2$O distribution that depart significantly from previous estimates. We revisit O/H estimates using an atmospheric structure that accounts for the inhibition of convection by condensation. We use a thermochemical network and the latest observations of CO in Uranus and Neptune to calculate the internal oxygen enrichment required to satisfy both these new estimates of the thermal profile and the observations. We also present the current limitations of such modeling.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Attentive cross-modal paratope prediction
Antibodies are a critical part of the immune system, having the function of directly neutralising or tagging undesirable objects (the antigens) for future destruction. Being able to predict which amino acids belong to the paratope, the region on the antibody which binds to the antigen, can facilitate antibody design and contribute to the development of personalised medicine. The suitability of deep neural networks has recently been confirmed for this task, with Parapred outperforming all prior physical models. Our contribution is twofold: first, we significantly improve on the computational efficiency of Parapred by leveraging à trous convolutions and self-attention. Second, we implement cross-modal attention by allowing the antibody residues to attend over antigen residues. This leads to new state-of-the-art results on this task, along with insightful interpretations.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=1, quantitative finance=0
Tomonaga-Luttinger spin liquid in the spin-1/2 inequilateral diamond-chain compound K$_3$Cu$_3$AlO$_2$(SO$_4$)$_4$
K$_3$Cu$_3$AlO$_2$(SO$_4$)$_4$ is a highly one-dimensional spin-1/2 inequilateral diamond-chain antiferromagnet. Spinon continuum and spin-singlet dimer excitations are observed in the inelastic neutron scattering spectra, in excellent agreement with a theoretical prediction: a dimer-monomer composite structure, where the dimer is caused by strong antiferromagnetic (AFM) coupling and the monomer forms an almost isolated quantum AFM chain controlling the low-energy excitations. Moreover, muon spin rotation/relaxation spectroscopy shows no long-range ordering down to 90~mK, which is roughly three orders of magnitude lower than the exchange interaction of the quantum AFM chain. K$_3$Cu$_3$AlO$_2$(SO$_4$)$_4$ is thus regarded as a compound that exhibits Tomonaga-Luttinger spin liquid behavior at low temperatures close to the ground state.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Weak Convergence of Stationary Empirical Processes
We offer an umbrella-type result which extends weak convergence of the classical empirical process on the line to that of more general processes indexed by functions of bounded variation. This extension is not contingent on the type of dependence of the underlying sequence of random variables. As a consequence, we establish weak convergence for stationary empirical processes indexed by general classes of functions under alpha-mixing conditions.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
Syntax Error Recovery in Parsing Expression Grammars
Parsing Expression Grammars (PEGs) are a formalism used to describe top-down parsers with backtracking. As PEGs do not provide a good error recovery mechanism, PEG-based parsers usually do not recover from syntax errors in the input, or recover from syntax errors using ad-hoc, implementation-specific features. The lack of proper error recovery makes PEG parsers unsuitable for use with Integrated Development Environments (IDEs), which need to build syntactic trees even for incomplete, syntactically invalid programs. We propose a conservative extension, based on PEGs with labeled failures, that adds a syntax error recovery mechanism to PEGs. This extension associates recovery expressions with labels, where a label now not only reports a syntax error but also uses its recovery expression to reach a synchronization point in the input and resume parsing. We give an operational semantics of PEGs with this recovery mechanism, and use an implementation based on this semantics to build a robust parser for the Lua language. We evaluate the effectiveness of this parser, alone and in comparison with a Lua parser with automatic error recovery generated by ANTLR, a popular parser generator.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
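A toy, hypothetical rendering of the mechanism described above (not the paper's Lua implementation): each failure label is bound to a recovery expression that skips to a synchronization token, records the error, and lets parsing resume.

```python
class Input:
    def __init__(self, tokens):
        self.tokens, self.pos, self.errors = tokens, 0, []

def skip_to(sync):
    """Recovery expression: advance the input to the next sync token."""
    def recover(inp):
        while inp.pos < len(inp.tokens) and inp.tokens[inp.pos] not in sync:
            inp.pos += 1
    return recover

RECOVERY = {lbl: skip_to({";"}) for lbl in ("ErrId", "ErrEq", "ErrNum", "ErrSemi")}

def expect(inp, token, label):
    """Match one token; on failure, report the label and run its recovery."""
    if inp.pos < len(inp.tokens) and inp.tokens[inp.pos] == token:
        inp.pos += 1
    else:
        inp.errors.append((label, inp.pos))
        RECOVERY[label](inp)          # resume at a sync point instead of aborting

def stmt(inp):                        # stmt <- ID '=' NUM ';'
    for token, label in (("id", "ErrId"), ("=", "ErrEq"),
                         ("num", "ErrNum"), (";", "ErrSemi")):
        expect(inp, token, label)

inp = Input(["id", "=", ";", "id", "=", "num", ";"])  # first stmt lacks NUM
stmt(inp); stmt(inp)
assert inp.errors == [("ErrNum", 2)]  # one error reported, parsing continued
```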
Collaborative Pressure Ulcer Prevention: An Automated Skin Damage and Pressure Ulcer Assessment Tool for Nursing Professionals, Patients, Family Members and Carers
This paper describes the Pressure Ulcers Online Website, which is a first step towards a new and innovative platform for helping people to detect, understand, and manage pressure ulcers. It outlines the reasons why the project has been developed and provides a central point of contact for pressure ulcer analysis and ongoing research. Using state-of-the-art technologies in convolutional neural networks and transfer learning along with end-to-end web technologies, this platform allows pressure ulcers to be analysed and findings to be reported. As the system evolves through collaborative partnerships, future versions will provide decision support functions to describe the complex characteristics of pressure ulcers along with information on wound care across multiple user boundaries. This project is therefore intended to raise awareness of and support for people suffering from pressure ulcers or providing care for them.
Labels: cs=0, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Brain networks reveal the effects of antipsychotic drugs on schizophrenia patients and controls
The study of brain networks, including those derived from functional neuroimaging data, attracts broad interest and represents a rapidly growing interdisciplinary field. Comparing the networks of healthy volunteers with those of patients can potentially offer new, quantitative diagnostic methods, and a framework for better understanding brain and mind disorders. We explore resting state fMRI data through network measures, and demonstrate that not only is there a distinctive network architecture in the healthy brain that is disrupted in schizophrenia, but also that both networks respond to medication. We construct networks representing 15 healthy individuals and 12 schizophrenia patients (males and females), all of whom are administered three drug treatments: (i) a placebo; and two antipsychotic medications, (ii) aripiprazole and (iii) sulpiride. We first reproduce the established finding that brain networks of schizophrenia patients exhibit increased efficiency and reduced clustering compared to controls. Our data then reveal that the antipsychotic medications mitigate this effect, shifting the metrics towards those observed in healthy volunteers, with a marked difference in efficacy between the two drugs. Additionally, we find that aripiprazole considerably alters the network statistics of healthy controls. Using a test of cognitive ability, we establish that aripiprazole also adversely affects their performance. This provides evidence that changes to macroscopic brain network architecture result in measurable behavioural differences. This is the first time different medications have been assessed in this way. Our results lay the groundwork for an objective methodology with which to calculate and compare the efficacy of different treatments of mind and brain disorders.
Labels: cs=0, phy=0, math=0, stat=0, quantitative biology=1, quantitative finance=0
New approach to Minkowski fractional inequalities using generalized k-fractional integral operator
In this paper, we obtain new results related to Minkowski fractional integral inequality using generalized k-fractional integral operator which is in terms of the Gauss hypergeometric function.
Labels: cs=0, phy=0, math=1, stat=0, quantitative biology=0, quantitative finance=0
Segmentation of nearly isotropic overlapped tracks in photomicrographs using successive erosions as watershed markers
The major challenges of automatic track counting are distinguishing tracks and material defects, identifying small tracks and defects of similar size, and detecting overlapping tracks. Here we address the latter issue using WUSEM, an algorithm which combines the watershed transform, morphological erosions and labeling to separate regions in photomicrographs. WUSEM shows reliable results when used in photomicrographs presenting almost isotropic objects. We tested this method in two datasets of diallyl phthalate (DAP) photomicrographs and compared the results when counting manually and using the classic watershed. The mean automatic/manual efficiency ratio when using WUSEM in the test datasets is 0.97 +/- 0.11.
Labels: cs=1, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
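A simplified single-pass sketch of the idea above (assuming SciPy and scikit-image): erode the binary image so touching objects split apart, label the survivors as markers, and run the watershed on the inverted distance transform. The published WUSEM algorithm iterates the erosion/labeling steps; that refinement is omitted here.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

def wusem_like(binary, erosions=5):
    """Separate touching objects using successive erosions as watershed markers."""
    eroded = binary.copy()
    for _ in range(erosions):
        eroded = ndi.binary_erosion(eroded)   # touching objects split apart
    markers, _ = ndi.label(eroded)            # surviving cores become seeds
    distance = ndi.distance_transform_edt(binary)
    return watershed(-distance, markers, mask=binary)
```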
A Bayesian Hyperprior Approach for Joint Image Denoising and Interpolation, with an Application to HDR Imaging
Recently, impressive denoising results have been achieved by Bayesian approaches which assume Gaussian models for the image patches. This improvement in performance can be attributed to the use of per-patch models. Unfortunately such an approach is particularly unstable for most inverse problems beyond denoising. In this work, we propose the use of a hyperprior to model image patches, in order to stabilize the estimation procedure. There are two main advantages to the proposed restoration scheme: Firstly it is adapted to diagonal degradation matrices, and in particular to missing data problems (e.g. inpainting of missing pixels or zooming). Secondly it can deal with signal dependent noise models, particularly suited to digital cameras. As such, the scheme is especially adapted to computational photography. In order to illustrate this point, we provide an application to high dynamic range imaging from a single image taken with a modified sensor, which shows the effectiveness of the proposed scheme.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Development and evaluation of a deep learning model for protein-ligand binding affinity prediction
Structure-based ligand discovery is one of the most successful approaches for augmenting the drug discovery process. Currently, there is a notable shift towards machine learning (ML) methodologies to aid such procedures. Deep learning has recently gained considerable attention, as it allows the model to "learn" to extract features that are relevant for the task at hand. We have developed a novel deep neural network estimating the binding affinity of ligand-receptor complexes. The complex is represented with a 3D grid, and the model utilizes a 3D convolution to produce a feature map of this representation, treating the atoms of both proteins and ligands in the same manner. Our network was tested on the CASF "scoring power" benchmark and the Astex Diverse Set and outperformed classical scoring functions. The model, together with usage instructions and examples, is available as a git repository at this http URL
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
The Bayesian update: variational formulations and gradient flows
The Bayesian update can be viewed as a variational problem by characterizing the posterior as the minimizer of a functional. The variational viewpoint is far from new and is at the heart of popular methods for posterior approximation. However, some of its consequences seem largely unexplored. We focus on the following one: defining the posterior as the minimizer of a functional gives a natural path towards the posterior by moving in the direction of steepest descent of the functional. This idea is made precise through the theory of gradient flows, allowing one to bring new tools to the study of Bayesian models and algorithms. Since the posterior may be characterized as the minimizer of different functionals, several variational formulations may be considered. We study three of them and their three associated gradient flows. We show that, in all cases, the rate of convergence of the flows to the posterior can be bounded by the geodesic convexity of the functional to be minimized. Each gradient flow naturally suggests a nonlinear diffusion with the posterior as its invariant distribution. These diffusions may be discretized to build proposals for Markov chain Monte Carlo (MCMC) algorithms. By construction, the diffusions are guaranteed to satisfy a certain optimality condition, and rates of convergence are given by the convexity of the functionals. We use this observation to propose a criterion for the choice of metric in Riemannian MCMC methods.
Labels: cs=0, phy=0, math=1, stat=1, quantitative biology=0, quantitative finance=0
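To make one instance of this concrete (the Kullback-Leibler formulation; the paper studies others as well): the posterior $\pi$ minimizes $\mathsf{J}(\rho) = \mathrm{KL}(\rho\,\|\,\pi)$, the Wasserstein gradient flow of $\mathsf{J}$ is a Fokker-Planck equation, and the associated diffusion is the Langevin dynamics that leaves $\pi$ invariant:

```latex
\partial_t \rho = \nabla \cdot \left( \rho\, \nabla \log \frac{\rho}{\pi} \right),
\qquad
dX_t = \nabla \log \pi(X_t)\, dt + \sqrt{2}\, dW_t .
```

Discretizing the diffusion in time yields the familiar Langevin-type MCMC proposals mentioned in the abstract.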
Magnetic diode at $T$ = 300 K
We report the finding of unidirectional electronic properties, analogous to a semiconductor diode, in a two-dimensional artificial permalloy honeycomb lattice of ultra-small bonds, with a typical length of ~ 12 nm. The unidirectional transport behavior, characterized by an asymmetric colossal enhancement in differential conductivity at a modest applied current of ~ 10-15 $\mu$A, persists to T = 300 K in a honeycomb lattice of thickness ~ 6 nm. The asymmetric behavior arises without the application of a magnetic field. A qualitative analysis of the experimental data suggests the role of magnetic charges, or monopoles, in these unusual observations, with strong implications for spintronics.
Labels: cs=0, phy=1, math=0, stat=0, quantitative biology=0, quantitative finance=0
Efficient mixture model for clustering of sparse high dimensional binary data
In this paper we propose a mixture model, SparseMix, for clustering of sparse high dimensional binary data, which connects model-based with centroid-based clustering. Every group is described by a representative and a probability distribution modeling dispersion from this representative. In contrast to classical mixture models based on the EM algorithm, SparseMix (i) is especially designed for the processing of sparse data, (ii) can be efficiently realized by an on-line Hartigan optimization algorithm, and (iii) is able to automatically reduce unnecessary clusters. We perform extensive experimental studies on various types of data, which confirm that SparseMix builds partitions with higher compatibility with the reference grouping than related methods. Moreover, the constructed representatives often better reveal the internal structure of the data.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
Normalized Total Gradient: A New Measure for Multispectral Image Registration
Image registration is a fundamental issue in multispectral image processing. In filter wheel based multispectral imaging systems, the non-coplanar placement of the filters always causes misalignment of the multiple channel images. The selective characteristic of the spectral response in multispectral imaging raises two challenges for image registration. First, the intensity levels of a local region may differ across individual channel images. Second, the local intensity may vary rapidly in some channel images while remaining stationary in others. Conventional multimodal measures, such as mutual information, correlation coefficient, and correlation ratio, can register images with different regional intensity levels, but will fail in the circumstance of severe local intensity variation. In this paper, a new measure, namely normalized total gradient (NTG), is proposed for multispectral image registration. The NTG is applied to the difference between two channel images. This measure is based on the key assumption (observation) that the gradient of the difference image between two aligned channel images is sparser than that between two misaligned ones. A registration framework, which incorporates an image pyramid and global/local optimization, is further introduced for rigid transforms. Experimental results validate that the proposed method is effective for multispectral image registration and performs better than conventional methods.
Labels: cs=1, phy=0, math=0, stat=0, quantitative biology=0, quantitative finance=0
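A small NumPy sketch of the measure's structure as described above: take the total absolute gradient of the difference image and normalize it by the channels' own total gradients, so the score is small when the two channels are aligned. The exact normalization in the paper may differ; this follows the abstract's description only.

```python
import numpy as np

def total_gradient(img):
    """Sum of absolute image gradients along both axes."""
    gy, gx = np.gradient(img.astype(float))
    return np.abs(gx).sum() + np.abs(gy).sum()

def ntg(channel_a, channel_b):
    """Normalized total gradient of the difference image (small when aligned)."""
    diff = channel_a.astype(float) - channel_b.astype(float)
    return total_gradient(diff) / (total_gradient(channel_a) + total_gradient(channel_b))
```

Registration would then search over rigid transforms of one channel to minimize this score, coarse-to-fine over an image pyramid.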
Graphical-model based estimation and inference for differential privacy
Many privacy mechanisms reveal high-level information about a data distribution through noisy measurements. It is common to use this information to estimate the answers to new queries. In this work, we provide an approach to solve this estimation problem efficiently using graphical models, which is particularly effective when the distribution is high-dimensional but the measurements are over low-dimensional marginals. We show that our approach is far more efficient than existing estimation techniques from the privacy literature and that it can improve the accuracy and scalability of many state-of-the-art mechanisms.
Labels: cs=1, phy=0, math=0, stat=1, quantitative biology=0, quantitative finance=0
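The measurement side of this pipeline is standard and easy to sketch: each low-dimensional marginal is a histogram with sensitivity 1, so Laplace noise of scale $1/\epsilon$ makes it $\epsilon$-DP (composing across marginals). The paper's actual contribution, fitting a graphical model to such noisy measurements in order to answer new queries, is beyond this sketch.

```python
import numpy as np

def noisy_marginal(records, cols, sizes, eps, rng=None):
    """Measure one marginal of a discrete dataset with the Laplace mechanism.

    records: (N, D) integer array; cols: attribute indices of the marginal;
    sizes: domain size per attribute. One record changes one cell by 1,
    so the histogram has sensitivity 1 and Laplace(1/eps) suffices.
    """
    rng = rng or np.random.default_rng()
    dims = [sizes[c] for c in cols]
    cells = np.ravel_multi_index(records[:, cols].T, dims)
    counts = np.bincount(cells, minlength=int(np.prod(dims)))
    return counts + rng.laplace(scale=1.0 / eps, size=counts.size)
```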
Hierarchical Kriging for multi-fidelity aero-servo-elastic simulators - Application to extreme loads on wind turbines
In the present work, we consider multi-fidelity surrogate modelling to fuse the output of multiple aero-servo-elastic computer simulators of varying complexity. In many instances, predictions from multiple simulators for the same quantity of interest on a wind turbine are available. In this type of situation, there is strong evidence that fusing the output from multiple aero-servo-elastic simulators yields better predictive ability and lower model uncertainty than using any single simulator. Hierarchical Kriging is a multi-fidelity surrogate modelling method in which the Kriging surrogate model of the cheap (low-fidelity) simulator is used as a trend of the Kriging surrogate model of the higher fidelity simulator. We propose a parametric approach to Hierarchical Kriging where the best surrogate models are selected based on evaluating all possible combinations of the available Kriging parameter candidates. The parametric Hierarchical Kriging approach is illustrated by fusing the extreme flapwise bending moment at the blade root of a large multi-megawatt wind turbine as a function of wind velocity, turbulence and wind shear exponent in the presence of model uncertainty and heterogeneously noisy output. The extreme responses are obtained by two widely accepted wind turbine specific aero-servo-elastic computer simulators, FAST and Bladed. With limited high-fidelity simulations, Hierarchical Kriging produces more accurate predictions of validation data compared to conventional Kriging. In addition, contrary to conventional Kriging, Hierarchical Kriging is shown to be a robust surrogate modelling technique because it is less sensitive to the choice of the Kriging parameters and the choice of the estimation error.
0
0
0
1
0
0
Unifying PAC and Regret: Uniform PAC Bounds for Episodic Reinforcement Learning
Statistical performance bounds for reinforcement learning (RL) algorithms can be critical for high-stakes applications like healthcare. This paper introduces a new framework for theoretically measuring the performance of such algorithms called Uniform-PAC, which is a strengthening of the classical Probably Approximately Correct (PAC) framework. In contrast to the PAC framework, the uniform version may be used to derive high probability regret guarantees and so forms a bridge between the two setups that has been missing in the literature. We demonstrate the benefits of the new framework for finite-state episodic MDPs with a new algorithm that is Uniform-PAC and simultaneously achieves optimal regret and PAC guarantees except for a factor of the horizon.
1
0
0
1
0
0
Alternative Semantic Representations for Zero-Shot Human Action Recognition
A proper semantic representation for encoding side information is key to the success of zero-shot learning. In this paper, we explore two alternative semantic representations especially for zero-shot human action recognition: textual descriptions of human actions and deep features extracted from still images relevant to human actions. Such side information is accessible on the Web at little cost, which paves a new way of gaining side information for large-scale zero-shot human action recognition. We investigate different encoding methods to generate semantic representations for human actions from such side information. Based on our zero-shot visual recognition method, we conducted experiments on UCF101 and HMDB51 to evaluate the two proposed semantic representations. The results suggest that our proposed text- and image-based semantic representations outperform traditional attributes and word vectors considerably for zero-shot human action recognition. In particular, the image-based semantic representations yield favourable performance even though the representation is extracted from a small number of images per class.
1
0
0
0
0
0
Being Corrupt Requires Being Clever, But Detecting Corruption Doesn't
We consider a variation of the problem of corruption detection on networks posed by Alon, Mossel, and Pemantle '15. In this model, each vertex of a graph can be either truthful or corrupt. Each vertex reports about the types (truthful or corrupt) of all its neighbors to a central agency, where truthful nodes report the true types they see and corrupt nodes report adversarially. The central agency aggregates these reports and attempts to find a single truthful node. Inspired by real auditing networks, we pose our problem for arbitrary graphs and consider corruption through a computational lens. We identify a key combinatorial parameter of the graph $m(G)$, which is the minimal number of corrupted agents needed to prevent the central agency from identifying a single corrupt node. We give an efficient (in fact, linear time) algorithm for the central agency to identify a truthful node that is successful whenever the number of corrupt nodes is less than $m(G)/2$. On the other hand, we prove that for any constant $\alpha > 1$, it is NP-hard to find a subset of nodes $S$ in $G$ such that corrupting $S$ prevents the central agency from finding one truthful node and $|S| \leq \alpha m(G)$, assuming the Small Set Expansion Hypothesis (Raghavendra and Steurer, STOC '10). We conclude that being corrupt requires being clever, while detecting corruption does not. Our main technical insight is a relation between the minimum number of corrupt nodes required to hide all truthful nodes and a certain notion of vertex separability for the underlying graph. Additionally, this insight lets us design an efficient algorithm for a corrupt party to decide which graphs require the fewest corrupted nodes, up to a multiplicative factor of $O(\log n)$.
1
0
0
0
0
0
Computing the homology of basic semialgebraic sets in weak exponential time
We describe and analyze an algorithm for computing the homology (Betti numbers and torsion coefficients) of basic semialgebraic sets which works in weak exponential time. That is, out of a set of exponentially small measure in the space of data the cost of the algorithm is exponential in the size of the data. All algorithms previously proposed for this problem have a complexity which is doubly exponential (and this is so for almost all data).
1
0
1
0
0
0
Deterministic and Randomized Diffusion based Iterative Generalized Hard Thresholding (DiFIGHT) for Distributed Sparse Signal Recovery
In this paper, we propose DiFIGHT, a distributed iterated hard thresholding algorithm over a network that is built on the diffusion mechanism, and also propose a modification of this algorithm, termed MoDiFIGHT, that has low complexity in terms of communication in the network. We additionally propose four different strategies, termed RP, RNP, RGPr, and RGNPr, that are used to randomly select a subset of nodes that are subsequently activated to take part in the distributed algorithm, so as to reduce the mean number of communications during the run of the distributed algorithm. We present theoretical estimates of the long-run communication per unit time for these different strategies when used by the two proposed algorithms. Also, we present an analysis of the two proposed algorithms and provide provable bounds on their recovery performance with or without the random node selection strategies. Finally, we use numerical studies to show that both when the random strategies are used and when they are not, the proposed algorithms display performance far superior to the distributed IHT algorithm that uses the consensus mechanism.
1
0
0
0
0
0
Iterated doubles of the Joker and their realisability
Let $\mathcal{A}(1)^*$ be the subHopf algebra of the mod~$2$ Steenrod algebra $\mathcal{A}^*$ generated by $\mathrm{Sq}^1$ and $\mathrm{Sq}^2$. The \emph{Joker} is the cyclic $\mathcal{A}(1)^*$-module $\mathcal{A}(1)^*/\mathcal{A}(1)^*\{\mathrm{Sq}^3\}$ which plays a special rôle in the study of $\mathcal{A}(1)^*$-modules. We discuss realisations of the Joker both as an $\mathcal{A}^*$-module and as the cohomology of a spectrum. We also consider analogous $\mathcal{A}(n)^*$-modules for $n\geq2$ and prove realisability results (both stable and unstable) for $n=2,3$ and non-realisability results for $n\geq4$.
0
0
1
0
0
0
On subtrees of the representation tree in rational base numeration systems
Every rational number p/q defines a rational base numeration system in which every integer has a unique finite representation, up to leading zeroes. This work is a contribution to the study of the set of the representations of integers. This prefix-closed subset of the free monoid is naturally represented as a highly non-regular tree. Its nodes are the integers, its edges bear labels taken in {0,1,...,p-1}, and its subtrees are all distinct. We associate with each subtree (or with its root n) three infinite words. The bottom word of n is the lexicographically smallest word that is the label of a branch of the subtree. The top word of n is defined similarly. The span-word of n is the digitwise difference between the latter and the former. First, we show that the set of all the span-words is accepted by an infinite automaton whose underlying graph is essentially the same as the tree itself. Second, we study the function that computes for all n the bottom word associated with n+1 from the one associated with n, and show that it is realised by an infinite sequential transducer whose underlying graph is once again essentially the same as the tree itself. An infinite word may be interpreted as an expansion in base p/q after the radix point, hence evaluated to a real number. If T is a subtree whose root is n, then the evaluations of the labels of the branches of T form an interval of $\mathbb{R}$. The length of this interval is called the span of n and is equal to the evaluation of the span-word of n. The set of all spans is then a subset of $\mathbb{R}$ and we use the preceding construction to study its topological closure. We show that it is an interval when p is greater than or equal to 2q-1, and a Cantor set of measure zero otherwise.
1
0
0
0
0
0
A Practical Method for Solving Contextual Bandit Problems Using Decision Trees
Many efficient algorithms with strong theoretical guarantees have been proposed for the contextual multi-armed bandit problem. However, applying these algorithms in practice can be difficult because they require domain expertise to build appropriate features and to tune their parameters. We propose a new method for the contextual bandit problem that is simple, practical, and can be applied with little or no domain expertise. Our algorithm relies on decision trees to model the context-reward relationship. Decision trees are non-parametric, interpretable, and work well without hand-crafted features. To guide the exploration-exploitation trade-off, we use a bootstrapping approach which abstracts Thompson sampling to non-Bayesian settings. We also discuss several computational heuristics and demonstrate the performance of our method on several datasets.
1
0
0
1
0
0
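The decision-tree bandit above combines two standard ingredients; the sketch below is a hypothetical rendering of that combination (class and parameter names are ours, and the paper's exact bootstrapping scheme may differ): one reward model per arm, refit on a bootstrap resample before each decision, so that acting greedily mimics Thompson sampling.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class BootstrapTreeBandit:
    """One reward tree per arm, refit on a bootstrap resample of that
    arm's history; acting greedily on the resampled fits approximates
    Thompson sampling without an explicit Bayesian posterior."""

    def __init__(self, n_arms):
        self.n_arms = n_arms
        self.X = [[] for _ in range(n_arms)]  # contexts seen per arm
        self.y = [[] for _ in range(n_arms)]  # rewards seen per arm

    def select(self, context, rng):
        scores = []
        for arm in range(self.n_arms):
            if len(self.y[arm]) < 2:          # force initial exploration
                return arm
            n = len(self.y[arm])
            idx = rng.integers(0, n, n)       # one bootstrap resample
            tree = DecisionTreeRegressor(max_depth=5).fit(
                np.asarray(self.X[arm])[idx], np.asarray(self.y[arm])[idx])
            scores.append(tree.predict(np.asarray([context]))[0])
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.X[arm].append(context)
        self.y[arm].append(reward)
```

Refitting a tree per arm at every step is wasteful but keeps the sketch short; a practical version would refit periodically or maintain a fixed ensemble.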
Trainable back-propagated functional transfer matrices
Connections between nodes of fully connected neural networks are usually represented by weight matrices. In this article, functional transfer matrices are introduced as alternatives to the weight matrices: Instead of using real weights, a functional transfer matrix uses real functions with trainable parameters to represent connections between nodes. Multiple functional transfer matrices are then stacked together with bias vectors and activations to form deep functional transfer neural networks. These neural networks can be trained within the framework of back-propagation, based on a revision of the delta rules and the error transmission rule for functional connections. In experiments, it is demonstrated that the revised rules can be used to train a range of functional connections: 20 different functions are applied to neural networks with up to 10 hidden layers, and most of them gain high test accuracies on the MNIST database. It is also demonstrated that a functional transfer matrix with a memory function can roughly memorise a non-cyclical sequence of 400 digits.
1
0
0
1
0
0
Multiple Exciton Generation in Chiral Carbon Nanotubes: Density Functional Theory Based Computation
We use Boltzmann transport equation (BE) to study time evolution of a photo-excited state in a nanoparticle including phonon-mediated exciton relaxation and the multiple exciton generation (MEG) processes, such as exciton-to-biexciton multiplication and biexciton-to-exciton recombination. BE collision integrals are computed using Kadanoff-Baym-Keldysh many-body perturbation theory (MBPT) based on density functional theory (DFT) simulations, including exciton effects. We compute internal quantum efficiency (QE), which is the number of excitons generated from an absorbed photon in the course of the relaxation. We apply this approach to chiral single-wall carbon nanotubes (SWCNTs), such as (6,2), and (6,5). We predict efficient MEG in the (6,2) and (6,5) SWCNTs within the solar spectrum range starting at the $2 E_g$ energy threshold and with QE reaching $\sim 1.6$ at about $3 E_g,$ where $E_g$ is the electronic gap.
0
1
0
0
0
0
Turaev-Viro invariants, colored Jones polynomials and volume
We obtain a formula for the Turaev-Viro invariants of a link complement in terms of values of the colored Jones polynomial of the link. As an application we give the first examples for which the volume conjecture of Chen and the third named author \cite{Chen-Yang} is verified. Namely, we show that the asymptotics of the Turaev-Viro invariants of the Figure-eight knot and the Borromean rings complement determine the corresponding hyperbolic volumes. Our calculations also exhibit new phenomena in the asymptotic behavior of values of the colored Jones polynomials that seem to be predicted neither by the Kashaev-Murakami-Murakami volume conjecture and its various generalizations nor by Zagier's quantum modularity conjecture. We conjecture that the asymptotics of the Turaev-Viro invariants of any link complement determine the simplicial volume of the link, and verify it for all knots with zero simplicial volume. Finally we observe that our simplicial volume conjecture is stable under connected sums and split unions of links.
0
0
1
0
0
0
Credit card fraud detection through parenclitic network analysis
The detection of frauds in credit card transactions is a major topic in financial research, with profound economic implications. While this has hitherto been tackled through data analysis techniques, the resemblance between this problem and others, like the design of recommendation systems and of diagnostic/prognostic medical tools, suggests that a complex network approach may yield important benefits. In this contribution we present a first hybrid data mining / complex network classification algorithm, able to detect illegal instances in a real card transaction data set. It is based on a recently proposed network reconstruction algorithm that allows creating representations of the deviation of one instance from a reference group. We show how the inclusion of features extracted from the network data representation improves the score obtained by a standard, neural network-based classification algorithm, and additionally how this combined approach can outperform a commercial fraud detection system in specific operation niches. Beyond these specific results, this contribution represents a new example of how complex networks and data mining can be integrated as complementary tools, with the former providing a view of the data beyond the capabilities of the latter.
1
1
0
0
0
0
On the Consistency of Graph-based Bayesian Learning and the Scalability of Sampling Algorithms
A popular approach to semi-supervised learning proceeds by endowing the input data with a graph structure in order to extract geometric information and incorporate it into a Bayesian framework. We introduce new theory that gives appropriate scalings of graph parameters that provably lead to a well-defined limiting posterior as the size of the unlabeled data set grows. Furthermore, we show that these consistency results have profound algorithmic implications. When consistency holds, carefully designed graph-based Markov chain Monte Carlo algorithms are proved to have a uniform spectral gap, independent of the number of unlabeled inputs. Several numerical experiments corroborate both the statistical consistency and the algorithmic scalability established by the theory.
1
0
0
1
0
0
Bayesian Optimization for Parameter Tuning of the XOR Neural Network
When applying Machine Learning techniques to problems, one must select model parameters to ensure that the system converges without becoming stuck at a local minimum of the objective function. Tuning these parameters becomes a non-trivial task for large models, and it is not always apparent whether the user has found the optimal parameters. We aim to automate the process of tuning a Neural Network (where only a limited number of parameter search attempts are available) by implementing Bayesian Optimization. In particular, by assigning Gaussian Process priors to the parameter space, we utilize Bayesian Optimization to tune an Artificial Neural Network used to learn the XOR function, with the result of achieving higher prediction accuracy.
1
0
0
1
0
0
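As a concrete illustration of the procedure sketched in the abstract above, here is a minimal Gaussian-process Bayesian optimization loop tuning a single hyperparameter (the learning rate) of a small XOR network; this is our toy reconstruction, not the authors' code, and the kernel, acquisition function, and search range are assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPClassifier

X_xor = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_xor = np.array([0, 1, 1, 0])

def objective(log_lr):
    # Training accuracy of a tiny XOR network for one learning rate.
    net = MLPClassifier(hidden_layer_sizes=(4,), learning_rate_init=10**log_lr,
                        max_iter=2000, random_state=0)
    net.fit(X_xor, y_xor)
    return net.score(X_xor, y_xor)

rng = np.random.default_rng(0)
X_obs = list(rng.uniform(-4, 0, size=3))          # initial design points
y_obs = [objective(x) for x in X_obs]

for _ in range(10):                               # Bayesian optimization loop
    gp = GaussianProcessRegressor().fit(np.array(X_obs)[:, None], y_obs)
    grid = np.linspace(-4, 0, 200)[:, None]
    mu, sd = gp.predict(grid, return_std=True)
    best = max(y_obs)
    z = (mu - best) / np.maximum(sd, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = float(grid[np.argmax(ei)])
    X_obs.append(x_next)
    y_obs.append(objective(x_next))

print("best log10 learning rate:", X_obs[int(np.argmax(y_obs))])
```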
A differential model for growing sandpiles on networks
We consider a system of differential equations of Monge-Kantorovich type which describes the equilibrium configurations of granular material poured by a constant source on a network. Relying on the definition of viscosity solution for Hamilton-Jacobi equations on networks, recently introduced by P.-L. Lions and P. E. Souganidis, we prove existence and uniqueness of the solution of the system and we discuss its numerical approximation. Some numerical experiments are carried out.
0
0
1
0
0
0
DCT-like Transform for Image Compression Requires 14 Additions Only
A low-complexity 8-point orthogonal approximate DCT is introduced. The proposed transform requires no multiplications or bit-shift operations. The derived fast algorithm requires only 14 additions, less than any existing DCT approximation. Moreover, in several image compression scenarios, the proposed transform could outperform the well-known signed DCT, as well as state-of-the-art algorithms.
1
0
0
1
0
0
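The abstract above mentions the well-known signed DCT as a baseline; the snippet below shows why such 0/plus-minus-1 transforms need no multiplications, using the signed DCT as a stand-in (the paper's own 14-addition transform matrix and fast algorithm are not reproduced here).

```python
import numpy as np

N = 8
k, j = np.arange(N)[:, None], np.arange(N)[None, :]
C = np.cos(np.pi * (2 * j + 1) * k / (2 * N))   # exact 8-point DCT-II kernel
T = np.sign(C).astype(int)                      # signed DCT: entries in {-1, +1}

x = np.arange(8.0)
y = T @ x   # every output is a sum/difference of inputs: additions only
```

A fast algorithm such as the paper's then factorizes the matrix so that shared partial sums are computed once, driving the addition count down from the naive 56.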
On realizability of sign patterns by real polynomials
The classical Descartes' rule of signs limits the number of positive roots of a real polynomial in one variable by the number of sign changes in the sequence of its coefficients. One can ask the question which pairs of nonnegative integers $(p,n)$, chosen in accordance with this rule and with some other natural conditions, can be the pairs of numbers of positive and negative roots of a real polynomial with prescribed signs of the coefficients. The paper solves this problem for degree $8$ polynomials.
0
0
1
0
0
0
HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing
Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing data has resulted in a shortage of efficient ultra-large biological sequence alignment approaches capable of coping with different sequence types. Distributed and parallel computing represents a crucial technique for accelerating ultra-large sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. After comparing with most available state-of-the-art methods, our experimental results indicate the following: 1) HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large biological sequences; 2) HAlign-II shows extremely high memory efficiency and scales well with increases in computing resources; 3) HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II, with open-source code and datasets, is available at this http URL.
1
0
0
0
0
0
Search for Evergreens in Science: A Functional Data Analysis
Evergreens in science are papers that display a continual rise in annual citations without decline, at least within a sufficiently long time period. Aiming to better understand evergreens in particular and patterns of citation trajectory in general, this paper develops a functional data analysis method to cluster citation trajectories of a sample of 1699 research papers published in 1980 in the American Physical Society (APS) journals. We propose a functional Poisson regression model for individual papers' citation trajectories, and fit the model to the observed 30-year citations of individual papers by functional principal component analysis and maximum likelihood estimation. Based on the estimated paper-specific coefficients, we apply the K-means clustering algorithm to cluster papers into different groups, for uncovering general types of citation trajectories. The result demonstrates the existence of an evergreen cluster of papers that do not exhibit any decline in annual citations over 30 years.
1
0
0
1
0
0
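A toy end-to-end version of the pipeline in the abstract above can be written in a few lines, with simulated counts standing in for the APS data and a plain polynomial fit standing in for functional principal components; all modelling choices below are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
years = np.arange(30)

# Toy stand-in for the paper's data: per-paper yearly citation counts
# drawn from a Poisson model with a paper-specific log-intensity curve.
coeffs = rng.normal(size=(300, 2))                 # slope/curvature per paper
t = years / 30
lam = np.exp(1.0 + coeffs[:, :1] * t - coeffs[:, 1:] * t ** 2)
counts = rng.poisson(lam)

# Fit each trajectory's log-intensity on a small polynomial basis,
# then cluster the fitted coefficients to uncover trajectory types.
basis = np.vstack([np.ones_like(t), t, t ** 2]).T
beta = np.linalg.lstsq(basis, np.log1p(counts).T, rcond=None)[0].T
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(beta)
```

An "evergreen" cluster would show up as coefficient profiles whose fitted intensity keeps rising over the whole 30-year window.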
Tied Hidden Factors in Neural Networks for End-to-End Speaker Recognition
In this paper we propose a method that models speaker and session variability and is able to generate likelihood ratios using neural networks in an end-to-end phrase-dependent speaker verification system. As in Joint Factor Analysis, the model uses tied hidden variables to model speaker and session variability, together with a MAP adaptation of some of the parameters of the model. In the training procedure our method jointly estimates the network parameters and the values of the speaker and channel hidden variables. This is done in a two-step backpropagation algorithm: first the network weights and factor loading matrices are updated, and then the hidden variables, whose gradients are calculated by aggregating the corresponding speaker or session frames, since these hidden variables are tied. The last layer of the network is defined as a linear regression probabilistic model whose inputs are the previous layer outputs. This choice has the advantage that it produces likelihoods and, additionally, that it can be adapted during enrolment using MAP without the need for gradient optimization. The decisions are made based on the ratio of the output likelihoods of two neural network models, speaker adapted and universal background model. The method was evaluated on the RSR2015 database.
1
0
0
0
0
0
An Optimal Control Formulation of Pulse-Based Control Using Koopman Operator
In many applications, and in systems/synthetic biology in particular, it is desirable to compute control policies that force the trajectory of a bistable system from one equilibrium (the initial point) to another equilibrium (the target point), or in other words to solve the switching problem. It was recently shown that, for monotone bistable systems, this problem admits easy-to-implement open-loop solutions in terms of temporal pulses (i.e., step functions of fixed length and fixed magnitude). In this paper, we develop this idea further and formulate a problem of convergence to an equilibrium from an arbitrary initial point. We show that this problem can be solved using a static optimization problem in the case of monotone systems. Changing the initial point to an arbitrary state allows us to build closed-loop, event-based or open-loop policies for the switching/convergence problems. In our derivations we exploit the Koopman operator, which offers a linear infinite-dimensional representation of an autonomous nonlinear system. One of the main advantages of using the Koopman operator is the powerful computational tools developed for this framework. Besides the availability of numerical solutions, the switching/convergence problem can also serve as a building block for solving more complicated control problems and can potentially be applied to non-monotone systems. We illustrate this argument on the problem of synchronizing cardiac cells by defibrillation. Potentially, our approach can be extended to problems with different parametrizations of control signals, since the only fundamental limitation is the finite-time application of the control signal.
1
0
1
0
0
0
A Converse to Banach's Fixed Point Theorem and its CLS Completeness
Banach's fixed point theorem for contraction maps has been widely used to analyze the convergence of iterative methods in non-convex problems. It is a common experience, however, that iterative maps fail to be globally contracting under the natural metric in their domain, making the applicability of Banach's theorem limited. We explore how generally we can apply Banach's fixed point theorem to establish the convergence of iterative methods when pairing it with carefully designed metrics. Our first result is a strong converse of Banach's theorem, showing that it is a universal analysis tool for establishing global convergence of iterative methods to unique fixed points, and for bounding their convergence rate. In other words, we show that, whenever an iterative map globally converges to a unique fixed point, there exists a metric under which the iterative map is contracting and which can be used to bound the number of iterations until convergence. We illustrate our approach on the widely used power method, providing a new way of bounding its convergence rate through contraction arguments. We next consider the computational complexity of Banach's fixed point theorem. Making the proof of our converse theorem constructive, we show that computing a fixed point whose existence is guaranteed by Banach's fixed point theorem is CLS-complete. We thus provide the first natural complete problem for the class CLS, which was defined in [Daskalakis, Papadimitriou 2011] to capture the complexity of problems such as P-matrix LCP, computing KKT-points, and finding mixed Nash equilibria in congestion and network coordination games.
1
0
1
1
0
0
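A small experiment makes the converse concrete: iterate a map and read off the empirical contraction factor that some metric must certify whenever the iteration converges to a unique fixed point. The sketch below is our own construction, using the power method from the abstract above as the iterative map.

```python
import numpy as np

def fixed_point_iterate(f, x0, tol=1e-10, max_iter=10_000):
    """Iterate x <- f(x); also report the ratios of successive step sizes,
    an empirical proxy for the contraction factor under some metric."""
    x, ratios, step_prev = x0, [], None
    for _ in range(max_iter):
        x_new = f(x)
        step = np.linalg.norm(x_new - x)
        if step_prev is not None and step_prev > 0:
            ratios.append(step / step_prev)
        if step < tol:
            return x_new, ratios
        x, step_prev = x_new, step
    return x, ratios

# The power method as a fixed-point map on the unit sphere.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
f = lambda v: (A @ v) / np.linalg.norm(A @ v)
v_star, ratios = fixed_point_iterate(f, np.array([1.0, 0.0]))
print(v_star, ratios[-1])   # ratio approaches |lambda_2 / lambda_1|
```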
Improved Discrete RRT for Coordinated Multi-robot Planning
This paper addresses the problem of coordination of a fleet of mobile robots - the problem of finding an optimal set of collision-free trajectories for the individual robots in the fleet. Many approaches have been introduced during the last decades, but only a minority of them are practically applicable, i.e. fast, producing near-optimal solutions, and complete. We propose a novel probabilistic approach based on the Rapidly Exploring Random Tree algorithm (RRT) by significantly improving its multi-robot variant for discrete environments. The presented experimental results show that the proposed approach is fast enough to solve problems with tens of robots in seconds. Although the solutions generated by the approach are slightly worse than those of one of the best state-of-the-art algorithms presented in (ter Mors et al., 2010), it solves problems where ter Mors's algorithm fails.
1
0
0
0
0
0
A note on the stratification by automorphisms of smooth plane curves of genus 6
In this note, we give a so-called representative classification for the strata by automorphism group of smooth $\bar{k}$-plane curves of genus $6$, where $\bar{k}$ is a fixed separable closure of a field $k$ of characteristic $p = 0$ or $p > 13$. We start with a classification already obtained by the first author and we use standard techniques. Interestingly, on the way to obtaining these families for the different strata, we find two remarkable phenomena that did not appear before. One is the existence of a final stratum of plane curves that is not $0$-dimensional. At first sight it may sound odd, but we will see that this is a normal situation for higher degrees and we will give an explanation for it. We explicitly describe representative families for all strata, except for the stratum with automorphism group $\mathbb{Z}/5\mathbb{Z}$. Here we find the second difference with the lower genus cases, where the previous techniques do not fully work. Fortunately, we are still able to prove the existence of such a family by applying a version of Lüroth's theorem in dimension $2$.
0
0
1
0
0
0
Relativistic verifiable delegation of quantum computation
The importance of being able to verify quantum computation delegated to remote servers increases with the recent development of quantum technologies. In some of the proposed protocols for this task, a client delegates her quantum computation to non-communicating servers. The fact that the servers do not communicate is not physically justified, yet it is essential for the proof of security of such protocols. To the best of our knowledge, we present in this work the first verifiable delegation scheme where a classical client delegates her quantum computation to two entangled servers that are allowed to communicate, but respecting the plausible assumption that information cannot be propagated faster than the speed of light. We achieve this result by proposing the first one-round two-prover game for the Local Hamiltonian problem where the provers only need polynomial-time quantum computation and access to copies of the ground state of the Hamiltonian.
1
0
0
0
0
0
Finiteness of étale fundamental groups by reduction modulo $p$
We introduce a spreading out technique to deduce finiteness results for étale fundamental groups of complex varieties by characteristic $p$ methods, and apply this to recover a finiteness result proven recently for local fundamental groups in characteristic $0$ using birational geometry.
0
0
1
0
0
0
Morphological Error Detection in 3D Segmentations
Deep learning algorithms for connectomics rely upon localized classification, rather than overall morphology. This leads to a high incidence of erroneously merged objects. Humans, by contrast, can easily detect such errors by acquiring intuition for the correct morphology of objects. Biological neurons have complicated and variable shapes, which are challenging to learn, and merge errors take a multitude of different forms. We present an algorithm, MergeNet, that shows 3D ConvNets can, in fact, detect merge errors from high-level neuronal morphology. MergeNet follows unsupervised training and operates across datasets. We demonstrate the performance of MergeNet both on a variety of connectomics data and on a dataset created from merged MNIST images.
1
0
0
1
0
0
Structural Feature Selection for Event Logs
We consider the problem of classifying business process instances based on structural features derived from event logs. The main motivation is to provide machine learning based techniques with quick response times for interactive computer assisted root cause analysis. In particular, we create structural features from process mining, such as activity and transition occurrence counts and the ordering of activities, to be evaluated as potential features for classification. We show that adding such structural features increases the amount of available information, thus potentially increasing classification accuracy. However, there is an inherent trade-off, as using too many features leads to excessively long run-times for machine learning classification models. One way to improve the machine learning algorithms' run-time is to select only a small number of features with a feature selection algorithm. However, the run-time required by the feature selection algorithm must also be taken into account, and the classification accuracy should not suffer too much from the feature selection. The main contributions of this paper are as follows: First, we propose and compare six different feature selection algorithms by means of an experimental setup comparing their classification accuracy and achievable response times. Second, we discuss the potential use of feature selection results for computer assisted root cause analysis as well as the properties of different types of structural features in the context of feature selection.
1
0
0
1
0
0
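For concreteness, here is a minimal sketch of the kind of structural features named in the abstract above (activity and transition occurrence counts); the feature naming scheme is ours, and a real process-mining pipeline would derive these from a full event log rather than a single trace.

```python
from collections import Counter

def structural_features(trace):
    """Structural features for one case: activity occurrence counts and
    direct-succession (transition) occurrence counts."""
    feats = Counter(("act", a) for a in trace)
    feats.update(("trans", a, b) for a, b in zip(trace, trace[1:]))
    return feats

print(structural_features(["register", "check", "check", "approve"]))
# e.g. ('act', 'check') -> 2, ('trans', 'check', 'approve') -> 1, ...
```

Each case's Counter can then be vectorized (one column per observed feature key) and handed to any standard classifier, which is where the feature selection trade-off discussed above enters.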
Counting $G$-Extensions by Discriminant
The problem of analyzing the number of number field extensions $L/K$ with bounded (relative) discriminant has been the subject of renewed interest in recent years, with significant advances made by Schmidt, Ellenberg-Venkatesh, Bhargava, Bhargava-Shankar-Wang, and others. In this paper, we use the geometry of numbers and invariant theory of finite groups, in a manner similar to Ellenberg and Venkatesh, to give an upper bound on the number of extensions $L/K$ with fixed degree, bounded relative discriminant, and specified Galois closure.
0
0
1
0
0
0
Robust and Efficient Transfer Learning with Hidden-Parameter Markov Decision Processes
We introduce a new formulation of the Hidden Parameter Markov Decision Process (HiP-MDP), a framework for modeling families of related tasks using low-dimensional latent embeddings. Our new framework correctly models the joint uncertainty in the latent parameters and the state space. We also replace the original Gaussian Process-based model with a Bayesian Neural Network, enabling more scalable inference. Thus, we expand the scope of the HiP-MDP to applications with higher dimensions and more complex dynamics.
1
0
0
1
0
0
Lattice Gaussian Sampling by Markov Chain Monte Carlo: Bounded Distance Decoding and Trapdoor Sampling
Sampling from the lattice Gaussian distribution plays an important role in various research fields. In this paper, the Markov chain Monte Carlo (MCMC)-based sampling technique is advanced on several fronts. Firstly, the spectral gap for the independent Metropolis-Hastings-Klein (MHK) algorithm is derived, which is then extended to Peikert's algorithm and rejection sampling; we show that independent MHK exhibits faster convergence. Then, the performance of bounded distance decoding using MCMC is analyzed, revealing a flexible trade-off between the decoding radius and complexity. MCMC is further applied to trapdoor sampling, again offering a trade-off between security and complexity. Finally, the independent multiple-try Metropolis-Klein (MTMK) algorithm is proposed to enhance the convergence rate. The proposed algorithms allow parallel implementation, which is beneficial for practical applications.
1
0
0
0
0
0
$S$-Leaping: An adaptive, accelerated stochastic simulation algorithm, bridging $τ$-leaping and $R$-leaping
We propose the $S$-leaping algorithm for the acceleration of Gillespie's stochastic simulation algorithm that combines the advantages of the two main accelerated methods, the $\tau$-leaping and $R$-leaping algorithms. These algorithms are known to be efficient under different conditions: the $\tau$-leaping is efficient for non-stiff systems or systems with partial equilibrium, while the $R$-leaping performs better in stiff systems thanks to an efficient sampling procedure. However, even a small change in a system's setup can critically affect the nature of the simulated system and thus reduce the efficiency of an accelerated algorithm. The proposed algorithm combines the efficient time step selection from the $\tau$-leaping with the effective sampling procedure from the $R$-leaping algorithm. The $S$-leaping is shown to maintain its efficiency under different conditions, and in the case of large and stiff systems, or systems with fast dynamics, the $S$-leaping outperforms both methods. We demonstrate the performance and the accuracy of the $S$-leaping in comparison with the $\tau$-leaping and $R$-leaping on a number of benchmark systems involving biological reaction networks.
0
0
0
0
1
0
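For readers unfamiliar with the leaping family, the sketch below shows a plain $\tau$-leaping step, the first of the two ingredients the abstract combines; the $S$-leaping itself additionally adopts $R$-leaping's sampling procedure, which is not reproduced here. The example system and rate constant are arbitrary.

```python
import numpy as np

def tau_leap_step(x, rates, stoich, tau, rng):
    """One tau-leaping update: each reaction fires a Poisson number of
    times with mean propensity * tau, and the state jumps accordingly."""
    a = rates(x)                               # propensities a_j(x)
    k = rng.poisson(a * tau)                   # firing counts per reaction
    return np.maximum(x + stoich.T @ k, 0)     # crude clip against negatives

# Toy dimerization 2A -> B with propensity c * x_A * (x_A - 1) / 2.
stoich = np.array([[-2, 1]])                   # one reaction, two species
rates = lambda x: np.array([0.001 * x[0] * (x[0] - 1) / 2])
rng = np.random.default_rng(0)
x = np.array([1000, 0])
for _ in range(100):
    x = tau_leap_step(x, rates, stoich, 0.1, rng)
print(x)
```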
Diclofenac sodium ion exchange resin complex loaded melt cast films for sustained release ocular delivery
The goal of the present study is to develop polymeric matrix films loaded with a combination of free diclofenac sodium (DFSfree) and DFS:Ion exchange resin complexes (DFS:IR) for immediate and sustained release profiles, respectively. Effect of ratio of DFS and IR on the DFS:IR complexation efficiency was studied using batch processing. DFS:IR complex, DFSfree, or a combination of DFSfree+DFS:IR loaded matrix films were prepared by melt-cast technology. DFS content was 20% w/w in these matrix films. In vitro transcorneal permeability from the film formulations were compared against DFS solution, using a side-by-side diffusion apparatus, over a 6 h period. Ocular disposition of DFS from the solution, films and corresponding suspensions were evaluated in conscious New Zealand albino rabbits, 4 h and 8 h post-topical administration. All in vivo studies were carried out as per the University of Mississippi IACUC approved protocol. Complexation efficiency of DFS:IR was found to be 99% with a 1:1 ratio of DFS:IR. DFS release from DFS:IR suspension and the film were best-fit to a Higuchi model. In vitro transcorneal flux with the DFSfree+DFS:IR(1:1)(1 + 1) was twice that of only DFS:IR(1:1) film. In vivo, DFS solution and DFS:IR(1:1) suspension formulations were not able to maintain therapeutic DFS levels in the aqueous humor (AH). Both DFSfree and DFSfree+DFS:IR(1:1)(3 + 1) loaded matrix films were able to achieve and maintain high DFS concentrations in the AH, but elimination of DFS from the ocular tissues was much faster with the DFSfree formulation. DFSfree+DFS:IR combination loaded matrix films were able to deliver and maintain therapeutic DFS concentrations in the anterior ocular chamber for up to 8 h. Thus, free drug/IR complex loaded matrix films could be a potential topical ocular delivery platform for achieving immediate and sustained release characteristics.
0
1
0
0
0
0
Method of precision increase by averaging with application to numerical differentiation
If several independent algorithms for a computer-calculated quantity exist, then one can expect their results (which differ because of numerical errors) to follow approximately a Gaussian distribution. The mean of this distribution, interpreted as the value of the quantity of interest, can be determined with better precision than the precision provided by a single algorithm. Often, when not enough independent algorithms are available, one can proceed differently: many practical algorithms introduce a bias using a parameter, e.g. a small but finite number to compute a limit or a large but finite number (cutoff) to approximate infinity. One may vary such a parameter of a single algorithm and interpret the resulting numbers as generated by several algorithms. Numerical evidence for the validity of this approach is shown for differentiation.
0
0
1
0
0
0
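The differentiation example from the abstract above fits in a few lines: treat central differences at several step sizes as if they came from independent algorithms and average them. The step-size range below is an arbitrary choice for illustration.

```python
import numpy as np

def derivative_by_averaging(f, x, hs):
    """Central-difference estimates at several step sizes h, treated as a
    sample of near-independent algorithms; return their mean and the
    standard error of that mean."""
    estimates = np.array([(f(x + h) - f(x - h)) / (2 * h) for h in hs])
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(len(hs))

hs = np.geomspace(1e-6, 1e-4, 20)
mean, sem = derivative_by_averaging(np.sin, 1.0, hs)
print(mean, np.cos(1.0), sem)   # averaged estimate vs exact derivative
```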
On-line Building Energy Optimization using Deep Reinforcement Learning
Unprecedented high volumes of data are becoming available with the growth of the advanced metering infrastructure. These are expected to benefit planning and operation of the future power system, and to help customers transition from a passive to an active role. In this paper, we explore for the first time in the smart grid context the benefits of using Deep Reinforcement Learning, a hybrid class of methods that combines Reinforcement Learning with Deep Learning, to perform on-line optimization of schedules for building energy management systems. The learning procedure was explored using two methods, Deep Q-learning and Deep Policy Gradient, both of them being extended to perform multiple actions simultaneously. The proposed approach was validated on the large-scale Pecan Street Inc. database. This high-dimensional database includes information about photovoltaic power generation, electric vehicles and building appliances. Moreover, these on-line energy scheduling strategies could be used to provide real-time feedback to consumers to encourage more efficient use of electricity.
1
0
1
0
0
0
A KL-LUCB Bandit Algorithm for Large-Scale Crowdsourcing
This paper focuses on best-arm identification in multi-armed bandits with bounded rewards. We develop an algorithm that is a fusion of lil-UCB and KL-LUCB, offering the best qualities of the two algorithms in one method. This is achieved by proving a novel anytime confidence bound for the mean of bounded distributions, which is the analogue of the LIL-type bounds recently developed for sub-Gaussian distributions. We corroborate our theoretical results with numerical experiments based on the New Yorker Cartoon Caption Contest.
0
0
1
1
0
0
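A minimal LUCB-style loop conveys the flavor of the fused algorithm; note that the confidence radius below is a generic anytime Hoeffding-type bound we substitute for the paper's KL/LIL bound, so constants and tightness differ from the actual method.

```python
import numpy as np

def lucb_best_arm(pull, n_arms, delta=0.05, max_pulls=100_000):
    """LUCB-style best-arm identification: repeatedly sample the empirical
    best arm and its strongest challenger until their confidence intervals
    separate. The radius is a generic anytime bound, not the paper's."""
    counts = np.ones(n_arms)
    means = np.array([pull(a) for a in range(n_arms)], dtype=float)
    for _ in range(n_arms, max_pulls):
        rad = np.sqrt(np.log(4 * n_arms * counts ** 2 / delta) / (2 * counts))
        best = int(np.argmax(means))
        ucb = means + rad
        ucb[best] = -np.inf                      # exclude best from challengers
        challenger = int(np.argmax(ucb))
        if means[best] - rad[best] >= means[challenger] + rad[challenger]:
            return best                          # intervals separated: stop
        for a in (best, challenger):             # pull both, update running means
            means[a] = (means[a] * counts[a] + pull(a)) / (counts[a] + 1)
            counts[a] += 1
    return int(np.argmax(means))

rng = np.random.default_rng(0)
arm_means = [0.3, 0.5, 0.6]
print(lucb_best_arm(lambda a: rng.binomial(1, arm_means[a]), 3))
```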
Refractive index tomography with structured illumination
This work introduces a novel reinterpretation of structured illumination (SI) microscopy for coherent imaging that allows three-dimensional imaging of complex refractive index (RI). To do so, we show that coherent SI is mathematically equivalent to a superposition of angled illuminations. It follows that raw acquisitions for standard SI-enhanced quantitative-phase images can be processed into complex electric-field maps describing sample diffraction under angled illuminations. Standard diffraction tomography (DT) computation can then be used to reconstruct the sample 3D RI distribution at sub-diffraction resolutions. We demonstrate this concept by using a SI-quantitative-phase imaging system to computationally reconstruct 3D RI distributions of human breast (MCF-7) and colorectal (HT-29) adenocarcinoma cells. Our experimental setup uses a spatial light modulator to generate structured patterns at the sample and collects angle-dependent sample diffraction using a common-path, off-axis interference configuration with no moving components. Furthermore, this technique holds promise for easy pairing with SI fluorescence microscopy, and important future extensions may include multimodal, sub-diffraction resolution, 3D RI and fluorescent visualizations.
0
1
0
0
0
0
Foreign English Accent Adjustment by Learning Phonetic Patterns
State-of-the-art automatic speech recognition (ASR) systems struggle with the lack of data for rare accents. For sufficiently large datasets, neural engines tend to outshine statistical models in most natural language processing problems. However, a speech accent remains a challenge for both approaches. Phonologists manually create general rules describing a speaker's accent, but their results remain underutilized. In this paper, we propose a model that automatically retrieves phonological generalizations from a small dataset. This method leverages the difference in pronunciation between a particular dialect and General American English (GAE) and creates new accented samples of words. The proposed model is able to learn all generalizations that previously were manually obtained by phonologists. We use this statistical method to generate a million phonological variations of words from the CMU Pronouncing Dictionary and train a sequence-to-sequence RNN to recognize accented words with 59% accuracy.
1
0
0
1
0
0
The geometry of hypothesis testing over convex cones: Generalized likelihood tests and minimax radii
We consider a compound testing problem within the Gaussian sequence model in which the null and alternative are specified by a pair of closed, convex cones. Such cone testing problems arise in various applications, including detection of treatment effects, trend detection in econometrics, signal detection in radar processing, and shape-constrained inference in non-parametric statistics. We provide a sharp characterization of the GLRT testing radius up to a universal multiplicative constant in terms of the geometric structure of the underlying convex cones. When applied to concrete examples, this result reveals some interesting phenomena that do not arise in the analogous problems of estimation under convex constraints. In particular, in contrast to estimation error, the testing error no longer depends purely on the problem complexity via a volume-based measure (such as metric entropy or Gaussian complexity); other geometric properties of the cones also play an important role. To address the issue of optimality, we prove information-theoretic lower bounds for the minimax testing radius, again in terms of geometric quantities. Our general theorems are illustrated by examples including the cases of monotone and orthant cones, and involve some results of independent interest.
1
0
1
1
0
0
Permutation methods for factor analysis and PCA
Researchers often have datasets measuring features $x_{ij}$ of samples, such as test scores of students. In factor analysis and PCA, these features are thought to be influenced by unobserved factors, such as skills. Can we determine how many components affect the data? This is an important problem, because it has a large impact on all downstream data analysis. Consequently, many approaches have been developed to address it. Parallel Analysis is a popular permutation method. It works by randomly scrambling each feature of the data. It selects components if their singular values are larger than those of the permuted data. Despite widespread use in leading textbooks and scientific publications, as well as empirical evidence for its accuracy, it currently has no theoretical justification. In this paper, we show that the parallel analysis permutation method consistently selects the large components in certain high-dimensional factor models. However, it does not select the smaller components. The intuition is that permutations keep the noise invariant, while "destroying" the low-rank signal. This provides justification for permutation methods in PCA and factor models under some conditions. Our work uncovers drawbacks of permutation methods, and paves the way to improvements.
0
0
1
1
0
0
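Parallel Analysis as described above is simple enough to state in code; this is a generic rendition with an arbitrary permutation count and quantile, not the authors' experimental setup.

```python
import numpy as np

def parallel_analysis(X, n_perm=20, quantile=0.95, rng=None):
    """Select components whose singular values exceed those obtained after
    independently permuting each column (feature) of the data, which keeps
    the noise marginals intact while destroying low-rank structure."""
    rng = rng or np.random.default_rng(0)
    sv = np.linalg.svd(X, compute_uv=False)
    null = np.empty((n_perm, len(sv)))
    for b in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null[b] = np.linalg.svd(Xp, compute_uv=False)
    thresh = np.quantile(null, quantile, axis=0)
    return int(np.sum(sv > thresh))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50)) + 3 * np.outer(rng.normal(size=200),
                                              rng.normal(size=50))
print(parallel_analysis(X, rng=rng))   # should report roughly one component
```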
Adapting the CVA model to Leland's framework
We consider the framework proposed by Burgard and Kjaer (2011), which derives the PDE that governs the price of an option including bilateral counterparty risk and funding. We extend this work by relaxing the assumption of absence of transaction costs in the hedging portfolio, proposing a cost proportional to the amount of assets traded and the traded price. After deriving the nonlinear PDE, we prove the existence of a solution for the corresponding initial-boundary value problem. Moreover, we develop a numerical scheme that allows us to find the solution of the PDE by setting different values for each parameter of the model. To understand the impact of each variable within the model, we analyze the Greeks of the option and the sensitivity of the price to changes in all the risk factors.
0
0
0
0
0
1
Blue Sky Ideas in Artificial Intelligence Education from the EAAI 2017 New and Future AI Educator Program
The 7th Symposium on Educational Advances in Artificial Intelligence (EAAI'17, co-chaired by Sven Koenig and Eric Eaton) launched the EAAI New and Future AI Educator Program to support the training of early-career university faculty, secondary school faculty, and future educators (PhD candidates or postdocs who intend a career in academia). As part of the program, awardees were asked to address one of the following "blue sky" questions: * How could/should Artificial Intelligence (AI) courses incorporate ethics into the curriculum? * How could we teach AI topics at an early undergraduate or a secondary school level? * AI has the potential for broad impact to numerous disciplines. How could we make AI education more interdisciplinary, specifically to benefit non-engineering fields? This paper is a collection of their responses, intended to help motivate discussion around these issues in AI education.
1
0
0
0
0
0
Network Analysis of Particles and Grains
The arrangements of particles and forces in granular materials have a complex organization on multiple spatial scales that ranges from local structures to mesoscale and system-wide ones. This multiscale organization can affect how a material responds or reconfigures when exposed to external perturbations or loading. The theoretical study of particle-level, force-chain, domain, and bulk properties requires the development and application of appropriate physical, mathematical, statistical, and computational frameworks. Traditionally, granular materials have been investigated using particulate or continuum models, each of which tends to be implicitly agnostic to multiscale organization. Recently, tools from network science have emerged as powerful approaches for probing and characterizing heterogeneous architectures across different scales in complex systems, and a diverse set of methods have yielded fascinating insights into granular materials. In this paper, we review work on network-based approaches to studying granular matter and explore the potential of such frameworks to provide a useful description of these systems and to enhance understanding of their underlying physics. We also outline a few open questions and highlight particularly promising future directions in the analysis and design of granular matter and other kinds of material networks.
0
1
1
0
0
0
Perfect Sequences and Arrays over the Unit Quaternions
We introduce several new constructions for perfect periodic autocorrelation sequences and arrays over the unit quaternions. This paper uses both mathematical proofs and computer experiments to prove that the (bounded) array constructions have perfect periodic autocorrelation. Furthermore, the first sequence construction generates odd-perfect sequences of unbounded lengths, with good ZCZ.
1
0
1
0
0
0
Schwarzian conditions for linear differential operators with selected differential Galois groups (unabridged version)
We show that non-linear Schwarzian differential equations emerging from covariance symmetry conditions imposed on linear differential operators with hypergeometric function solutions can be generalized to arbitrary order linear differential operators with polynomial coefficients having selected differential Galois groups. For order-three and order-four linear differential operators we show that this pullback invariance up to conjugation eventually reduces to symmetric powers of an underlying order-two operator. We give, precisely, the conditions to have modular correspondence solutions for such Schwarzian differential equations, which was an open question in a previous paper. We analyze in detail a pullbacked hypergeometric example generalizing modular forms, that ushers in a pullback invariance up to operator homomorphisms. We expect this new concept to be well-suited to physics and enumerative combinatorics. We finally consider the more general problem of the equivalence of two different order-four linear differential Calabi-Yau operators up to pullbacks and conjugation, and clarify the cases where they have the same Yukawa couplings.
0
1
0
0
0
0
Robust Bayesian Filtering and Smoothing Using Student's t Distribution
State estimation in heavy-tailed process and measurement noise is an important challenge that must be addressed in, e.g., tracking scenarios with agile targets and outlier-corrupted measurements. The performance of the Kalman filter (KF) can deteriorate in such applications because of the close relation to the Gaussian distribution. Therefore, this paper describes the use of Student's t distribution to develop robust, scalable, and simple filtering and smoothing algorithms. After a discussion of Student's t distribution, exact filtering in linear state-space models with t noise is analyzed. Intermediate approximation steps are used to arrive at filtering and smoothing algorithms that closely resemble the KF and the Rauch-Tung-Striebel (RTS) smoother except for a nonlinear measurement-dependent matrix update. The required approximations are discussed and an undesirable behavior of moment matching for t densities is revealed. A favorable approximation based on minimization of the Kullback-Leibler divergence is presented. Because of its relation to the KF, some properties and algorithmic extensions are inherited by the t filter. Instructive simulation examples demonstrate the performance and robustness of the novel algorithms.
1
0
0
1
0
0
Multi-pass configuration for Improved Squeezed Vacuum Generation in Hot Rb Vapor
We study a squeezed vacuum field generated in hot Rb vapor via the polarization self-rotation effect. Our previous experiments showed that the amount of observed squeezing may be limited by the contamination of the squeezed vacuum output with higher-order spatial modes, also generated inside the cell. Here, we demonstrate that the squeezing can be improved by making the light interact several times with a less dense atomic ensemble. With optimization of some parameters we can achieve up to -2.6 dB of squeezing in the multi-pass case, which is 0.6 dB improvement compared to the single-pass experimental configuration. Our results show that other than the optical depth of the medium, the spatial mode structure and cell configuration also affect the squeezing level.
0
1
0
0
0
0
Improved Computation of Involutive Bases
In this paper, we describe improved algorithms to compute Janet and Pommaret bases. To this end, based on the method proposed by Moller et al., we present a more efficient variant of Gerdt's algorithm (than the algorithm presented by Gerdt-Hashemi-M.Alizadeh) to compute minimal involutive bases. Further, by using the involutive version of Hilbert driven technique, along with the new variant of Gerdt's algorithm, we modify the algorithm, given by Seiler, to compute a linear change of coordinates for a given homogeneous ideal so that the new ideal (after performing this change) possesses a finite Pommaret basis. All the proposed algorithms have been implemented in Maple and their efficiency is discussed via a set of benchmark polynomials.
1
0
1
0
0
0
Dynamic Stochastic Approximation for Multi-stage Stochastic Optimization
In this paper, we consider multi-stage stochastic optimization problems with convex objectives and conic constraints at each stage. We present a new stochastic first-order method, namely the dynamic stochastic approximation (DSA) algorithm, for solving these types of stochastic optimization problems. We show that DSA can achieve an optimal ${\cal O}(1/\epsilon^4)$ rate of convergence in terms of the total number of required scenarios when applied to a three-stage stochastic optimization problem. We further show that this rate of convergence can be improved to ${\cal O}(1/\epsilon^2)$ when the objective function is strongly convex. We also discuss variants of DSA for solving more general multi-stage stochastic optimization problems with the number of stages $T > 3$. The developed DSA algorithms only need to go through the scenario tree once in order to compute an $\epsilon$-solution of the multi-stage stochastic optimization problem. To the best of our knowledge, this is the first time that stochastic approximation type methods are generalized for multi-stage stochastic optimization with $T \ge 3$.
1
0
1
1
0
0
Stability Analysis of Piecewise Affine Systems with Multi-model Model Predictive Control
Constrained model predictive control (MPC) is a widely used control strategy, which employs moving horizon-based on-line optimisation to compute the optimum path of the manipulated variables. Nonlinear MPC can utilize detailed models but is computationally expensive; on the other hand, linear MPC may not be adequate. Piecewise affine (PWA) models can describe the underlying nonlinear dynamics more accurately; therefore, they can provide a viable trade-off through their use in multi-model linear MPC configurations, which avoid integer programming. However, such schemes may introduce uncertainty affecting the closed-loop stability. In this work, we propose an input-to-output stability analysis for closed-loop systems consisting of PWA models, where an observer and multi-model linear MPC are applied together, under unstructured uncertainty. Integral quadratic constraints (IQCs) are employed to assess the robustness of MPC under uncertainty. We create a model pool by performing linearisation on selected transient points. All the possible uncertainties and nonlinearities (including the controller) can be introduced in the framework, assuming that they admit the appropriate IQCs, whilst the dissipation inequality can provide necessary conditions incorporating IQCs. We demonstrate the existence of static multipliers, which can reduce the conservatism of the stability analysis significantly. The proposed methodology is demonstrated through two engineering case studies.
1
0
0
0
0
0