--- title: "Bayesian Inference with Log-normal Data" author: "Aldo Gardini, Carlo Trivisano and Enrico Fabrizi" date: "`r Sys.Date()`" bibliography: bibliography.bib output: rmarkdown::html_vignette vignette: > %\VignetteIndexEntry{Bayesian Inference with Log-normal Data} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```{r setup, include = FALSE} knitr::opts_chunk$set( collapse = TRUE, comment = "#>" ) library(BayesLN) ``` ## Introduction Inference under the log-normal assumption for the data looks simple as parameters can be estimated taking the log- transform and then working with normality of the transformed data. Estimation of descriptors of the variable in question before transformation (such as median, mean, quantiles, variance, etc...) involve back-transformation can be critical as naive estimators can perform poorly. Here we focus on the estimation of a log-normal mean and quantiles and on the prediction of the conditional expectation in a lognormal linear and linear mixed models. In all these cases these estimates can be defined as functionals (involving the exp) of parameters estimated on log-transformed data. In the first place, back-transforming involves bias whenever the transformation is nonlinear, but this is not the only problem. In fact, one may suppose that this inferential issue is easily overcome in the Bayesian framework by sampling directly from the posterior distributions of the target functional, but there can be problems with the posteriors obtained assuming most of the priors popular in the analysis of normal data. If Bayes estimator under the quadratic loss function are to be considered (i.e., the posterior mean), the finiteness of the posterior moments must be assured at least up to the second order, to obtain the posterior variance too. The existence of such posterior moments, which is crucial to summarize the posterior distribution using squared loss, is often taken for granted, but this may not be the case for many prior choices. Furthermore, if estimation is performed through MCMC methods the non-existence of posterior moments cannot be easily detected. When an improper prior is fixed, a lot of care is usually taken in the properness of the posterior distribution. Even if the distribution is proper, it is not guaranteed that its moments are finite. This is the case with the Bayes estimators of log-normal functionals when the analysis is based on the choice of popular priors, both improper and proper (like the inverse gamma for the log-scale variance). For the estimation of the mean of a log-normal variable, this issue was first highlighted by @zellner1971bayesian and then the issues affecting the Bayesian estimation of the log-normal mean were faced by @fabrizi2012bayes and @fabrizi2016bayesian, wherein the log-normal linear model was considered. The core of their proposal consists of specifying a generalized inverse Gaussian (GIG) prior for the variance in the log-scale $\sigma^2$. In this way, existence conditions for the posterior moments of the target functionals to estimate were found and a careful inferential procedure in the Bayesian framework was proposed. Functions that allows to carry out Bayesian inference for important functionals under the log-normality assumption are included in the `BayesLN` package. With respect to the theory covered in Fabrizi and Trivisano (2012, 2016), the `BayesLN` package offers tools for the estimation of quantiles [@gardini2020bayesian] and means under mixed models too. 
## Some theoretical results

In this section, a brief overview of the theoretical problems is presented, followed by some key results, in order to motivate and describe the usefulness of the `R` functions implemented in the package.

### Model with only fixed effects

The conditional estimation problem is faced directly, since the unconditional case can easily be deduced as a special case. In this context, a random sample of size $n$ is observed:
\begin{equation*}
(y_i,\mathbf{x}_i),\ i=1,\dots, n;
\end{equation*}
where $\mathbf{x}_i$ is a vector containing the values of the $p$ covariates related to the $i$-th unit. These vectors are stored as rows of the usual design matrix $\mathbf{X}\in\mathbb{R}^{n\times p}$. Besides, the vector of the logarithmic transformation of the response variable is $\mathbf{w}=\log(\mathbf{y})$. Finally, the following distributional assumption is made:
\begin{equation}\label{eq:ass_reg}
y_i|\mathbf{x}_i,\boldsymbol{\beta},\sigma^2\sim \log\mathcal{N}\left(\mathbf{x}_i^T\boldsymbol{\beta},\sigma^2\right),\ i=1,\dots, n,
\end{equation}
where $\boldsymbol{\beta}=(\beta_0,...,\beta_{p-1})$ is the vector of coefficients. To complete the inferential setting, an improper flat prior is assumed for the regression coefficients and a generalized inverse Gaussian (GIG) prior is fixed for the variance in the log scale $\sigma^2$:
\begin{align}
&\boldsymbol{\beta}\propto 1,\\
&\sigma^2\sim GIG(\lambda, \delta, \gamma)\label{eq:priors_model_GIG};
\end{align}
where $\lambda\in \mathbb{R}$, $\delta\in \mathbb{R}^+$ and $\gamma\in \mathbb{R}^+$ are the hyperparameters to specify.

The inferential questions that will be answered involve two basic functionals of the log-normal theory:

* the conditional mean at a given point $\tilde{\mathbf{x}}\in\mathbb{R}^{p}$ of the covariate space:
\begin{equation}
\theta_m(\tilde{\mathbf{x}})=\mathbb{E}\left[\tilde{y}|\tilde{\mathbf{x}}\right]=\exp\left\{\tilde{\mathbf{x}}^T\boldsymbol{\beta}+\frac{\sigma^2}{2} \right\};
\end{equation}
the function `LN_MeanReg()` allows the user to make inference on this quantity;

* the $p$-th quantile at a given point $\tilde{\mathbf{x}}\in\mathbb{R}^{p}$ of the covariate space:
\begin{equation}
\theta_p(\tilde{\mathbf{x}})=\mathbb{Q}_p\left[\tilde{y}|\tilde{\mathbf{x}}\right]=\exp\left\{\tilde{\mathbf{x}}^T\boldsymbol{\beta}+\Phi^{-1}(p)\sigma \right\},
\end{equation}
and the function `LN_QuantReg()` can be used to obtain posterior summaries for this quantity.

It is possible to prove that the posterior moments of these functionals are finite up to order $r$ if the following conditions on the tail parameter $\gamma$ of the GIG prior hold:

* $\mathbb{E}[\theta_m(\tilde{\mathbf{x}})^r|\mathbf{y}]<\infty$ if $\gamma>r+r^2\tilde{\mathbf{x}}^T(\mathbf{X}^T\mathbf{X})^{-1}\tilde{\mathbf{x}}$;
* $\mathbb{E}[\theta_p(\tilde{\mathbf{x}})^r|\mathbf{y}]<\infty$ if $\gamma>r^2\tilde{\mathbf{x}}^T(\mathbf{X}^T\mathbf{X})^{-1}\tilde{\mathbf{x}}$.

In the software implementation of these methodologies, the conditions on the parameter $\gamma$ are evaluated with $r=3$ to set the hyperparameter value, in order to assure the stable existence of the posterior variance (a numerical illustration is given at the end of this subsection). It is useful to remark that, in the case of unconditional estimation, the previous target quantities and the related conditions reduce to the following ones:

* The unconditional mean is $\theta_m=\exp\{\beta_0+\frac{\sigma^2}{2}\}$ and the moments are defined up to order $r$ if $\gamma>r+\frac{r^2}{n}$.
The function `LN_Mean()` can be used for this particular case.

* The unconditional quantile is $\theta_p=\exp\{\beta_0+\Phi^{-1}(p)\sigma\}$, the moments are defined up to order $r$ if $\gamma>\frac{r^2}{n}$, and the function `LN_Quant()` can be used.

The last aspect to determine is the hyperparameter specification. For all the `R` functions related to these quantities, two different strategies are proposed and can be selected through the `method` argument:

* If a weakly informative prior for the variance is desired, the (default) `"weak_inf"` option can be chosen. It has been proved that, in this way, credibility intervals with good frequentist properties are obtained [@fabrizi2012bayes].
* If point estimation is desired, optimal-MSE procedures are implemented too and can be selected using the `"optimal"` option. For details of the settings related to the mean estimation process see @fabrizi2012bayes and @fabrizi2016bayesian. For quantiles, a numerical procedure is called.
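As a numerical illustration of the existence conditions above (a simple sketch, not a call to any package routine), the lower bounds on the tail parameter $\gamma$ can be computed directly; here $r=3$, the value adopted in the implementation, and $n=8$ as in the `EPA09` dataset analysed below.

```{r}
r <- 3; n <- 8
c(mean_bound = r + r^2 / n,   # gamma must exceed this for the unconditional mean
  quantile_bound = r^2 / n)   # gamma must exceed this for unconditional quantiles
```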
### Conditional means estimation under linear mixed models

In this case we consider a vector of responses $\mathbf{y}\in\mathbb{R}^n$, and the assumption of log-normality for the response means analysing the log-transformed vector $\mathbf{w}=\log \mathbf{y}$ as normally distributed. The classical formulation of the model is:
\begin{equation}
\mathbf{w}= \mathbf{X}\boldsymbol{\beta}+\mathbf{Zu}+\boldsymbol{\varepsilon}.
\end{equation}
The coefficients of the fixed effects are in the vector $\boldsymbol{\beta}\in\mathbb{R}^p$, whereas $\mathbf{u}\in\mathbb{R}^m$ is the vector of random effects and $\boldsymbol{\varepsilon}\in\mathbb{R}^{n}$ is the vector of residuals. The design matrices are $\mathbf{X}\in\mathbb{R}^{n\times p}$, which is assumed to be of full rank in order to guarantee the existence of $(\mathbf{X}^T\mathbf{X})^{-1}$, and $\mathbf{Z}\in\mathbb{R}^{n\times m}$. The following Bayesian hierarchical model is studied:
\begin{equation}\label{eq:mod_mix}
\begin{aligned}
&\mathbf{w}|\mathbf{u}, \boldsymbol{\beta}, \sigma^2\sim \mathcal{N}_n\left(\mathbf{X}\boldsymbol{\beta}+\mathbf{Zu}, \mathbf{I}_n\sigma^2 \right);\\
&\mathbf{u}|\tau^2_1,...,\tau^2_q\sim\mathcal{N}_m\left(\mathbf{0}, \mathbf{D}\right),\ \mathbf{D}=\oplus^q_{s=1}\mathbf{I}_{m_s}\tau_s^2;\\
&(\boldsymbol{\beta},\sigma^2)\sim p(\boldsymbol{\beta},\sigma^2);\\
&\boldsymbol{\tau}^2\sim p(\tau_1^2,...,\tau_q^2).
\end{aligned}
\end{equation}
Since $q$ random factors are considered, $q$ different variances related to the random components, $\boldsymbol{\tau}^2=(\tau^2_1,...,\tau^2_q)$, are included in the model. Therefore, it is possible to split the vector of random effects as $\mathbf{u}=[\mathbf{u}_1^T,...,\mathbf{u}_s^T,...,\mathbf{u}_q^T]^T$, where $\mathbf{u}_s\in\mathbb{R}^{m_s}$ with $\sum_{s=1}^q m_s=m$. The design matrix of the random effects may be partitioned too: $\mathbf{Z}=[\mathbf{Z}_1\cdots \mathbf{Z}_s\cdots\mathbf{Z}_q]$.

The function `LN_hierarchical()` allows the user to make inference on the desired log-normal linear mixed model by sampling from the posterior distributions through a Gibbs sampler. The model equation needs to be given to the `formula_lme` argument using the same syntax as the `lmer()` function of the `lme4` package [@lme]. In practice, the interpretable outputs are usually provided in the original data scale, back-transforming the results obtained by estimating the previous model.

Exploiting the properties of the log-normal distribution, the following quantities can be of interest:

* the conditional expectation of the observation $\tilde{y}$ given the random effects and the covariate patterns $\tilde{\mathbf{x}},\ \tilde{\mathbf{z}}$ (a quantity that could also be labelled the subject-specific expectation). It is defined as:
\begin{equation}
\theta_c(\tilde{\mathbf{x}},\tilde{\mathbf{z}})=\mathbb{E}\left[\tilde{y}|\mathbf{u},\tilde{\mathbf{x}},\tilde{\mathbf{z}}\right]=\exp\left\{\tilde{\mathbf{x}}^T\boldsymbol{\beta}+\tilde{\mathbf{z}}^T\mathbf{u}+\frac{\sigma^2}{2} \right\};
\end{equation}
* if the random effects are integrated out, the conditional expectation of interest is:
\begin{equation}\label{eq:avg_marg}
\theta_m(\tilde{\mathbf{x}})=\mathbb{E}\left[\tilde{y}|\tilde{\mathbf{x}}\right]=\exp\left\{\tilde{\mathbf{x}}^T\boldsymbol{\beta}+\frac{1}{2}\left(\sigma^2+\sum_{s=1}^q \tau_s^2\right) \right\};
\end{equation}
* the posterior predictive distribution $p(\tilde{y}|\mathbf{y})$ and its posterior moments are a further quantity that might be investigated.

The argument `functional` of the `LN_hierarchical()` function lets the user specify the functionals for which the posterior distribution is of interest: the posterior of $\theta_c(\tilde{\mathbf{x}},\tilde{\mathbf{z}})$ is obtained by specifying `"Subject"`, that of $\theta_m(\tilde{\mathbf{x}})$ with `"Marginal"`, and the posterior predictive distribution with `"PostPredictive"`. Moreover, the argument `data_pred` allows the user to provide a data frame containing the desired covariate points for which the target quantities need to be computed.

As in the previous section, independent GIG priors are adopted for the variance components:
\begin{equation}
\sigma^2\sim GIG(\lambda_\sigma,\delta_\sigma,\gamma_\sigma);\ \ \tau_s^2\sim GIG(\lambda_{\tau,s},\delta_{\tau,s} ,\gamma_{\tau,s}),\ \forall s.
\end{equation}
Moreover, it is possible to prove that the tail parameter $\gamma$ is involved again in the existence conditions for the posterior moments of the target quantities defined above. In particular:

* $\mathbb{E}\left[\theta_c^r(\tilde{\mathbf{x}},\tilde{\mathbf{z}})|\mathbf{w}\right]$ exists if $\gamma_{\sigma}^2>r+r^2\tilde{\mathbf{x}}^T\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\tilde{\mathbf{x}}$;
* $\mathbb{E}\left[\theta_m^r(\tilde{\mathbf{x}})|\mathbf{w}\right]$ exists if $\gamma_{\sigma}^2>r+r^2\tilde{\mathbf{x}}^T\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\tilde{\mathbf{x}}$ and $\gamma^2_{\tau,s}>r+r^2\tilde{\mathbf{x}}_{o}^T\mathbf{L}_s\tilde{\mathbf{x}}_{o},\ \forall s$;
* $\mathbb{E}\left[\tilde{y}^r|\mathbf{y}\right]$ exists if $\gamma_{\sigma}^2>r^2+r^2\tilde{\mathbf{x}}^T\left(\mathbf{X}^T\mathbf{X}\right)^{-1}\tilde{\mathbf{x}}$.

Whereas the first and the last conditions are analogous to the ones stated in the previous section and involve only the tail parameter of the prior for $\sigma^2$, the existence condition for the posterior moments of $\theta_m(\tilde{\mathbf{x}})$ requires a constraint on $\gamma_{\tau,s}$ too. This expression is a function of the matrix $\mathbf{L}_s\in\mathbb{R}^{p\times p}$: its entries are all 0s, with the exception of the first $l \times l$ square block $\mathbf{L}_{s;1,1}$, where $l=p-\text{rank}\{ \mathbf{X}^T\left(\mathbf{I}-\mathbf{P_Z} \right)\mathbf{X}\}$ coincides with the number of variables of $\mathbf{X}$ that are included in $\mathbf{Z}$ too.
Furthermore, to simplify the final form of the result, it is useful (without loss of generality) to place the columns related to these variables as the first $l$ columns of the *ordered design matrix* $\mathbf{X}_o$. As a consequence, the matrix $\mathbf{L}_{s;1,1}$ coincides with the inverse of the upper-left $l \times l$ block on the diagonal of the matrix $\mathbf{X}_o^T\left(\mathbf{Z}(\mathbf{Z}^T\mathbf{Z})^{-}\mathbf{C}_s (\mathbf{Z}^T\mathbf{Z})^{-}\mathbf{Z}^T\right)\mathbf{X}_o$, where $\mathbf{C}_s$ is the null matrix with the exception of $\mathbf{I}_{m_s}$ as the block on the diagonal corresponding to the $s$-th variance component of the random effects. To complete the notation, $\tilde{\mathbf{x}}_{o}$ is the covariate pattern of the observation to estimate, ordered coherently with respect to $\mathbf{X}_o$.

Because of the non-intuitive expressions of the existence conditions, the function `LN_hier_existence()` is implemented to compute them. This routine is called by the function `LN_hierarchical()` to fix the values of the hyperparameters in order to fulfil the most restrictive existence condition for the functionals of interest, if the default priors are desired. To specify different priors, the arguments `par_tau` and `par_sigma` can be used. Otherwise, if the proposed prior specification is adopted, the key concepts of the strategy can be summarized as follows:

* the hyperparameters of all the priors are the same, to preserve the prior balance among the different variance components;
* the tail parameter $\gamma$ is set by evaluating the most restrictive condition with $r$ replaced by the specified `order_moment` (default 2) plus 1;
* to obtain uniform marginal priors for the intraclass correlation coefficients, $\lambda=1$ and $\delta=\varepsilon=0.01$ are fixed.

## Real data applications

To show how the functions of the package work and to briefly illustrate the produced outputs, some real data applications are presented in this section.

### Unconditional estimation

In environmental monitoring, it is common to deal with small datasets containing observations of pollutant concentrations for which the log-normality assumption appears to be appropriate. In these applications, it is important to provide both point estimates and interval estimates, which constitute the so-called confidence limits. A popular example included in @USEPA09 is considered: it consists of a small sample ($n=8$) of chrysene concentrations (ppb) obtained from two background wells. The vector of observations is already included in the package and is named `EPA09`.

First, the mean estimation problem is faced and the function `LN_Mean()` is used. If a point estimate is desired, the advice is to use the `"optimal"` prior setting. Since the observations are not already log-transformed, the argument `x_transf` is set to `FALSE`.

```{r}
# Load dataset
data("EPA09")
# Bayes estimator under relative quadratic loss and optimal prior setting
LN_Mean(x = EPA09, x_transf = FALSE, method = "optimal", CI = FALSE)
```

The output reports the prior parameters for the variance $\sigma^2\sim GIG(\lambda, \delta, \gamma)$ and the 5 parameters that characterize the posterior distribution of the log-normal mean $\theta_m$, i.e. a generalized hyperbolic distribution (see @fabrizi2012bayes for more methodological details). Then, the basic summaries of the posterior distributions of the log-normal parameters are reported (`xi` is the log-scale mean and `sigma2` the log-scale variance).
Finally, the posterior mean $\mathbb{E}[\theta_m|\mathbf{y}]$ and the posterior standard deviation of the target quantity are reported (note that these values are obtained in closed form, without MC simulations).

On the other hand, if an interval estimate is required, it is advisable to use the weakly informative (`"weak_inf"`) prior setting, to specify the desired credibility level `alpha_CI`, and to select the type of interval: it is possible to obtain as output the usual two-sided interval (`"two-sided"`), the lower credible limit (`"LCL"`) or the upper credible limit (`"UCL"`). The last two kinds of interval are often required in environmental problems to estimate legal limits for pollutants. For example, the $95\%$ UCL can be estimated as follows.

```{r}
LN_Mean(x = EPA09, x_transf = FALSE, method = "weak_inf", alpha_CI = 0.05,
        type_CI = "UCL")
```

The interval is added to the previous output, noting that the posterior quantiles required to produce the interval are obtained by simulation.

The same procedure can also be applied if the interest is in estimating a quantile $\theta_p$ under the log-normality assumption. For example, if the target is the quantile $p=0.95$, an optimal point estimate can be obtained with the following command.

```{r}
LN_Quant(x = EPA09, x_transf = FALSE, quant = 0.95, method = "optimal",
         CI = FALSE)
```

The output is similar to the one printed for the mean $\theta_m$; in this case the posterior mean and standard deviation of the desired quantile $\theta_p$ are reported. To compute an interval estimate, the function can be used as shown for `LN_Mean()`.

### Log-normal regression

The presented methods can be useful in predicting conditional means under a log-normal linear model. The function `LN_MeanReg()` receives as input the vector `y` containing the observations of the response variable and the design matrix `X`. A matrix `Xtilde`, containing the covariate patterns for which a prediction is required, must be provided too. As in the unconditional estimation problem, it is possible to specify both an optimal prior setting and a weakly informative one. As an illustrative example, the same data used in @fabrizi2016bayesian are considered, loading the `fatigue` dataset. Results for the weakly informative setting are reported.

```{r}
# Load data
data("fatigue")

# Design matrices
Xtot <- cbind(1, log(fatigue$stress), log(fatigue$stress)^2)
X <- Xtot[-c(1, 13, 22), ]
y <- fatigue$cycle[-c(1, 13, 22)]
Xtilde <- Xtot[c(1, 13, 22), ] # units to predict

# Estimation
LN_MeanReg(y = y, X = X, Xtilde = Xtilde, method = "weak_inf", y_transf = FALSE)
```

For each of the points for which a prediction is required, the summaries of the posterior distributions are reported: `$Sigma2` represents the variance in the log scale, whereas `$Coefficients` reports the summaries of the vector of coefficients $\boldsymbol{\beta}$.

### Random coefficient model

As a last example, the estimation of a log-normal linear mixed model is presented. The analysed dataset is due to @gibson2013processing and consists of a two-condition repeated-measures collection of observations of the time (in milliseconds) required to read the head noun of a Chinese clause. The following model is specified:
\begin{equation}
w_{ijk}=\log(y_{ijk})=\beta_0+\beta_1 x_{i}+u_j+v_k+\varepsilon_{ijk},
\end{equation}
where $y_{ijk}$ is the reading time observed for subject $j=1,...,37$, reading item $k=1,...,15$ and condition $i=1,2$.
Moreover, $x_i=-1$ is fixed for the subject-relative condition and $x_i=1$ for the object-relative condition. The goal of the analysis is to predict the expectation conditioned on $x_i$ and marginalized with respect to both the random effects:
\begin{equation}
\theta_m(x_i=\pm 1)=\exp\left\{\beta_0\pm\beta_1+\frac{\tau^2_u+\tau^2_v+\sigma^2}{2} \right\}.
\end{equation}
Moreover, the expectation specific to a particular subject and item might be of interest too:
\begin{equation}
\theta_c(x_i,u_j,v_k)=\exp\left\{\beta_0+x_i\beta_1+u_j+v_k+\frac{\sigma^2}{2} \right\}.
\end{equation}
As an example, the predictions of these quantities for both values of the covariate $x_i$, for subject $12$ and item $8$, are desired. A new `data.frame` containing the desired covariate patterns must be created.

```{r}
# Load the dataset included in the package
data("ReadingTime")

# Define data.frame containing the covariate patterns to investigate
data_pred_new <- expand.grid(so = c(-1, 1), subj = factor(12), item = factor(8))

# Model estimation
Mod_est_RT <- LN_hierarchical(formula_lme = log_rt ~ so + (1|subj) + (1|item),
                              data_lme = ReadingTime,
                              data_pred = data_pred_new,
                              functional = c("Marginal", "Subject"),
                              nsamp = 25000, burnin = 5000, n_thin = 5)
```

As hinted before, the same priors are specified for all the variance components, choosing the most restrictive value for the parameter $\gamma$ (i.e. the highest one). To check the values, the element `$par_prior` of the output can be printed.

```{r}
# Prior parameters
Mod_est_RT$par_prior
```

The `$samples` element is an object of class `mcmc` containing the samples drawn from the posterior distributions of the model parameters and of the target functionals. The usual tools provided by the `coda` package [@coda] can be used to explore them. For example, the convergence of the chains can be checked by plotting the traceplots.

```{r, fig.width = 6.5}
# coda package
library(coda)

# Traceplots of the model parameters
oldpar <- par(mfrow = c(2, 3))
traceplot(Mod_est_RT$samples$par[, 1:6])
par(oldpar)
```

Finally, the `$summaries` element contains the summary statistics of the posterior distributions of the parameters and of the functionals required. It is possible to isolate the outputs related to the latter as follows.

```{r}
# Posterior summaries
Mod_est_RT$summaries$marg
Mod_est_RT$summaries$subj
```

## References
File: BayesLN/vignettes/BayesLogNormal.Rmd
# (C) Nicholas Polson, James Scott, Jesse Windle, 2012-2019

# This file is part of BayesLogit.

# BayesLogit is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.

# BayesLogit is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU General Public License for more details.

# You should have received a copy of the GNU General Public License along with
# BayesLogit. If not, see <https://www.gnu.org/licenses/>.

################################################################################
                           ## POSTERIOR BY GIBBS ##
################################################################################

## Bayesian logistic regression
##------------------------------------------------------------------------------
logit.R <- function(y, X, n=rep(1, length(y)),
                    m0=rep(0, ncol(X)), P0=matrix(0, nrow=ncol(X), ncol=ncol(X)),
                    samp=1000, burn=500, verbose=500)
{
  ## X: n by p matrix
  ## y: n by 1 vector, avg response
  ## n: n by 1 vector, # of obs at distinct x

  ## Combine data.
  ## new.data = logit.combine(y, X, n);
  ## y = new.data$y;
  ## X = new.data$X;
  ## n = new.data$n;
  ## n.prior = 0.0;

  X = as.matrix(X);
  y = as.numeric(y)

  p = ncol(X)
  N = nrow(X)

  alpha = (y-1/2)*n
  Z = colSums(X*alpha) + P0 %*% m0;
  ## PsiToBeta = solve(t(X) %*% X) %*% t(X);

  w = rep(0,N)
  ## w = w.known;
  beta = rep(0.0, p)

  output <- list(w    = matrix(nrow=samp, ncol=N),
                 beta = matrix(nrow=samp, ncol=p))

  ## c_k = (1:200-1/2)^2 * pi^2 * 4;

  ## Timing
  start.time = proc.time()

  ## Sample
  for ( j in 1:(samp+burn) )
  {
    if (j==burn+1) start.ess = proc.time();

    ## draw w
    psi = drop(X%*%beta)
    ## Sum of gamma: poor approximation when psi is large!  Causes crash.
    ## w = rpg.gamma(N, n, psi)
    ## Devroye is faster anyway.
    w = rpg.devroye(N, n, psi);

    ## draw beta - Joint Sample.
    PP = t(X) %*% (X * w) + P0;
    ## U = chol(PP);
    ## m = backsolve(U, Z, transpose=TRUE);
    ## m = backsolve(U, m);
    ## beta = m + backsolve(U, rnorm(p))
    S = chol2inv(chol(PP));
    m = S %*% as.vector(Z);
    beta = m + t(chol(S)) %*% rnorm(p);

    # Record if we are past burn-in.
    if (j>burn) {
      output$w[j-burn,]    <- w
      output$beta[j-burn,] <- beta
    }

    if (j %% verbose == 0) { print(paste("LogitPG: Iteration", j)); }
  }

  end.time = proc.time()
  output$total.time = end.time - start.time
  output$ess.time   = end.time - start.ess

  ## Add new data to output.
  output$"y" = y;
  output$"X" = X;
  output$"n" = n;

  output
} ## logit.gibbs.R

## Bayesian logistic regression - Normal Prior
##------------------------------------------------------------------------------
## I include this for a fair comparison with LogitFSF.
draw.beta <- function(z, X, w, b.0=NULL, B.0=NULL, P.0=NULL)
{
  ## z: N x 1 outcomes.
  ## X: N x P design matrix.
  ## b.0: prior mean for beta
  ## B.0: prior variance for beta
  ## P.0: prior precision for beta.
  ## FS-F use b to denote means and B to denote variances.

  N = nrow(X);
  P = ncol(X);

  if (is.null(b.0)) b.0 = rep(0.0, P);
  if (is.null(P.0)) P.0 = matrix(0.0, P, P);
  if (!is.null(B.0)) P.0 = solve(B.0);

  P.N = t(X) %*% (X * w) + P.0;
  ## S = solve(PC); ## chol2inv works better for larger P?
  S = chol2inv(chol(P.N));
  m = S %*% (as.vector(z) + P.0 %*% b.0);
  beta = m + t(chol(S)) %*% rnorm(P);
} ## draw.beta

logit.gibbs.np.R <- function(y, X, n=rep(1, length(y)),
                             b.0=NULL, B.0=NULL, P.0=NULL,
                             samp=1000, burn=500, verbose=500)
{
  ## X: n by p matrix
  ## y: n by 1 vector, avg response
  ## n: n by 1 vector, # of obs at distinct x

  ## DO NOT USE DEFAULT PRIOR
  y.prior=0.5;
  x.prior=colMeans(as.matrix(X));
  n.prior=0.0;

  ## Combine data.
  ## new.data = logit.combine(y, X, n, y.prior, x.prior, n.prior);
  ## y = new.data$y;
  ## X = new.data$X;
  ## n = new.data$n;
  ## n.prior = 0.0;

  ## Don't combine.
  X = as.matrix(X);
  y = as.matrix(y);
  ## X = as.matrix(X);

  p = ncol(X)
  N = nrow(X)

  ## Default prior parameters.
  if (is.null(b.0)) b.0 = rep(0.0, p);
  if (is.null(P.0)) P.0 = matrix(0.0, p, p);
  if (!is.null(B.0)) P.0 = solve(B.0);

  ## Preprocess.
  alpha = drop((y-1/2)*n)
  Z = colSums(X*alpha)

  w = rep(0,N)
  ## w = w.known;
  beta = rep(0.0, p)

  out <- list(w    = matrix(nrow=samp, ncol=N),
              beta = matrix(nrow=samp, ncol=p))

  start.time = proc.time()

  ## Sample
  for ( j in 1:(samp+burn) )
  {
    if (j==burn+1) start.ess = proc.time();

    ## draw w
    psi = drop(X%*%beta)
    w = rpg.devroye(N, n, psi);

    ## # draw beta - Joint Sample.
    ## PC = t(X) %*% (X * w) + P.0;
    ## ## S = solve(PC); ## chol2inv works better for larger P?
    ## S = chol2inv(chol(PC));
    ## m = S %*% as.vector(Z);
    ## beta = m + t(chol(S)) %*% rnorm(p);
    beta = draw.beta(Z, X, w, b.0=b.0, P.0=P.0);

    # Record if we are past burn-in.
    if (j>burn) {
      out$w[j-burn,]    <- w
      out$beta[j-burn,] <- beta
    }

    if (j %% verbose == 0) { print(paste("Iteration", j)); }
  }

  end.time = proc.time()
  out$total.time = end.time - start.time
  out$ess.time   = end.time - start.ess

  ## ## Add new data to output.
  ## output$"y" = y;
  ## output$"X" = X;
  ## output$"n" = n;

  out
} ## logit.gibbs.np.R

################################################################################
                                ## TESTING ##
################################################################################

#data = read.table("orings.dat",header=TRUE)
#attach(data)
#failure = 2*failure-1

## x = c(53,56,57,63,66,67,68,69, 70,72,73, 75,76,78,79,80,81)
## y = c( 1, 1, 1, 0, 0, 0, 0, 0,3/4, 0, 0,1/2, 0, 0, 0, 0, 0)
## n = c( 1, 1, 1, 1, 1, 3, 1, 1, 4, 1, 1, 2, 2, 1, 1, 1, 1)

## ans = logit.MCMC(100000,cbind(1,x),y,n)
## hist(ans$beta[,1])
## hist(ans$beta[,2])
## mean(ans$beta[,1])
## mean(ans$beta[,2])
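## A minimal usage sketch (added for illustration; not part of the original
## file).  It assumes the compiled BayesLogit package is loaded, so that
## rpg.devroye() is available, and is wrapped in `if (FALSE)` in the same
## spirit as the testing block above.
if (FALSE) {
  library(BayesLogit)
  set.seed(1)
  N = 200
  X = cbind(1, rnorm(N))
  beta.true = c(-0.5, 1.0)
  y = rbinom(N, 1, plogis(X %*% beta.true))
  out = logit.R(y, X, samp=1000, burn=500)
  colMeans(out$beta)  # posterior means should be near beta.true
}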
File: BayesLogit/R/LogitPG.R
# (C) Nicholas Polson, James Scott, Jesse Windle, 2012-2019

# This file is part of BayesLogit.

# BayesLogit is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.

# BayesLogit is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU General Public License for more details.

# You should have received a copy of the GNU General Public License along with
# BayesLogit. If not, see <https://www.gnu.org/licenses/>.

################################################################################
                              ## POLYAGAMMA ##
################################################################################

## Draw PG(h, z)
##------------------------------------------------------------------------------
rpg.gamma <- function(num=1, h=1, z=0.0, trunc=200)
{
  ## Check Parameters.
  if (sum(h<0)!=0) {
    stop("rpg.gamma: h must be greater than zero.");
  }
  if (trunc < 1) {
    stop("rpg.gamma: xtrunc must be > 0.");
  }

  x = rep(0, num);

  if (length(h) != num) { h = array(h, num); }
  if (length(z) != num) { z = array(z, num); }

  OUT = .C("rpg_gamma", x, h, z, as.integer(num), as.integer(trunc),
           PACKAGE="BayesLogit");

  OUT[[1]]
}

## Draw PG(n, z) where n is a natural number.
##------------------------------------------------------------------------------
rpg.devroye <- function(num=1, h=1, z=0.0)
{
  n = h

  ## Check Parameters.
  if (any(n<0)) {
    stop("rpg.devroye: h must be greater than zero.");
  }

  x = rep(0, num);

  if (length(n) != num) { n = array(n, num); }
  if (length(z) != num) { z = array(z, num); }

  OUT = .C("rpg_devroye", x, as.integer(n), z, as.integer(num),
           PACKAGE="BayesLogit");

  OUT[[1]]
}

## Draw PG(h, z) where h is \geq 1.
##------------------------------------------------------------------------------
rpg.alt <- function(num=1, h=1, z=0.0)
{
  ## Check Parameters.
  if (any(h<1)) {
    stop("rpg.alt: h must be >= 1.");
  }

  x = rep(0, num);

  if (length(h) != num) { h = array(h, num); }
  if (length(z) != num) { z = array(z, num); }

  OUT = .C("rpg_alt", x, h, z, as.integer(num),
           PACKAGE="BayesLogit");

  OUT[[1]]
}

## Draw PG(h, z) using SP approx where h is \geq 1.
##------------------------------------------------------------------------------
rpg.sp <- function(num=1, h=1, z=0.0)
{
  ## Check Parameters.
  if (any(h<1)) {
    stop("rpg.sp: h must be >= 1.");
  }

  x = rep(0, num);
  iter = rep(0, num);

  if (length(h) != num) { h = array(h, num); }
  if (length(z) != num) { z = array(z, num); }

  ## Faster if we do not track iter.
  OUT = .C("rpg_sp", x, h, z, as.integer(num), as.integer(iter),
           PACKAGE="BayesLogit");

  ## ## Tracking total iterations
  ## out = list()
  ## if (!track.iter)
  ##   out = OUT[[1]]
  ## else
  ##   out = list(samp=OUT[[1]], iter=OUT[[5]])
  ## out

  OUT[[1]]
}

## Draw PG(n, Z)
##------------------------------------------------------------------------------
rpg <- function(num=1, h=1, z=0.0)
{
  ## Check Parameters.
  if (any(h<=0)) {
    stop("rpg: h must be > 0.");
  }

  x = rep(0, num);

  if (length(h) != num) { h = array(h, num); }
  if (length(z) != num) { z = array(z, num); }

  ## Faster if we do not track iter.
  OUT = .C("rpg_hybrid", x, h, z, as.integer(num),
           PACKAGE="BayesLogit");

  OUT[[1]]
}

## OLD OLD OLD OLD OLD ---->

## ################################################################################
##                                  ## Utility ##
## ################################################################################

## ## Check parameters to prevent an obvious error.
## ##------------------------------------------------------------------------------
## check.parameters <- function(y, n, m0, P0, R.X, C.X, samp, burn)
## {
##   ok = rep(TRUE, 8);
##   ok[1] = all(y >= 0);
##   ok[2] = all(n > 0);
##   ok[3] = C.X==nrow(P0);
##   ok[4] = C.X==ncol(P0);
##   ok[5] = (length(y) == length(n) && length(y) == R.X);
##   ok[6] = C.X==length(m0);
##   ok[7] = (samp > 0);
##   ok[8] = (burn >=0);
##   ok[9] = all(y <= 1);

##   if (!ok[1]) print("y must be >= 0.");
##   if (!ok[9]) print("y is a proportion; it must be <= 1.");
##   if (!ok[2]) print("n must be > 0.");
##   if (!ok[3]) print(paste("col(X) != row(P0)", C.X, nrow(P0)));
##   if (!ok[4]) print(paste("col(X) != col(P0)", C.X, ncol(P0)));
##   if (!ok[5]) print(paste("Dimensions do not conform for y, X, and n.",
##                           "len(y) =", length(y),
##                           "dim(x) =", R.X, C.X,
##                           "len(n) =", length(n)));
##   if (!ok[6]) print(paste("col(X) != length(m0)", C.X, length(m0)));
##   if (!ok[7]) print("samp must be > 0.");
##   if (!ok[8]) print("burn must be >=0.");

##   ok = all(ok)
## }

## ## Combine
## ##------------------------------------------------------------------------------
## logit.combine <- function(y, X, n=rep(1,length(y)))
## {
##   X = as.matrix(X);

##   N = dim(X)[1];
##   P = dim(X)[2];

##   m0 = matrix(0, nrow=P);
##   P0 = matrix(0, nrow=P, ncol=P);
##   ok = check.parameters(y, n, m0, P0, N, P, 1, 0);
##   if (!ok) return(-1);

##   ## Our combine_data function, written in C, uses t(X).
##   tX = t(X);

##   OUT = .C("combine",
##            as.double(y), as.double(tX), as.double(n),
##            as.integer(N), as.integer(P),
##            PACKAGE="BayesLogit");

##   N = OUT[[4]];

##   y  = array(as.numeric(OUT[[1]]), dim=c(N));
##   tX = array(as.numeric(OUT[[2]]), dim=c(P, N));
##   n  = array(as.numeric(OUT[[3]]), dim=c(N));

##   list("y"=as.numeric(y), "X"=t(tX), "n"=as.numeric(n));
## }

## ################################################################################
##                            ## POSTERIOR INFERENCE ##
## ################################################################################

## ## Posterior by Gibbs
## ##------------------------------------------------------------------------------
## logit <- function(y, X, n=rep(1,length(y)),
##                   m0=rep(0, ncol(X)), P0=matrix(0, nrow=ncol(X), ncol=ncol(X)),
##                   samp=1000, burn=500)
## {
##   ## In the event X is one dimensional.
##   X = as.matrix(X);

##   ## Combine data.  We do this so that the auxiliary variable matches the
##   ## data.
##   new.data = logit.combine(y, X, n);
##   y = new.data$y;
##   X = new.data$X;
##   n = new.data$n;

##   ## Check that the data and priors are okay.
##   N = dim(X)[1];
##   P = dim(X)[2];
##   ok = check.parameters(y, n, m0, P0, N, P, samp, burn);
##   if (!ok) return(-1)

##   ## Initialize output.
##   output = list();

##   ## w    = array(known.w, dim=c(N, samp));
##   ## beta = array(known.beta, dim=c(P , samp));
##   w    = array(0.0, dim=c(N, samp));
##   beta = array(0.0, dim=c(P , samp));

##   ## Our Logit function, written in C, uses t(X).
##   tX = t(X);

##   OUT <- .C("gibbs",
##             w, beta,
##             as.double(y), as.double(tX), as.double(n),
##             as.double(m0), as.double(P0),
##             as.integer(N), as.integer(P),
##             as.integer(samp), as.integer(burn),
##             PACKAGE="BayesLogit");

##   N = OUT[[8]];

##   tempw = array( as.numeric(OUT[[1]]), dim=c(N, samp) );
##   output = list("w"=t(tempw), "beta"=t(OUT[[2]]), "y"=y, "X"=X, "n"=n);

##   output
## }

## ## Posterior mode by EM
## ##------------------------------------------------------------------------------
## logit.EM <- function(y, X, n=rep(1, length(y)),
##                      tol=1e-9, max.iter=100)
## {
##   ## In the event X is one dimensional.
##   X = as.matrix(X);

##   ## Combine data.  May speed things up.
##   new.data = logit.combine(y, X, n);
##   y = new.data$y;
##   X = new.data$X;
##   n = new.data$n;

##   ## Check that the data and priors are okay.
##   N = dim(X)[1];
##   P = dim(X)[2];
##   m0 = matrix(0, nrow=P);
##   P0 = matrix(0, nrow=P, ncol=P);
##   ok = check.parameters(y, n, m0, P0, N, P, 1, 0);
##   if (!ok) return(-1);

##   ## Initialize output.
##   beta = array(0, P);

##   ## Our Logit function, written in C, uses t(X).
##   tX = t(X);

##   OUT = .C("EM",
##            beta,
##            as.double(y), as.double(tX), as.double(n),
##            as.integer(N), as.integer(P),
##            as.double(tol), as.integer(max.iter),
##            PACKAGE="BayesLogit");

##   list("beta"=OUT[[1]], "iter"=OUT[[8]]);
## }

## ################################################################################
##                             ## Multinomial Case ##
## ################################################################################

## ## Check parameters to prevent an obvious error.
## ##------------------------------------------------------------------------------
## mult.check.parameters <- function(y, X, n, m.0, P.0, samp, burn)
## {
##   ok = rep(TRUE, 6);
##   ok[1] = all(y >= 0);
##   ok[2] = all(n > 0);
##   ok[3] = (nrow(y) == length(n) && nrow(y) == nrow(X));
##   ok[4] = (samp > 0);
##   ok[5] = (burn >=0);
##   ok[6] = all(rowSums(y) <= 1);
##   ok[7] = (ncol(y)==ncol(m.0) && ncol(X)==nrow(m.0));
##   ok[8] = (ncol(X)==dim(P.0)[1] && ncol(X)==dim(P.0)[2] && ncol(y)==dim(P.0)[3]);

##   if (!ok[1]) print("y must be >= 0.");
##   if (!ok[6]) print("y[i,] are proportions and must sum <= 1.");
##   if (!ok[2]) print("n must be > 0.");
##   if (!ok[3]) print(paste("Dimensions do not conform for y, X, and n.",
##                           "dim(y) =", nrow(y), ncol(y),
##                           "dim(x) =", nrow(X), ncol(X),
##                           "len(n) =", length(n)));
##   if (!ok[4]) print("samp must be > 0.");
##   if (!ok[5]) print("burn must be >=0.");
##   if (!ok[7]) print("m.0 does not conform.");
##   if (!ok[8]) print("P.0 does not conform.");

##   ok = all(ok)
## }

## ## Combine for multinomial logit.
## ##------------------------------------------------------------------------------
## mlogit.combine <- function(y, X, n=rep(1,nrow(as.matrix(y))))
## {
##   X = as.matrix(X);
##   y = as.matrix(y);

##   N = dim(X)[1];
##   P = dim(X)[2];
##   J = dim(y)[2]+1;

##   m.0=array(0, dim=c(ncol(X), ncol(y)));
##   P.0=array(0, dim=c(ncol(X), ncol(X), ncol(y)));
##   ok = mult.check.parameters(y, X, n, m.0, P.0, 1, 0);
##   if (!ok) return(NA);

##   ## Our combine_data function, written in C, uses t(X), t(y).
##   ty = t(y);
##   tX = t(X);

##   OUT = .C("mult_combine",
##            as.double(ty), as.double(tX), as.double(n),
##            as.integer(N), as.integer(P), as.integer(J),
##            PACKAGE="BayesLogit");

##   N = OUT[[4]];

##   ty = array(as.numeric(OUT[[1]]), dim=c(J-1, N));
##   tX = array(as.numeric(OUT[[2]]), dim=c(P, N));
##   n  = array(as.numeric(OUT[[3]]), dim=c(N));

##   list("y"=t(ty), "X"=t(tX), "n"=as.numeric(n));
## }

## ## Posterior for multinomial logistic regression
## ##------------------------------------------------------------------------------
## mlogit <- function(y, X, n=rep(1,nrow(as.matrix(y))),
##                    m.0=array(0, dim=c(ncol(X), ncol(y))),
##                    P.0=array(0, dim=c(ncol(X), ncol(X), ncol(y))),
##                    samp=1000, burn=500)
## {
##   ## In the event y or X is one dimensional.
##   X = as.matrix(X);
##   y = as.matrix(y);

##   ## Combine data.  We do this so that the auxiliary variable matches the
##   ## data.
##   new.data = mlogit.combine(y, X, n);
##   if (!is.list(new.data)) return(NA);
##   y = new.data$y;
##   X = new.data$X;
##   n = new.data$n;

##   N = dim(X)[1];
##   P = dim(X)[2];
##   J = dim(y)[2]+1;

##   ## Check that the data and priors are okay.
##   ok = mult.check.parameters(y, X, n, m.0, P.0, samp, burn);
##   if (!ok) return(NA)

##   ## Initialize output.
##   output = list();

##   ## w    = array(known.w, dim=c(N, samp));
##   ## beta = array(known.beta, dim=c(P , samp));
##   w    = array(0.0, dim=c(N, J-1, samp));
##   beta = array(0.0, dim=c(P, J-1, samp));

##   ## Our Logit function, written in C, uses t(X), t(y).
##   tX = t(X);
##   ty = t(y);

##   OUT = .C("mult_gibbs",
##            w, beta,
##            as.double(ty), as.double(tX), as.double(n),
##            as.double(m.0), as.double(P.0),
##            as.integer(N), as.integer(P), as.integer(J),
##            as.integer(samp), as.integer(burn),
##            PACKAGE="BayesLogit");

##   N = OUT[[8]];

##   ## Transpose for standard output.
##   w    = array(0, dim=c(samp, N, J-1));
##   beta = array(0, dim=c(samp, P, J-1));
##   for (i in 1:samp) {
##     w[i,,]    = OUT[[1]][,,i]
##     beta[i,,] = OUT[[2]][,,i]
##   }

##   output = list("w"=w, "beta"=beta, "y"=y, "X"=X, "n"=n);

##   output
## }
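## An illustrative sanity check (added here; not part of the original file):
## E[PG(h, z)] = h * tanh(z/2) / (2 * z), reducing to h/4 at z = 0, so draws
## from the hybrid sampler can be compared against the closed-form mean.
## Assumes the compiled package (rpg_hybrid) is available.
if (FALSE) {
  library(BayesLogit)
  h = 2; z = 1
  x = rpg(100000, h, z)
  c(sample.mean = mean(x), exact.mean = h * tanh(z/2) / (2*z))  # both ~0.462
}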
File: BayesLogit/R/LogitWrapper.R
# (C) Nicholas Polson, James Scott, Jesse Windle, 2012-2019

# This file is part of BayesLogit.

# BayesLogit is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.

# BayesLogit is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU General Public License for more details.

# You should have received a copy of the GNU General Public License along with
# BayesLogit. If not, see <https://www.gnu.org/licenses/>.

################################################################################
                    ## PG(1.0, Z) - FOLLOWING DEVROYE ##
################################################################################

TRUNC = 0.64
cutoff = 1 / TRUNC;

## pigauss - cumulative distribution function for Inv-Gauss(mu, lambda).
##------------------------------------------------------------------------------
pigauss <- function(x, mu, lambda)
{
  Z = 1.0 / mu;
  b = sqrt(lambda / x) * (x * Z - 1);
  a = -1.0 * sqrt(lambda / x) * (x * Z + 1);
  y = exp(pnorm(b, log.p=TRUE)) + exp(2 * lambda * Z + pnorm(a, log.p=TRUE));
  # y2 = 2 * pnorm(-1.0 / sqrt(x));
  y
}

q.and.p <- function(Z)
{
  fz = pi^2 / 8 + Z^2 / 2;
  p = (0.5 * pi) * exp( -1.0 * fz * TRUNC) / fz;
  q = 2 * exp(-1.0 * Z) * pigauss(TRUNC, 1.0/Z, 1.0);
  list("q"=q, "p"=p, "qdivp"=q/p);
}

mass.texpon <- function(Z)
{
  x = TRUNC;
  fz = pi^2 / 8 + Z^2 / 2;
  b = sqrt(1.0 / x) * (x * Z - 1);
  a = -1.0 * sqrt(1.0 / x) * (x * Z + 1);

  x0 = log(fz) + fz * TRUNC;
  xb = x0 - Z + pnorm(b, log.p=TRUE);
  xa = x0 + Z + pnorm(a, log.p=TRUE);

  qdivp = 4 / pi * ( exp(xb) + exp(xa) );

  1.0 / (1.0 + qdivp);
}

mass.detail <- function(Z)
{
  x = TRUNC;
  fz = pi^2 / 8 + Z^2 / 2;
  b = sqrt(1.0 / x) * (x * Z - 1);
  a = -1.0 * sqrt(1.0 / x) * (x * Z + 1);

  x0 = log(fz) + fz * TRUNC;
  xb = x0 - Z + pnorm(b, log.p=TRUE);
  xa = x0 + Z + pnorm(a, log.p=TRUE);

  qdivp = 4 / pi * ( exp(xb) + exp(xa) );

  m = 1.0 / (1.0 + qdivp);

  p = cosh(Z) * 0.5 * pi * exp(-x0);
  q = p * (1/m - 1);

  out = list("qdivp"=qdivp, "m"=m, "p"=p, "q"=q, "c"=p+q, "qdivp2"=q/p);

  out
}

## rtigauss - sample from truncated Inv-Gauss(1/abs(Z), 1.0) 1_{(0, TRUNC)}.
##------------------------------------------------------------------------------
rtigauss <- function(Z, R=TRUNC)
{
  Z = abs(Z);
  mu = 1/Z;
  X = R + 1;
  if (mu > R) {
    alpha = 0.0;
    while (runif(1) > alpha) {
      ## X = R + 1
      ## while (X > R) {
      ##   X = 1.0 / rgamma(1, 0.5, rate=0.5);
      ## }
      E = rexp(2)
      while ( E[1]^2 > 2 * E[2] / R) {
        E = rexp(2)
      }
      X = R / (1 + R*E[1])^2
      alpha = exp(-0.5 * Z^2 * X);
    }
  }
  else {
    while (X > R) {
      lambda = 1.0;
      Y = rnorm(1)^2;
      X = mu + 0.5 * mu^2 / lambda * Y -
        0.5 * mu / lambda * sqrt(4 * mu * lambda * Y + (mu * Y)^2);
      if ( runif(1) > mu / (mu + X) ) {
        X = mu^2 / X;
      }
    }
  }
  X;
}

## rigauss - sample from Inv-Gauss(mu, lambda).
##------------------------------------------------------------------------------
rigauss <- function(mu, lambda)
{
  nu = rnorm(1);
  y  = nu^2;
  x  = mu + 0.5 * mu^2 * y / lambda -
    0.5 * mu / lambda * sqrt(4 * mu * lambda * y + (mu*y)^2);
  if (runif(1) > mu / (mu + x)) {
    x = mu^2 / x;
  }
  x
}

## Calculate coefficient n in density of PG(1.0, 0.0), i.e. J* from Devroye.
##------------------------------------------------------------------------------
a.coef <- function(n,x)
{
  if ( x>TRUNC )
    pi * (n+0.5) * exp( -(n+0.5)^2*pi^2*x/2 )
  else
    (2/pi/x)^1.5 * pi * (n+0.5) * exp( -2*(n+0.5)^2/x )
}

## Samples from PG(n=1.0, psi=Z)
##------------------------------------------------------------------------------
rpg.devroye.1 <- function(Z)
{
  Z = abs(Z) * 0.5;  ## PG(1,z) = 1/4 J*(1,Z/2)

  fz = pi^2 / 8 + Z^2 / 2;
  ## p = (0.5 * pi) * exp( -1.0 * fz * TRUNC) / fz;
  ## q = 2 * exp(-1.0 * Z) * pigauss(TRUNC, 1.0/Z, 1.0);

  num.trials = 0;
  total.iter = 0;

  while (TRUE)
  {
    num.trials = num.trials + 1;

    if ( runif(1) < mass.texpon(Z) ) {
      ## Truncated Exponential
      X = TRUNC + rexp(1) / fz
    }
    else {
      ## Truncated Inverse Normal
      X = rtigauss(Z)
    }

    ## C = cosh(Z) * exp( -0.5 * Z^2 * X )
    ## Don't need to multiply everything by C, since it cancels in inequality.
    S = a.coef(0,X)
    Y = runif(1)*S
    n = 0

    while (TRUE)
    {
      n = n + 1
      total.iter = total.iter + 1;
      if ( n %% 2 == 1 ) {
        S = S - a.coef(n,X)
        if ( Y<=S ) break
      }
      else {
        S = S + a.coef(n,X)
        if ( Y>S ) break
      }
    }

    if ( Y<=S ) break
  }

  ## 0.25 * X
  list("x"=0.25 * X, "n"=num.trials, "total.iter"=total.iter)
}

## Sample from PG(n, Z) using Devroye-like method.
## n is a natural number and z is a positive real.
##------------------------------------------------------------------------------
rpg.devroye.R <- function(num=1, h=1, z=0.0)
{
  n = h
  z = array(z, num);
  n = array(n, num);

  total.trials = 0;

  x = rep(0, num);
  for (i in 1:num) {
    x[i] = 0;
    for (j in 1:n[i]) {
      ## x[i] = x[i] + rpg.devroye.1(z[i])
      temp = rpg.devroye.1(z[i]);
      x[i] = x[i] + temp$x;
      total.trials = total.trials + temp$n;
    }
  }
  ## list("x"=x, "rate"=sum(n)/total.trials)
  x
}

################################################################################
                      ## PG(1.0, Z) - ACCEPT/REJECT ##
################################################################################

## NEED TO RENAME!!!
## rpg.alt.1 <- function(Z)
## {
##   alpha = 0.0;
##   while ( runif(1) > alpha ) {
##     X = rpg.devroye.1(0);
##     alpha = exp(-0.5 * (Z*0.5)^2 * X);
##   }
##   X
## }

## ## Sample PG(1.0, Z) using accept/reject.
## ##------------------------------------------------------------------------------
## rpg.alt.R <- function(num=1, Z=0.0)
## {
##   Z = array(Z, num);
##   x = rep(0, num);
##   for (i in 1:num) {
##     x[i] = rpg.alt.1(Z[i]);
##   }
##   x
## }

################################################################################
                       ## PG(n, Z) - Sum of Gammas ##
################################################################################

## Sample PG(n, z) using sum of Gammas representation.
##------------------------------------------------------------------------------
rpg.gamma.R <- function(num=1, h=1, z=0.0, trunc=200)
{
  n = h
  w = rep(0, num);
  c.i = (1:trunc-1/2)^2 * pi^2 * 4
  a.i = c.i + z^2;
  for(i in 1:num){
    w[i] = 2.0 * sum(rgamma(trunc,n)/a.i)
  }
  w
}

################################################################################
                       ## FOR PURPOSES OF TESTING ##
################################################################################

test.igauss <- function(Z, M=100)
{
  x = c();
  y = c();
  for (i in 1:M) x[i] = rtigauss(Z);
  for (i in 1:M) {
    draw = TRUNC + 1;
    while (draw > TRUNC) {
      draw = rigauss(1/Z, 1.0);
    }
    y[i] = draw;
  }
  print(paste(mean(x), mean(y), sd(x), sd(y)));
  list("x"=x, "y"=y);
}

if (FALSE) {

  M = 10000;

  x2 = c()
  for ( i in 1:M ) x2[i] = rJstar();
  x2 = x2/4.0

  x3 = c()
  for ( i in 1:M ) x3[i] = rpg(2.0)
  x3 = x3/4.0

  par(mfrow=c(1,2))
  hist(x2, breaks=40, prob=T)
  hist(x3, breaks=40, prob=T)

  print(paste(mean(x2), mean(x3), sd(x2), sd(x3) ));

}
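## A small illustrative check (added here; not part of the original file):
## the two pure-R samplers above target the same PG(h, z) law, so their
## sample means should both approach h * tanh(z/2) / (2 * z).
if (FALSE) {
  set.seed(42)
  x.dev = rpg.devroye.R(5000, h=1, z=2.0)
  x.gam = rpg.gamma.R(5000, h=1, z=2.0)
  c(mean(x.dev), mean(x.gam), tanh(1)/4)  # all approximately 0.19
}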
File: BayesLogit/R/PolyaGamma.R
## Copyright 2013 Nick Polson, James Scott, and Jesse Windle. ## This file is part of BayesLogit, distributed under the GNU General Public ## License version 3 or later and without ANY warranty, implied or otherwise. ## h14 = scan("~/Projects/RPackage/BayesLogit/Code/R/h1to4.txt") ## t14 = scan("~/Projects/RPackage/BayesLogit/Code/R/t1to4.txt") ## d14 = scan("~/Projects/RPackage/BayesLogit/Code/R/d1to4.txt") ## e14 = 1/d14 t14 = c(0.64, 0.68, 0.72, 0.75, 0.78, 0.8, 0.83, 0.85, 0.87, 0.89, 0.91, 0.93, 0.95, 0.96, 0.98, 1, 1.01, 1.03, 1.04, 1.06, 1.07, 1.09, 1.1, 1.12, 1.13, 1.15, 1.16, 1.17, 1.19, 1.2, 1.21, 1.23, 1.24, 1.25, 1.26, 1.28, 1.29, 1.3, 1.32, 1.33, 1.34, 1.35, 1.36, 1.38, 1.39, 1.4, 1.41, 1.42, 1.44, 1.45, 1.46, 1.47, 1.48, 1.5, 1.51, 1.52, 1.53, 1.54, 1.55, 1.57, 1.58, 1.59, 1.6, 1.61, 1.62, 1.63, 1.65, 1.66, 1.67, 1.68, 1.69, 1.7, 1.71, 1.72, 1.74, 1.75, 1.76, 1.77, 1.78, 1.79, 1.8, 1.81, 1.82, 1.84, 1.85, 1.86, 1.87, 1.88, 1.89, 1.9, 1.91, 1.92, 1.93, 1.95, 1.96, 1.97, 1.98, 1.99, 2, 2.01, 2.02, 2.03, 2.04, 2.05, 2.07, 2.08, 2.09, 2.1, 2.11, 2.12, 2.13, 2.14, 2.15, 2.16, 2.17, 2.18, 2.19, 2.21, 2.22, 2.23, 2.24, 2.25, 2.26, 2.27, 2.28, 2.29, 2.3, 2.31, 2.32, 2.33, 2.35, 2.36, 2.37, 2.38, 2.39, 2.4, 2.41, 2.42, 2.43, 2.44, 2.45, 2.46, 2.47, 2.48, 2.49, 2.51, 2.52, 2.53, 2.54, 2.55, 2.56, 2.57, 2.58, 2.59, 2.6, 2.61, 2.62, 2.63, 2.64, 2.65, 2.66, 2.68, 2.69, 2.7, 2.71, 2.72, 2.73, 2.74, 2.75, 2.76, 2.77, 2.78, 2.79, 2.8, 2.81, 2.82, 2.83, 2.84, 2.85, 2.87, 2.88, 2.89, 2.9, 2.91, 2.92, 2.93, 2.94, 2.95, 2.96, 2.97, 2.98, 2.99, 3, 3.01, 3.02, 3.03, 3.04, 3.06, 3.07, 3.08, 3.09, 3.1, 3.11, 3.12, 3.13, 3.14, 3.15, 3.16, 3.17, 3.18, 3.19, 3.2, 3.21, 3.22, 3.23, 3.24, 3.25, 3.27, 3.28, 3.29, 3.3, 3.31, 3.32, 3.33, 3.34, 3.35, 3.36, 3.37, 3.38, 3.39, 3.4, 3.41, 3.42, 3.43, 3.44, 3.45, 3.46, 3.47, 3.49, 3.5, 3.51, 3.52, 3.53, 3.54, 3.55, 3.56, 3.57, 3.58, 3.59, 3.6, 3.61, 3.62, 3.63, 3.64, 3.65, 3.66, 3.67, 3.68, 3.69, 3.71, 3.72, 3.73, 3.74, 3.75, 3.76, 3.77, 3.78, 3.79, 3.8, 3.81, 3.82, 3.83, 3.84, 3.85, 3.86, 3.87, 3.88, 3.89, 3.9, 3.91, 3.92, 3.93, 3.95, 3.96, 3.97, 3.98, 3.99, 4, 4.01, 4.02, 4.03, 4.04, 4.05, 4.06, 4.07, 4.08, 4.09, 4.1, 4.11, 4.12, 4.13) ################################################################################ ## J^* ## ################################################################################ p.ch.left <- function(t, h) { 2^h * pgamma(1/t, shape=0.5, rate=0.5*h^2, lower.tail=FALSE) } p.ch.right <- function(t, h) { (4/pi)^h * pgamma(t, shape=h, rate=pi^2 * 0.125, lower.tail=FALSE) } a.coef.alt <- function(n, x, h) { ## a_n(x,h). ## You could precompute lgamma(h), log(2). 
d.n = (2 * n + h) l.out = h * log(2) - lgamma(h) + lgamma(n+h) - lgamma(n+1) + log(d.n) - 0.5 * log(2 * pi * x^3) - 0.5 * d.n^2 / x; out = exp(l.out) out } c.2.coef <- function(n, x) { c.n = (n+1/2) * pi out = (1 - 1/(x*c.n^2)) * c.n^2 * x * exp(-c.n^2 * x / 2) if (n > 0) out = 2 * out out } pg.a.coef.alt <- function(n, x, h, z=0) { cosh(z/2)^h * exp(-0.5 * z^2 * x) * 4 * a.coef.alt(n, 4 * x, h) } jj.m1 <- function(b,z) { if (z > 1e-12) b * tanh(z) / z else b * (1 - (1/3) * z^2 + (2/15) * z^4 - (17/315) * z^6) } jj.m2 <- function(b, z) { if (z > 1e-12) (b+1) * b * (tanh(z)/z)^2 + b * ((tanh(z)-z)/z^3) else (b+1) * b * (1 - (1/3) * z^3 + (2/15) * z^4 - (17/315) * z^6)^2 + b * ((-1/3) + (2/15) * z - (17/315) * z^3); } pg.m1 <- function(b,z) { jj.m1(b,z/2) / 4 } pg.m2 <- function(b,z) { jj.m2(b,z/2) / 16 } ##------------------------------------------------------------------------------ ## SAMPLE TRUNCATED GAMMA ## ##------------------------------------------------------------------------------ ## A RELATIVELY EASY METHOD TO IMPLEMENT rltgamma.dagpunar.1 <- function(shape=1, rate=1, trnc=1) { ## y ~ Ga(shape, rate, trnc) ## x = y/t ## x ~ Ga(shape, rate t, 1) a = shape b = rate * trnc if (shape < 1) return(NA) if (shape == 1) return(rexp(1) / rate + trnc); d1 = b-a d3 = a-1 c0 = 0.5 * (d1 + sqrt(d1^2 + 4 * b)) / b x = 0 accept = FALSE while (!accept) { x = b + rexp(1) / c0 u = runif(1) l.rho = d3 * log(x) - x * (1-c0); l.M = d3 * log(d3 / (1-c0)) - d3 accept = log(u) <= (l.rho - l.M) } x = x / b y = trnc * x y } rltgamma.dagpunar <- function(num=1, shape=1, rate=1, trnc=1) { shape = array(shape, dim=num) rate = array(rate , dim=num) trnc = array(trnc, dim=num) y = rep(0, num) for (i in 1:num) y[i] = rltgamma.dagpunar.1(shape[i], rate[i], trnc[i]); y } rrtinvch2.ch.1 <- function(h, trnc) { R = 1 / (trnc * h^2) E = rexp(2) while ( (E[1]^2) > (2 * E[2] / R)) { ## cat("E", E[1], E[2], E[1]^2, 2*E[2] / R, "\n") E = rexp(2) } ## cat("E", E[1], E[2], "\n") ## W^2 = (1 + R*E[1])^2 / R is left truncated chi^2(1) I_{(R,\infty)}. ## So X is right truncated inverse chi^2(1). X = R / (1 + R*E[1])^2 X = h^2 * X X } rrtinvch2.ch <- function(num, h, trnc) { out = rep(0, num) for (i in 1:num) out[i] = rrtinvch2.ch.1(h, trnc) out } ##------------------------------------------------------------------------------ a.coef.alt.1 <- function(n,x, trnc) { if ( x>trnc ) pi * (n+0.5) * exp( -(n+0.5)^2*pi^2*x/2 ) else (2/pi/x)^1.5 * pi * (n+0.5) * exp( -2*(n+0.5)^2/x ) } g.tilde <- function(x, h, trnc) { if (x > trnc) (0.5 * pi)^h * x^(h-1) / gamma(h) * exp(-pi^2 / 8 * x) else 2^h * h * (2 * pi * x^3)^(-0.5) * exp(-0.5 * h^2 / x) } rch.1 <- function(h) { if (h < 1) return(NA); rate = pi^2 / 8 idx = floor((h-1) * 100) + 1; trnc = t14[idx]; num.trials = 0; total.iter = 0; p.l = p.ch.left (trnc, h); p.r = p.ch.right(trnc, h); p = p.r / (p.l + p.r); max.inner = 100 while (TRUE) { num.trials = num.trials + 1; if ( runif(1) < p ) { ## Left truncated gamma X = rltgamma.dagpunar.1(shape=h, rate=rate, trnc=trnc) } else { ## Right truncated inverse Chi^2 X = rrtinvch2.ch.1(h, trnc) } ## C = cosh(Z) * exp( -0.5 * Z^2 * X ) ## Don't need to multiply everything by C, since it cancels in inequality. 
S = a.coef.alt(0,X,h) ## B = a.coef.alt.1(0, X, trnc) D = g.tilde(X, h, trnc) Y = runif(1) * D n = 0 ## cat("B,C,left", B, C, X<trnc, "\n") a.n = S decreasing = FALSE while (n < max.inner) { n = n + 1 total.iter = total.iter + 1; a.prev = a.n a.n = a.coef.alt(n, X, h) ## b.n = a.coef.alt.1(n, X, trnc) decreasing = a.n < a.prev ## if (!decreasing) cat("n:", n, "; "); if ( n %% 2 == 1 ) { S = S - a.n ## B = B - b.n if ( Y<=S && decreasing) break } else { S = S + a.n ## B = B + b.n if ( Y>S && decreasing) break } } ## cat("S,B =", S, B, "\n") if ( Y<=S ) break } out = list("x"=X, "n"=num.trials, "total.iter"=total.iter) out } rch <- function(num, h) { h = array(h, num) x = rep(0, num) for (i in 1:num) { out = rch.1(h[i]) x[i] = out$x } x } ################################################################################ ## PLOTTING DENSITIES ## ################################################################################ if (FALSE) { ## a_n is absolutely summable. When h = 2 sum |a_n| = 0.5 as x -> infty. ## source("Ch.R") dx = 0.1 xgrid = seq(dx, 10, dx) y1 = xgrid y2 = xgrid n = length(xgrid) for (i in 1:n) { y1[i] = 0 y2[i] = 0 for (j in 0:200) { y1[i] = y1[i] + (-1)^j * a.coef.alt(j,xgrid[i],2) y2[i] = y2[i] + c.2.coef(j,xgrid[i]) } } plot(xgrid, y1) points(xgrid, y2, col=2) } ##------------------------------------------------------------------------------ ## PLOTTING PG DENSITY ## ##------------------------------------------------------------------------------ if (FALSE) { ## source("~/Projects/RPackage/BayesLogit/Code/R/Ch.R") dx = 0.01 xgrid = seq(dx, 3, dx) y1 = xgrid y2 = xgrid y3 = xgrid y4 = xgrid y5 = xgrid n = length(xgrid) for (i in 1:n) { y1[i] = 0 y2[i] = 0 y3[i] = 0 y4[i] = 0 y5[i] = 0 for (j in 0:200) { y1[i] = y1[i] + (-1)^j * pg.a.coef.alt(j,xgrid[i],1) y2[i] = y2[i] + (-1)^j * pg.a.coef.alt(j,xgrid[i],2) y3[i] = y3[i] + (-1)^j * pg.a.coef.alt(j,xgrid[i],3) y4[i] = y4[i] + (-1)^j * pg.a.coef.alt(j,xgrid[i],1, 2.0) y5[i] = y5[i] + (-1)^j * pg.a.coef.alt(j,xgrid[i],1, 4.0) } } ## png(filename="pg-dens.png", width=800, height=400) par(mfrow=c(1,2)) ymax = max(c(y1,y2,y3)) plot(xgrid, y1, type="l", ylim=c(0,ymax), main="Density of PG(b,0)", xlab="x", ylab="f(x|b,0)") lines(xgrid, y2, type="l", col=2, lty=2) lines(xgrid, y3, type="l", col=4, lty=4) legend("topright", legend=c("b=1", "b=2", "b=3"), col=c(1,2,4), lty=c(1,2,4)) ## hist(rpg.devroye(10000, 1, 0), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(0, 0, 0, 16, maxColorValue=255)) ## hist(rpg.devroye(10000, 2, 0), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(255, 0, 0, 16, maxColorValue=255)) ## hist(rpg.devroye(10000, 3, 0), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(0, 0, 255, 16, maxColorValue=255)) ymax = max(c(y1,y4,y5)) plot(xgrid, y1, type="l", ylim=c(0,ymax), xlim=c(0,1), main=expression(paste("Density of PG(1,", psi, ")", sep="")), xlab="x", ylab=expression(paste("f(x|1,", psi, ")", sep=""))) lines(xgrid, y4, type="l", col=2, lty=2) lines(xgrid, y5, type="l", col=4, lty=4) legend("topright", legend=c(expression(paste(psi, "=0", sep="")), expression(paste(psi, "=2", sep="")), expression(paste(psi, "=4", sep=""))), col=c(1,2,4), lty=c(1,2,4)) ## hist(rpg.devroye(10000, 1, 0), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(0, 0, 0, 16, maxColorValue=255)) ## hist(rpg.devroye(10000, 1, 2), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(255, 0, 0, 16, maxColorValue=255)) ## hist(rpg.devroye(10000, 1, 4), add=TRUE, prob=TRUE, breaks=100, ## col=rgb(0, 0, 255, 16, maxColorValue=255)) dev.off() } 
################################################################################ ## TILTED J^* ## ################################################################################ ## pigauss - cumulative distribution function for Inv-Gauss(mu, lambda). ##------------------------------------------------------------------------------ pigauss <- function(x, Z=1, lambda=1) { ## I believe this works when Z = 0 ## Z = 1/mu b = sqrt(lambda / x) * (x * Z - 1); a = sqrt(lambda / x) * (x * Z + 1) * -1.0; y = exp(pnorm(b, log.p=TRUE)) + exp(2 * lambda * Z + pnorm(a, log.p=TRUE)); # y2 = 2 * pnorm(-1.0 / sqrt(x)); y } p.tilt.left <- function(trnc, h, z) { out = 0 if (z == 0) { out = p.ch.left(trnc, h) } else { out = (2^h * exp(-z*h)) * pigauss(trnc, Z=z/h, h^2) } out } p.tilt.right <- function(trcn, h, z) { ## Note: this works when z=0 lambda.z = pi^2/8 + z^2/2 (pi/2/lambda.z)^h * pgamma(trcn, shape=h, rate=lambda.z, lower.tail=FALSE) } ## rigauss - sample from Inv-Gauss(mu, lambda). ##------------------------------------------------------------------------------ rigauss <- function(mu, lambda) { nu = rnorm(1); y = nu^2; x = mu + 0.5 * mu^2 * y / lambda - 0.5 * mu / lambda * sqrt(4 * mu * lambda * y + (mu*y)^2); if (runif(1) > mu / (mu + x)) { x = mu^2 / x; } x } rrtigauss.ch.1 <- function(h, z, trnc=1) { ## trnc is truncation point z = abs(z); mu = h/z; X = trnc + 1; if (mu > trnc) { alpha = 0.0; while (runif(1) > alpha) { X = rrtinvch2.ch.1(h, trnc) alpha = exp(-0.5 * z^2 * X); } ## cat("rtigauss.ch, part i:", X, "\n"); } else { while (X > trnc) { lambda = h^2; Y = rnorm(1)^2; X = mu + 0.5 * mu^2 / lambda * Y - 0.5 * mu / lambda * sqrt(4 * mu * lambda * Y + (mu * Y)^2); if ( runif(1) > mu / (mu + X) ) { X = mu^2 / X; } } ## cat("rtiguass, part ii:", X, "\n"); } X; } rrtigauss.ch <- function(num, h, z, trnc=1) { x = rep(0, num) for (i in 1:num) x[i] = rrtigauss.ch.1(h, z, trnc) x } rpg.alt.1 <- function(h, z) { z = z/2 if (h < 1 || h > 4) return(NA); rate = pi^2 / 8 + z^2 / 2 idx = floor((h-1) * 100) + 1; trnc = t14[idx]; p.l = p.tilt.left (trnc, h, z); p.r = p.tilt.right(trnc, h, z); p = p.r / (p.l + p.r); ## cat("prob.right:", p, "\n") num.trials = 0; total.iter = 0; max.outer = 1000 max.inner = 1000 while (num.trials < max.outer) { num.trials = num.trials + 1; uu = runif(1) if ( uu < p ) { ## Left truncated gamma X = rltgamma.dagpunar.1(shape=h, rate=rate, trnc=trnc) } else { ## Right truncated inverse Chi^2 ## Note: this sampler works when z=0. X = rrtigauss.ch.1(h, z, trnc) } ## C = cosh(Z) * exp( -0.5 * Z^2 * X ) ## Don't need to multiply everything by C, since it cancels in inequality. 
      S = a.coef.alt(0,X,h)
      ## S = a.coef.alt.1(0, X, trnc)
      D = g.tilde(X, h, trnc)
      Y = runif(1) * D
      n = 0

      ## cat("B,C,left", B, C, X<trnc, "\n")
      ## cat("test gt:", g.tilde(trnc * 0.1, h, trnc), "\n");
      ## cat("X, Y, S, gt:", X, Y, S, D,"\n");

      a.n = S
      decreasing = FALSE
      go = TRUE

      while (go && n < max.inner) {
        n = n + 1
        total.iter = total.iter + 1;
        a.prev = a.n
        a.n = a.coef.alt(n, X, h)
        ## a.n = a.coef.alt.1(n, X, trnc)
        decreasing = a.n <= a.prev
        ## if (!decreasing) cat("n:", n, "; ");
        if ( n %% 2 == 1 ) {
          S = S - a.n
          ## B = B - b.n
          if ( Y<=S && decreasing) break
          ## if ( Y<=S && decreasing) return(0.25 * X)
        }
        else {
          S = S + a.n
          ## B = B + b.n
          ## BUG FIX: the original read `go = break`, which is not valid R;
          ## a rejected proposal should just leave the inner loop.
          if ( Y>S && decreasing) go = FALSE
        }
      }
      ## cat("S,B =", S, B, "\n")

      ## Need to check max.outer
      if ( Y<=S ) break

    }

  X = 0.25 * X
  out = list("x"=X, "num.trials"=num.trials, "total.iter"=total.iter)

  out
}

## rpg.alt.4 <- function(num, h, z)
## {
##   if (h < 1) return(NA)
##   z = array(z, num)
##   h = array(h, num)
##   out = list(
##     draw = rep(0, num),
##     num.trials = rep(0, num),
##     total.iter = rep(0, num)
##     )
##   for (i in 1:num) {
##     draw = rpg.alt.1(h[i], z[i])
##     out$draw[i] = draw$x
##     out$num.trials[i] = draw$num.trials
##     out$total.iter[i] = draw$total.iter
##   }
##   out
## }

rpg.alt.R <- function(num, h, z)
{
  if (any(h < 1)) { return(NA) }
  h = array(h, num)
  z = array(z, num)
  n = floor( (h-1.) / 4. )
  remain = h - 4. * n
  ## BUG FIX: `out` was used below without ever being initialized.
  out = list(
    draw = rep(0, num),
    num.trials = rep(0, num),
    total.iter = rep(0, num)
    )
  for (i in 1:num) {
    x = 0.0
    ## seq_len guards the n[i] = 0 case, where 1:n[i] would wrongly iterate.
    for (j in seq_len(n[i])) {
      draw = rpg.alt.1(4.0, z[i])
      x = x + draw$x
    }
    if (remain[i] > 4.0) {
      ## BUG FIX: the original added the two list return values directly;
      ## combine the components of the two half-draws instead.
      draw1 = rpg.alt.1(0.5 * remain[i], z[i])
      draw2 = rpg.alt.1(0.5 * remain[i], z[i])
      draw = list(x = draw1$x + draw2$x,
                  num.trials = draw1$num.trials + draw2$num.trials,
                  total.iter = draw1$total.iter + draw2$total.iter)
      x = x + draw$x
    } else {
      draw = rpg.alt.1(remain[i], z[i])
      x = x + draw$x
    }
    out$draw[i] = x
    out$num.trials[i] = draw$num.trials
    out$total.iter[i] = draw$total.iter
  }
  out$draw
}

################################################################################
                              ## CHECK rpg ##
################################################################################

if (FALSE) {

  ## source("Ch.R")
  h = 2.3
  z = 1.1

  num = 20000
  samp.1 = rpg.4(num, h, z)
  samp.2 = rpg.gamma(num, h, z)

  mean(samp.1$draw)
  mean(samp.2)
  mean(samp.1$total.iter)

  hist(samp.1$draw, prob=TRUE, breaks=100)
  hist(samp.2, prob=TRUE, add=TRUE, col="#22000022", breaks=100)

}

if (FALSE) {

  ## source("Ch.R")
  source("ManualLoad.R")

  nsamp = 10000
  n = 1
  z = 0
  seed = sample.int(10000, 1)
  ## seed = 8922

  set.seed(seed)
  samp.a = rpg.alt(nsamp, n, z)
  ## samp.d = rpg.devroye(nsamp, n, z)
  set.seed(seed)
  samp.4 = rpg.4(nsamp, n, z)

  mean(samp.a)
  ## mean(samp.d)
  mean(samp.4$draw)

  hist(samp.a, prob=TRUE, breaks=100)
  hist(samp.4$draw, prob=TRUE, breaks=100, col="#99000022", add=TRUE)

  h = 1.5
  z = 0

  set.seed(seed)
  samp.a = rpg.alt(nsamp, h, z)
  ## samp.g = rpg.gamma(nsamp, h, z)
  set.seed(seed)
  samp.4 = rpg.4(nsamp, h, z)

  mean(samp.a)
  ## mean(samp.g)
  mean(samp.4$draw)

  hist(samp.a, prob=TRUE, breaks=100)
  hist(samp.4$draw, prob=TRUE, breaks=100, col="#99000022", add=TRUE)

}

if (FALSE) {

  ## source("Ch.R")
  source("ManualLoad.R")

  reps = 10
  nsamp = 10000
  n = 4
  z = 0
  seed = sample.int(10000, 1)
  ## seed = 8922

  set.seed(seed)
  start.a = proc.time()
  for (i in 1:reps) {
    samp.a = rpg(nsamp, n, z)
  }
  end.a = proc.time();
  diff.a = end.a - start.a

  set.seed(seed)
  start.d = proc.time()
  for (i in 1:reps) {
    samp.d = rpg.devroye(nsamp, n, z)
  }
  end.d = proc.time()
  diff.d = end.d - start.d

  mean(samp.a)
  mean(samp.d)
  diff.a
  diff.d

  ## hist(samp.a, prob=TRUE, breaks=100)
  ## hist(samp.4$draw, prob=TRUE, breaks=100, col="#99000022", add=TRUE)

  h = 3.5
  z = 0

  set.seed(seed)
  start.a = proc.time()
  for (i in 1:reps) {
    samp.a = rpg(nsamp, h, z)
  }
  end.a = proc.time();
diff.a = end.a - start.a set.seed(seed) start.g = proc.time() for (i in 1:reps) { samp.g = rpg.gamma(nsamp, h, z) } end.g = proc.time() diff.g = end.g - start.g mean(samp.a) mean(samp.g) diff.a diff.g ## hist(samp.a, prob=TRUE, breaks=100) ## hist(samp.4$draw, prob=TRUE, breaks=100, col="#99000022", add=TRUE) } ################################################################################ ################################################################################ if (FALSE) { ## Preliminary: approximating using normal. ## source("Ch.R") dx = 0.1 xgrid = seq(dx, 30, dx) y1 = xgrid y2 = xgrid n = length(xgrid) h = 20 for (i in 1:n) { y1[i] = 0 y2[i] = 0 for (j in 0:200) { y1[i] = y1[i] + (-1)^j * a.coef.alt(j,xgrid[i],h) } } m1 = jj.m1(h, 0) m2 = jj.m2(h, 0) V = m2 - m1^2; y2 = dnorm(xgrid, m1, sqrt(V)); ## png(filename="pg-dens.png", width=800, height=400) par(mfrow=c(1,2)) ymax = max(c(y1,y2)) plot(xgrid, y1, type="l", ylim=c(0,ymax), main="Density of PG(b,0)", xlab="x", ylab="f(x|b,0)") lines(xgrid, y2, type="l", col=2, lty=2) }
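##------------------------------------------------------------------------------
## Moment sanity check (sketch): jj.m1/jj.m2 above give closed-form first and
## second moments of J*(b,z), and rch draws from J*(h,0), so the sample
## moments should line up.  Assumes the truncation table t14 used by rch.1 is
## loaded (it is defined elsewhere in the package sources).
if (FALSE) {
  h = 2.5
  draws = rch(10000, h)
  c(mean(draws),   jj.m1(h, 0))
  c(mean(draws^2), jj.m2(h, 0))
}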
## ---- end of file: BayesLogit/R/PolyaGammaApproxAlt.R ----
# (C) Nicholas Polson, James Scott, Jesse Windle, 2012-2019

# This file is part of BayesLogit.

# BayesLogit is free software: you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free Software
# Foundation, either version 3 of the License, or (at your option) any later
# version.

# BayesLogit is distributed in the hope that it will be useful, but WITHOUT ANY
# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
# A PARTICULAR PURPOSE. See the GNU General Public License for more details.

# You should have received a copy of the GNU General Public License along with
# BayesLogit. If not, see <https://www.gnu.org/licenses/>.

## HERE I AM USING log laplace TRANSFORM.

## source("Ch.R")

a.coef.sp <- function(n, x, h, z)
{
  ## a_n(x,h).
  ## You could precompute lgamma(h), log(2).
  d.n = (2 * n + h)
  l.out = h * log(2) - lgamma(h) + lgamma(n+h) - lgamma(n+1) + log(d.n) -
    0.5 * log(2 * pi * x^3) - 0.5 * d.n^2 / x - 0.5 * z^2 * x;
  out = cosh(z)^h * exp(l.out)
  out
}

utox.laplace <- function(u)
{
  ## tanh(sqrt(u))/sqrt(u)
  out = u
  gt.idc = u >  1e-6
  lt.idc = u < -1e-6
  md.idc = u <= 1e-6 & u >= -1e-6

  r = sqrt(abs(u))

  gt.val = r[gt.idc]
  out[gt.idc] = tanh(gt.val)/gt.val

  lt.val = r[lt.idc]
  out[lt.idc] = tan(lt.val)/ lt.val

  md.val = r[md.idc]
  out[md.idc] = 1 - (1/3) * md.val^2 + (2/15) * md.val^4 - (17/315) * md.val^6

  out
}

utox.mgf <- function(u)
{
  utox.laplace(-1*u)
}

k.laplace <- function(t,z=0)
{
  s = 2*t + z^2;
  u = sqrt(abs(s));
  out = log(cosh(u))
  out[s<0] = log(cos(u[s<0]));
  ## out = ifelse(s >= 0, log(cosh(u)), log(cos(u)))
  out = log(cosh(z)) - out
  out
}

k1.laplace <- function(t,z=0)
{
  u = 2*t+z^2
  ## TYPO FIX (apparent): K(t) = log cosh(z) - log cosh(sqrt(u)) gives
  ## K'(t) = -tanh(sqrt(u))/sqrt(u); the original squared this quantity.
  -1 * utox.laplace(u)
}

k2.laplace <- function(t,z=0)
{
  u = 2*t+z^2
  ## BUG FIX: the original set a1 = k2.laplace(t,z), an infinite recursion.
  ## Differentiating K'(t) = -x, with x = tanh(sqrt(u))/sqrt(u), gives
  ## K''(t) = x^2 - (1-x)/u, which matches the K'' expressions used in
  ## u.dens and sp.approx.1 after the mgf sign flip.
  a1 = utox.laplace(u)^2;
  a2 = (1-utox.laplace(u)) / u
  a1 - a2
}

k.mgf <- function(t,z=0)
{
  k.laplace(-1*t,z)
}

k1.mgf <- function(t, z=0)
{
  -1* k1.laplace(-1*t,z)
}

k2.mgf <- function(t,z=0)
{
  k2.laplace(-1*t,z)
}

################################################################################

log.cos.rt <- function(u)
{
  r = sqrt(abs(u))
  out = log(cosh(r))
  out[u>0] = log(cos(r[u>0]));
  out
}

log.sin.rt <- function(u)
{
  r = sqrt(abs(u))
  out = log(sinh(r))
  out[u>0] = log(sin(r[u>0]));
  out
}

u.dens <- function(u, n=1, z=0)
{
  x = utox.mgf(u)
  out = 0.5 * cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^0.5 *
    exp(-n * log.cos.rt(u) - n * 0.5 * (u+z^2) * x);
  out
}

u.dens.2 <- function(u, n=1, z=0)
{
  x = utox.mgf(u)
  out = 0.5 * cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^0.5 *
    exp(-n * (log.sin.rt(u) - log(sqrt(abs(u))) - log(x)) - n * 0.5 * (u+z^2) * x);
  out
}

u.a1 <- function(u, n=1, z=0)
{
  ## TYPO FIX (apparent), here and in u.a2: the quadratic coefficient of
  ## tan(sqrt(u))/sqrt(u) is 2/15, as used in utox.laplace above; the
  ## original had 3/15.
  x = 1 + 1/3 * u + 2/15 * u^2
  out = 0.5 * cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^0.5 *
    exp(-n * log.cos.rt(u) - n * 0.5 * (u+z^2) * x);
  out
}

u.a2 <- function(u, n=1, z=0)
{
  x = 1 + 1/3 * u + 2/15 * u^2
  out = 0.5 * cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^0.5 *
    exp(-n * (log(pi/2) + log(x)) - n * 0.5 * (u+z^2) * x);
  out
}

x.dens.1 <- function(u, n=1, z=0)
{
  x = utox.mgf(u)
  out = cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^(-0.5) *
    exp(-n * log.cos.rt(u) - n * 0.5 * (u+z^2) * x);
  out
}

x.dens.2 <- function(u, n=1, z=0)
{
  x = utox.mgf(u)
  out = cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^(-0.5) *
    exp(-n * (log.sin.rt(u) - log(sqrt(abs(u))) - log(x)) - n * 0.5 * (u+z^2) * x);
  out
}

x.a1 <- function(u, n=1, z=0)
{
  x = utox.mgf(u)
  out = cosh(z)^n * (0.5*n/pi)^0.5 * (x^2 + (1-x) / u)^(-0.5) *
    exp(-n * (log(1-u/6) - log(x)) - n * 0.5 * (u+z^2) * x);
  out
}

delta.val <-
function(x) { ifelse(x < 1, 0.5 * (1-1/x), log(x)) } invert.x.1 <- function(x, xgrid, ugrid) { out = list("u"=NA) if (x > max(xgrid)) return(out) dxgrid = x - xgrid idx = which.min(abs(dxgrid)) inc = 0 if (dxgrid[idx] < 0) inc = 1 if (dxgrid[idx] > 0) inc = -1 dx = xgrid[idx+inc] - xgrid[idx] du = ugrid[idx+inc] - ugrid[idx] ap = ugrid[idx] if (inc!=0) ap = du / dx * (x - xgrid[idx]) + ap out$u = ap out$idx = idx out } invert.x <- function(x, xgrid, ugrid) { N = length(x) ap = rep(0, N) for (i in 1:N) { out = invert.x.1(x[i], xgrid, ugrid) ap[i] = out$u } ap } ################################################################################ approx1 <- function(x, h, z) { ## cosh(z)^h * (h/2/pi)^0.5 * (x^2 * (2 - x))^(-0.5) * cosh(1/x)^(-h) * exp(-0.5 * h / x) * exp(-0.5 * z^2 * h * x) (h/2/pi)^0.5 * x^(pi/2-1) ## * exp(-h*x*arccos(2/pi/x)^2) # * cosh(z)^h * exp(-0.5 * z^2 * h * x) } ################################################################################ if (FALSE) { par(mfrow=c(1,3)) umin = -10 umax = (pi/2)^2 - 0.5 du = 0.01 ugrid = seq(umin, umax, du) xgrid = utox.mgf(ugrid) plot(xgrid, ugrid) tgrid = seq(-10,10, 0.1) kgrid = k(tgrid, 0.0) ## plot(tgrid, kgrid) z = 0.0 n = 100.0 ugrid = seq(umin, umax, du) ## ugrid = 2*tgrid - z^2 tgrid = 0.5 * (ugrid + z^2) xgrid = utox.mgf(ugrid); kgrid = k.mgf(tgrid, z) term1 = (0.5 * n / pi)^(0.5) * (xgrid^2 + (1-xgrid) / ugrid)^(-0.5) term2 = exp(n * (kgrid - tgrid * xgrid)); ygrid = term1 * term2 plot(xgrid, ygrid) plot(xgrid, log(ygrid)) udens = (0.5 * n / pi)^(0.5) * (xgrid^2 + (1-xgrid) / ugrid)^(0.5) * exp(n * (kgrid - tgrid * xgrid)); plot(ugrid, udens) plot(ugrid, log(udens)) } if (FALSE) { ## Preliminary: approximating using normal. ## source("SaddlePointApprox.R") ## png("pg-sp-dens.png", width=600, height=300); par(mfrow=c(1,2)) z = 0.0 n = 10.0 umin = -100 umax = (pi/2)^2 - 0.5 du = 0.1 ugrid = seq(umin, umax, du) tgrid = 0.5 * (ugrid + z^2) xgrid = utox.mgf(ugrid); kgrid = k.mgf(tgrid, z) term1 = (0.5 * n / pi)^(0.5) * (xgrid^2 + (1-xgrid) / ugrid)^(-0.5) term2 = exp(n * (kgrid - tgrid * xgrid)); sp.approx = term1 * term2 ## plot(xgrid, sp.approx) y1 = xgrid y2 = xgrid N = length(xgrid) for (i in 1:N) { y1[i] = 0 y2[i] = 0 for (j in 0:200) { y1[i] = y1[i] + (-1)^j * a.coef.sp(j,xgrid[i] * n ,n, z) * n } } m1 = jj.m1(n, z) / n m2 = jj.m2(n, z) / n^2 V = m2 - m1^2; sv = sqrt(V) y2 = dnorm(xgrid, m1, sv); y3 = dt((xgrid - m1) / sv, 6) / sv ## y4 = approx1(xgrid, n, z) ## png(filename="pg-dens.png", width=800, height=400) ## par(mfrow=c(1,2)) ymax = max(c(sp.approx), na.rm=TRUE) plot(xgrid, y1, type="l", ylim=c(0,ymax), main="Density of JJ(b,z)", xlab="x", ylab="f(x|b,0)") ## lines(xgrid, y2, col=2, lty=2) ## lines(xgrid, y3, col=3, lty=3) ## lines(xgrid, y3, type="l", col=2, lty=2) lines(xgrid, sp.approx, col=4, lty=4) legend("topright", legend=c("J*", "S.P."), col=c(1,4), lty=c(1,4)) plot(xgrid, log(y1), ylim=c(log(min(sp.approx, na.rm=TRUE)),log(ymax)), type="l") lines(xgrid, log(sp.approx), col=4, lty=4) ## a0 = x.dens.2(ugrid, n, z) ## lines(xgrid, log(a0), col=2, lty=2) a1 = x.a1(ugrid, n, z) lines(xgrid, log(a1), col=2, lty=2) ## a2 = x.a2(ugrid, n, z) ## lines(xgrid, log(a2), col=3, lty=3) ## ---------- ymax = max(c(sp.approx), na.rm=TRUE) plot(xgrid/4*n, 4*y1/n, type="l", ylim=c(0,4*ymax/n), main=paste("Density of PG(", n, ")", sep=""), xlab="x", ylab="f") ## lines(xgrid, y2, col=2, lty=2) ## lines(xgrid, y3, col=3, lty=3) ## lines(xgrid, y3, type="l", col=2, lty=2) lines(xgrid/4*n, 4*sp.approx/n, col=4, lty=4) 
legend("topright", legend=c("PG", "S.P."), col=c(1,4), lty=c(1,4)) ##---------------------------------------------------------------------------- ## png("eta-phi-envelope.png", width=800, height=400) par(mfrow=c(1,2)) equigrid = seq(0, max(xgrid), 0.1) zerogrid = rep(0, length(equigrid)) deltaxgrid = delta.val(xgrid) plot(xgrid, (kgrid - tgrid * xgrid) - deltaxgrid, col=1, type="l", xlab="x", ylab="phi(x)-phi(1) scale", main="eta envelope") lines(xgrid, -1*deltaxgrid, col=1, lty=2) lines(xgrid, (kgrid - tgrid * xgrid), col=2, lty=3) lines(equigrid, zerogrid, col=2, lty=4) x.l = .75 x.r = 4/3 l.u = invert.x(x.l, xgrid, ugrid) u.r = invert.x(x.r, xgrid, ugrid) t.l = 0.5 * l.u t.r = 0.5 * u.r left.slope = -t.l - 0.5 / x.l^2 right.slope = -t.r - 1 / x.r l.int = k.mgf(t.l, z) - t.l * x.l - 0.5 * (1-1/x.l) r.int = k.mgf(t.r, z) - t.r * x.r - log(x.r) left.line = (xgrid - x.l) * left.slope + l.int right.line = (xgrid - x.r) * right.slope + r.int pw.line = -deltaxgrid ## left.cross = which.min(abs(left.line+deltaxgrid)) left.cross = which.min(abs(xgrid-1)) left.idx = 1:left.cross pw.line[left.idx] = left.line[1:left.cross] ## right.cross = which.min(abs(right.line+deltaxgrid)) right.cross = which.min(abs(xgrid-1)) right.idx = right.cross:length(pw.line) pw.line[right.idx] = right.line[right.idx] ## lines(xgrid, left.line, col=3) ## lines(xgrid, right.line, col=3) lines(xgrid, pw.line, col=3) legend("bottom", legend=c("eta(x)-phi(1)", "phi(x)-phi(1)", "-delta(x)", "eta envelope"), col=c(1,2,1,3), lty=c(1,3,2,1)) plot(xgrid, (kgrid - tgrid * xgrid), col=2, type="l", lty=1, xlab="x", ylab="phi(x)-phi(1) scale", main="phi envelope") lines(xgrid, pw.line + deltaxgrid, col=3) lines(equigrid, zerogrid, col=2, lty=4) legend("bottom", legend=c("phi(x)-phi(m)","phi envelope"), col=c(2,3), lty=c(3,1)) ##---------------------------------------------------------------------------- ## Key here for approximation. Use second order approx below. I think second ## is better because we then have x^n for large x. plot(xgrid, log.cos.rt(ugrid) + 0.5) plot(xgrid, log.sin.rt(ugrid) - 0.5 * log(abs(ugrid))) b1 = log.cos.rt(ugrid) + 0.5 * ugrid * xgrid b2 = log.sin.rt(ugrid) - 0.5 * log(abs(ugrid)) - log(xgrid) + 0.5 * ugrid * xgrid plot(xgrid, b1) plot(xgrid, b2) lines(xgrid, -log(xgrid), col=2) ## z = 0.0 b3 = log.sin.rt(ugrid) - 0.5 * log(abs(ugrid)) + 0.5 * ugrid * xgrid + 0.5 * z^2 * xgrid b4 = (xgrid^2 + (1-xgrid) / ugrid) plot(xgrid, b3) plot(xgrid, b4^-0.5) lines(xgrid, 1/xgrid, col=2) plot(xgrid, exp(b3)) plot(xgrid, b3) a = 1.09 + z^2 / 2 b = 0.175 iggrid = (a * xgrid + b / xgrid) - 1.3 lines(xgrid, iggrid, col=2) b5 = log.sin.rt(ugrid) - 0.5 * log(abs(ugrid)) plot(xgrid, b5) plot(xgrid, ugrid * xgrid) plot(xgrid, b4 / xgrid^2) plot(xgrid, b4 / xgrid^(3)) plot(xgrid, b4 / xgrid^(3)) plot(xgrid, b4) lines(xgrid, xgrid^3, col=2) plot(xgrid, -(1-xgrid)/ugrid) plot(xgrid, xgrid^3 - b4) ## Plotting in u. 
plot(ugrid, b4 / xgrid^2) plot(ugrid, b4 / xgrid^3) ## k3 b6 = 2 * xgrid * b4 - 0.5 * (1-xgrid) / tgrid^2 - 0.5 * b4 / tgrid b7 = b6 * xgrid plot(xgrid, b6) plot(xgrid, b7) b8 = -2 * b4 / xgrid^3 + b6 / xgrid^2 / b4 b9 = -3 * b4 / xgrid^4 + b6 / xgrid^3 / b4 plot(xgrid, b8) plot(xgrid, b9) b10 = -2 * b4^2 / xgrid^3 + b6 / xgrid^2 b11 = -3 * b4^2 / xgrid^4 + b6 / xgrid^3 plot(tgrid, b10) plot(tgrid, b11) b12 = cot(sqrt(2*tgrid)) ##---------------------------------------------------------------------------- ## plot(xgrid, sp.approx, col=4, type="l") udens1 = 0.5 * cosh(z)^n * (0.5 * n / pi)^(0.5) * (xgrid^2 + (1-xgrid) / ugrid)^(0.5) * exp(n * (kgrid - tgrid * xgrid)); udens2 = u.dens(ugrid, n, z) udens3 = u.a1(ugrid, n, z) udens4 = u.a2(ugrid, n, z) udens5 = u.dens.2(ugrid, n, z) plot(ugrid, udens1, type="l") lines(ugrid, udens2, col=2) lines(ugrid, udens3, col=3) lines(ugrid, udens4, col=4) lines(ugrid, udens5, col=5) blah1 = - log.cos.rt(ugrid) blah2 = log(pi/2) + log(xgrid) ##---------------------------------------------------------------------------- N = 10000 c.n = ((1:N)-1/2)^2 * pi^2 / 2 lw.fact <- function(u, n=1) { a = log(1 - outer(u, c.n, "/")); n * apply(a, 1, sum) } lwf.1 = lw.fact(ugrid, n) lwf.2 = n * log.cos.rt(ugrid) plot(ugrid, lwf.1) plot(ugrid, lwf.2) calc.psi <- function(c.n) { N = length(c.n) psi.0 = psi.1 = psi.2 = psi.3 = rep(0, N) psi.alt = psi.0 for (i in 1:N) { a.i = 1-c.n[i]/c.n[-i] b.i = 1 / (c.n[i] * a.i) psi.0[i] = log(c.n[i]) - sum(log(abs(a.i))) psi.1[i] = sum(b.i) psi.2[i] = sum(b.i^2) psi.3[i] = sum(b.i^3) psi.alt[i] = c.n[i] * prod(1/a.i) } phi.0 = exp(psi.0) phi.1 = phi.0 * psi.1 phi.2 = phi.0 * psi.1^2 + phi.0 * psi.2 phi.3 = phi.0 * psi.1^3 + 3 * phi.0 * psi.1 * psi.2 + phi.0 * psi.3 out <- list(psi.0=psi.0, psi.1=psi.1, psi.2=psi.2, psi.3=psi.3, psi.alt=psi.alt, phi.0=phi.0, phi.1=phi.1, phi.2=phi.2, phi.3=phi.3) out } the.psi = calc.psi(c.n) plot(c.n, exp(the.psi$psi.0)) }
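##------------------------------------------------------------------------------
## Numerical check (sketch): for z = 0 the saddle point relation x = K'(t)
## should hold, i.e. utox.mgf(2*t) should agree with a central-difference
## estimate of the derivative of the cumulant k.mgf at t.
if (FALSE) {
  t = 0.3
  eps = 1e-5
  k1.numeric = (k.mgf(t + eps) - k.mgf(t - eps)) / (2 * eps)
  c(k1.numeric, utox.mgf(2 * t))
}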
## ---- end of file: BayesLogit/R/SaddlePointApprox.R ----
## Copyright 2013 Nick Polson, James Scott, and Jesse Windle.

## This file is part of BayesLogit, distributed under the GNU General Public
## License version 3 or later and without ANY warranty, implied or otherwise.

################################################################################

a.coef.sp <- function(n, x, h, z=0.0)
{
  ## a_n(x,h).
  ## You could precompute lgamma(h), log(2).
  d.n = (2 * n + h)
  l.out = h * log(2) - lgamma(h) + lgamma(n+h) - lgamma(n+1) + log(d.n) -
    0.5 * log(2 * pi * x^3) - 0.5 * d.n^2 / x - 0.5 * z^2 * x;
  out = cosh(z)^h * exp(l.out)
  out
}

djstar <- function(xg, h, z, N=10)
{
  a = outer(0:N, xg, function(n,x){ (-1)^n*a.coef.sp(n, x, h, z) })
  s = apply(a, 2, cumsum)
  out = list("a"=a, "s"=s)
}

################################################################################

y.func <- function(v)
{
  ## tan(sqrt(v))/sqrt(v)
  out = v
  gt.idc = v >  1e-6
  lt.idc = v < -1e-6
  md.idc = v <= 1e-6 & v >= -1e-6

  r = sqrt(abs(v))

  gt.val = r[gt.idc]
  out[gt.idc] = tan(gt.val)/gt.val

  lt.val = r[lt.idc]
  out[lt.idc] = tanh(lt.val)/ lt.val

  md.val = r[md.idc]
  out[md.idc] = 1 + (1/3) * md.val^2 + (2/15) * md.val^4 + (17/315) * md.val^6

  out
}

v.approx <- function(y)
{
  ifelse(y >= 1, atan(y * pi / 2)^2, -1 / y^2)
}

v.func.1 <- function(y)
{
  if (y==1) return(list(root=0.0, f.root=1.0, iter=0, estim.prec=1e-16))
  f <- function(v) { y - y.func(v) }
  lowerb = 0;
  upperb = 0;
  if (y > 1) upperb = (pi/2)^2
  if (y < 1) lowerb = min(-5, v.approx(y/2))
  out = uniroot(f, lower=lowerb, upper=upperb, maxiter=10000, tol=1e-8)
  out
}

## v.func.1.alt <- function(y, lowerb, upperb)
## {
##   if (y==1) return(list(root=0.0, f.root=1.0, iter=0, estim.prec=1e-16))
##   f <- function(v) { y - y.func(v) }
##   out = uniroot(f, lower=lowerb, upper=upperb, maxiter=10000, tol=1e-8)
##   out
## }

v.secant.1 <- function(y, vb, va, tol=1e-8, maxiter=10)
{
  ## Assumes increasing and convex function.
  yb = y.func(vb)
  ya = y.func(va)
  if (yb > ya) return(NA)
  iter = 0
  ydiff = tol + 1
  while(abs(ydiff) > tol && iter < maxiter) {
    iter = iter + 1
    m = (ya - yb) / (va - vb)
    vstar = (y-yb) / m + vb;
    ystar = y.func(vstar)
    ydiff = y - ystar;
    if (ystar < y) {
      vb = vstar
      yb = ystar
    } else {
      va = vstar
      ## BUG FIX: the original updated yb here, leaving ya stale and
      ## breaking the secant slope; the upper bracket value must be updated.
      ya = ystar
    }
    ## cat("y, v, ydiff:", ystar, vstar, ydiff, "\n")
  }
  out = list(iter=iter, ydiff=ydiff, y=ystar, v=vstar)
  out
}

v.func <- function(y)
{
  ## y inverse.
  N = length(y)
  a = rep(0,N)
  out = list(root=a, f.root=a, iter=a, estim.prec=a)
  for (i in 1:N) {
    temp = v.func.1(y[i])
    out$root[i] = temp$root
    out$f.root[i] = temp$f.root
    out$iter[i] = temp$iter
    out$estim.prec[i] = temp$estim.prec
  }
  ## out = simplify2array(out)
  out$root
}

v.iterative <- function(y, ygrid, vgrid, dy)
{
  ## Assume ygrid is equally spaced.
  ## Assume ygrid is increasing.
N = length(ygrid) idx = floor((y-ygrid[1]) / dy) + 1 if (idx >= N || idx<1) return(NA) dv = vgrid[idx+1] - vgrid[idx] ## dy = ygrid[idx+1] - ygrid[idx] ## Just a single secant approximation ## vapprox = (dv / dy) * (y - ygrid[idx]) + vgrid[idx] vb = vgrid[idx] va = vgrid[idx+1] vout = v.secant.1(y, vb, va) vapprox = vout$v ## vout = v.func.1.alt(y, vb, va) ## vapprox = vout$root print(vout) vapprox } ################################################################################ ## if (FALSE) { ## dyl = 0.01 ## yleft = seq(0.1, 1, dyl) ## vleft = v.func(yleft) ## dyr = 0.01 ## yright = seq(1, 8, dyr) ## vright = v.func(yright) ## } ## v.table <- function(y) ## { ## out = 0 ## if (y <= 1) ## out = v.iterative(y, yleft, vleft, dyl) ## else ## out = v.iterative(y, yright, vright, dyr) ## if (is.na(out)) ## out = v.approx(y) ## out ## } ################################################################################ ## CALCULATE SADDLE POINT APPROXIMATION ## ################################################################################ x.left <- function(s) { ## tanh(sqrt(2s))/sqrt(2s) if (any(s<0)) { print("x.left: s must be >= 0.") return(NA) } s = sqrt(2*s) tanh(s) / s } x.right <- function(s) { ## tan(sqrt(2s))/sqrt(2s) if (any(s<0)) { print("x.left: s must be >= 0.") return(NA) } s = sqrt(2*s) tan(s) / s } cgf <- function(s, z) { v = 2*s + z^2; v = sqrt(abs(v)); out = log(cosh(v)) out[s>0] = log(cos(v[s>0])); ## out = ifelse(s >= 0, log(cosh(u)), log(cos(u))) out = log(cosh(z)) - out out } sp.approx.1 <- function(x, n=1, z=0) { ## v = v.table(x) v = v.func(x) u = v / 2 t = u + z^2/2 m = y.func(-z^2) temp = sqrt(abs(v)) phi = -log(cosh(temp)) phi[v>0] = -log(cos (temp[v>0])) phi = phi + log(cosh(z)) - t*x; K2 = x^2 + (1-x)/(2*u) K2[u<1e-5 & u>-1e-5] = x^2 - 1/3 - 2/15 * (2*u) spa = (0.5*n/pi)^0.5 * K2^-0.5 * exp(n*phi) out = list("spa"=spa, "phi"=phi, "K2"=K2, "t"=t, "x"=x, "u"=u, "m"=m); out } sp.approx.df <- function(x, n=1, z=0) { N = length(x) a = rep(0, N) df = data.frame("spa"=a, "phi"=a, "K2"=a, "t"=a, "x"=a, "u"=a, "m"=a); for (i in 1:N) { temp = sp.approx.1(x[i], n, z); df[i,] = as.numeric(temp) } df } sp.approx <- function(x, n=1, z=0) { N = length(x) spa = rep(0, N) for (i in 1:N) { temp = sp.approx.1(x[i], n, z) spa[i] = temp$spa } spa } ##------------------------------------------------------------------------------ ## UNIT TEST ## ##------------------------------------------------------------------------------ if (FALSE) { ## source("SaddlePointApprox.R"); source("SPSample.R") n = 10 z = 0 dx = 0.01 xgrid = seq(dx, 4, dx) spa = sp.approx(xgrid, n, z) plot(xgrid, spa, type="l") y1 = xgrid N = length(xgrid) for (i in 1:N) { y1[i] = 0 y2[i] = 0 for (j in 0:200) { y1[i] = y1[i] + (-1)^j * a.coef.sp(j,xgrid[i] * n ,n, z) * n } } lines(xgrid, y1, col=2) (1:N)[y1>spa] } ################################################################################ ## POINTS OF INTERSECTION ## ################################################################################ delta.func1 <- function(x, m=1) { val = ifelse(x >= m, log(x) - log(m), 0.5 * (1-1/x) - 0.5 * (1-1/m)) der = ifelse(x >= m, 1/x, 0.5 / x^2) out = data.frame(val=val, der=der) out } delta.func2 <- function(x, m=1) { val = ifelse(x >= m, log(x) - log(m), 0) der = ifelse(x >= m, 1/x, 0) out = data.frame(val=val, der=der) out } phi.func <- function(x, z) { v = v.func(x) u = v / 2 t = u + z^2/2 temp = sqrt(abs(v)) phi = -log(cosh(temp)) phi[v>0] = -log(cos (temp[v>0])) phi = phi + log(cosh(z)) - t*x; val = phi der = -t out = 
data.frame(val=val, der=der) out } phi.eta.delta <- function(x, z=0, m=1) { ## versions of phi, eta, delta phi = phi.func(x, z) ## phi$val = phi$val - phi.func(m,z)$val delta = delta.func1(x,m) eta = phi - delta out <- data.frame(phi=phi$val, eta=eta$val, delta=delta$val, phi.d=phi$der, eta.d=eta$der, delta.d=delta$der) ## out <- cbind(phi, eta, delta) out } tangent.lines.eta <- function(x, z=0, m=1) { phed = phi.eta.delta(x, z, m) slope = phed$eta.d icept = - phed$eta.d * x + phed$eta out = data.frame(slope=slope, icept=icept) out } ## Maybe remove find.ev.mode.1 <- function(z) { z = abs(z) m = y.func(-z^2) f <- function(v) { (v + z^2 - 1/m^2) * y.func(v) + 1 } check = z^2 * tanh(abs(z))^4; upperb = 0 lowerb = -4 if (check > 1) { lowerb = 0 upperb = check } out = uniroot(f, lower=-100, upper=0, maxiter=10000, tol=1e-8) x = y.func(out$root) x } find.ev.mode <- function(z) { N = length(z) x = rep(0, N) for (i in 1:N) { x[i] = find.ev.mode.1(z[i]) } x } ig.mode <- function(mu, lambda) { mu * ( (1+9*mu^2 / (4 * lambda))^0.5 - 1.5 * mu / lambda) } ##------------------------------------------------------------------------------ ## UNIT TEST ## ##------------------------------------------------------------------------------ if (FALSE) { n = 10 z = 1 dx = 0.001 xgrid = seq(dx*5, 2, dx) spa = sp.approx(xgrid, n, z) spa.m = sp.approx.1(xgrid[1], n, z) spa.m = sp.approx.1(spa.m$m, n, z) m = spa.m$m ## m = 1 ## m = spa.m$m + 0.1 kappa = 1.1 ## xl = m / kappa ## xl = find.ev.mode(z) ## xl = spa.m$m / 2 ## xl = spa.m$m ## xr = m * kappa xl = spa.m$m m = spa.m$m * 1.1 xr = spa.m$m * 1.15 phi.m = phi.func(m,z)$val ## plot(xgrid, sp.approx, type="l") phed = phi.eta.delta(xgrid, z, m) par(mfrow=c(1,2)) ## idxp = xgrid >= 0 idxp = xgrid<=1.5 & xgrid >=0.25 ## plot(xgrid[idxp], phed$phi[idxp], type="l") ## lines(xgrid[idxp], phed$eta[idxp], col=2) plot(xgrid[idxp], phed$eta[idxp], col=1, type="l", xlab="x", ylab="eta", main=paste("eta vs x, ", "n=", n, ", z=", z, sep="")) lines(xgrid[idxp], -phed$delta[idxp] + phi.m, col=3) tl = tangent.lines.eta(c(xl, xr, m), z, m) lines(xgrid[idxp], tl$slope[1]*xgrid[idxp] + tl$icept[1], col=4, lty=2) lines(xgrid[idxp], tl$slope[2]*xgrid[idxp] + tl$icept[2], col=5, lty=2) ## lines(xgrid[idxp], -0.5*z^2 * (xgrid[idxp]-m) + phi.m, col=6, lty=4) legend("bottomleft", legend=c("eta", "Left Ev.", "Right Ev.", "-delta"), col=c(1,4,5,3), lty=c(1,2,2,1)) abline(v=m, lty=3) plot(xgrid[idxp], phed$phi[idxp], type="l", xlab="x", ylab="phi", main=paste("phi vs x,", "n=", n, ", z=", z, sep="")) phi.ev1 = tl$slope[1]*xgrid + tl$icept[1] + phed$delta phi.ev2 = tl$slope[2]*xgrid + tl$icept[2] + phed$delta phi.ev3 = rep(phi.m, length(xgrid)) lines(xgrid[idxp], phi.ev1[idxp], col=4, lty=2) lines(xgrid[idxp], phi.ev2[idxp], col=5, lty=2) legend("bottomright", legend=c("phi", "Left Ev.", "Right Ev."), col=c(1,4,5), lty=c(1,2,2)) abline(v=m, lty=3) ##---------------------------------------------------------------------------- par(mfrow=c(1,2)) spa.df = sp.approx.df(xgrid, n, z) adj.phi = spa.df$phi - 0.5 * log(spa.df$K2) / n ra = c(spa.df$phi[idxp], adj.phi[idxp]); ymax = max(ra); ymin = min(ra); plot(xgrid[idxp], spa.df$phi[idxp], type="l", ylim=c(ymin, ymax)) lines(xgrid[idxp], adj.phi[idxp], col=2) lines(xgrid[idxp], phi.ev1[idxp], col=4) lines(xgrid[idxp], phi.ev2[idxp], col=5) lines(xgrid[idxp], phi.ev3[idxp], col=6) ## lines(xgrid, phed$phi, col=7) plot(xgrid[idxp], spa.df$spa[idxp], type="l") lines(xgrid[idxp], spa.df$spa[idxp] * spa.df$K2[idxp]^0.5, col=2) ## 
---------------------------------------- plot(xgrid[idxp], exp(spa.df$phi[idxp]), type="l", ylab="exp(phi)", xlab="x") ##lines(xgrid[idxp], exp(adj.phi[idxp]), col=2) lines(xgrid[idxp], exp(phi.ev1[idxp]), col=4, lty=2) lines(xgrid[idxp], exp(phi.ev2[idxp]), col=5, lty=2) ## lines(xgrid[idxp], exp(phi.ev3[idxp]), col=6) ## lines(xgrid[idxp], spa.df$spa[idxp] * (0.5*n/pi)^-0.5 * spa.df$K2[idxp]^0.5, col=1, lty=2) idr = xgrid >= m ## idx.alt = which.max(idr) idl = xgrid <= m ar = max(xgrid[idr]^2 / spa.df$K2[idr]) al = max(xgrid[idl]^3 / spa.df$K2[idl]) ev3 = al^0.5 * xgrid^(-1.5) * (0.5*n/pi)^0.5 ## ev3[xgrid >= m] = ar^0.5 * xgrid[xgrid>=m]^(-1) * (0.5*n/pi)^0.5 ev3 = ev3 * exp(n * phi.m) ev1 = (al)^0.5 * exp(n*phi.ev1) * xgrid^-1.5 * (0.5*n/pi)^0.5 ev2 = (ar)^0.5 * exp(n*phi.ev2) * xgrid^-1 * (0.5*n/pi)^0.5 ev4 = exp(n*phi.ev1) * spa.df$K2^-0.5 * (0.5*n/pi)^0.5 ra = c(spa.df$spa[idxp], ev1[xgrid<m]); ymax = max(ra); ymin = min(ra); plot(xgrid[idxp], spa.df$spa[idxp], type="l", ylim=c(ymin, ymax),xlab="x", ylab="SP", main=paste("Saddlepoint Approximation ", "n=", n, ", z=",z, sep="")) ## lines(xgrid[idxp], (0.5*n/pi)^0.5 * exp(n*adj.phi[idxp]), col=3, lty=2) ## lines(xgrid[idxp], ev4[idxp], col=8, lty=2) lines(xgrid[idxp], ev1[idxp], col=4, lty=2) lines(xgrid[idxp], ev2[idxp], col=5, lty=2) ## lines(xgrid[idxp], ev3[idxp], col=6, lty=2) legend("topright", legend=c("SP Approx.", "Left Ev.", "Right Ev."), col=c(1,4,5), lty=c(1,2,2)) abline(v=m, lty=3) ## ---------------------------------------- plot(xgrid[idxp], spa.df$phi[idxp], type="l") lines(xgrid[idxp], spa.df$phi[idxp] - 1.5/n*log(xgrid[idxp]), col=2, lty=2) plot(xgrid[idxp], spa.df$K2[idxp]^-0.5) } ################################################################################ ## rrtigauss ## ################################################################################ ## pigauss - cumulative distribution function for Inv-Gauss(mu, lambda). ## NOTE: note using mu = mean, using Z = 1/mu. ##------------------------------------------------------------------------------ pigauss <- function(x, Z=1, lambda=1) { ## I believe this works when Z = 0 ## Z = 1/mu b = sqrt(lambda / x) * (x * Z - 1); a = sqrt(lambda / x) * (x * Z + 1) * -1.0; y = exp(pnorm(b, log.p=TRUE)) + exp(2 * lambda * Z + pnorm(a, log.p=TRUE)); # y2 = 2 * pnorm(-1.0 / sqrt(x)); y } rrtinvch2.1 <- function(scale, trnc=1) { R = trnc / scale ## E = rexp(2) ## while ( (E[1]^2) > (2 * E[2] / R)) { ## ## cat("E", E[1], E[2], E[1]^2, 2*E[2] / R, "\n") ## E = rexp(2) ## } ## ## cat("E", E[1], E[2], "\n") ## ## W^2 = (1 + R*E[1])^2 / R is left truncated chi^2(1) I_{(R,\infty)}. ## ## So X is right truncated inverse chi^2(1). 
## X = R / (1 + R*E[1])^2 ## X = scale * X E = rtnorm(1, 0, 1, left=1/sqrt(R), right=Inf) X = scale / E^2 X } rrtinvch2 <- function(num, scale, trnc) { out = rep(0, num) for (i in 1:num) out[i] = rrtinvch2.1(scale, trnc) out } rrtigauss.ch2.1 <- function(mu, lambda, trnc) { iter = 0 alpha = 0.0; accept = FALSE while (!accept) { iter = iter + 1 X = rrtinvch2.1(lambda, trnc) alpha = exp(- 0.5*lambda/mu^2 * X) ## l.alpha = 0.5*lambda/mu^2 * (X-trnc) ## cat(X, alpha, "\n") accept = runif(1) < alpha } out = list(x=X, iter=iter) out } rrtigauss.ch2 <- function(num, mu, lambda, trnc) { x = rep(0, num) df = data.frame(x=x, iter=x) for (i in 1:num) { temp = rrtigauss.ch2.1(mu, lambda, trnc) df[i,] = as.numeric(temp) } df } rrtigauss.reject <- function(num, mu, lambda, trnc) { x = rep(0, num) df = data.frame(x=x, iter=x) for (i in 1:num) { accept = FALSE iter = 0 while(!accept) { iter = iter + 1 draw = rigauss.1(mu, lambda) accept = draw < trnc } df[i,] = c(draw, iter) } df } rrtigauss.1 <- function(mu, lambda, trnc=1) { ## trnc is truncation point accept = FALSE X = trnc + 1; if (trnc < mu) { alpha = 0.0; while (!accept) { X = rrtinvch2.1(lambda, trnc) ## alpha = exp(0.5*lambda/mu^2 * (X - trnc)) l.alpha = - 0.5*lambda/mu^2 * X ## cat(X, alpha, "\n") accept = log(runif(1)) < l.alpha } ## cat("rtigauss.ch, part i:", X, "\n"); } else { ## trnc >= mu while (X > trnc) { Y = rnorm(1)^2; X = mu + 0.5 * mu^2 / lambda * Y - 0.5 * mu / lambda * sqrt(4 * mu * lambda * Y + (mu * Y)^2); if ( runif(1) > mu / (mu + X) ) { X = mu^2 / X; } } ## cat("rtiguass, part ii:", X, "\n"); } X; } rrtigauss <- function(num, mu, lambda, trnc=1) { x = rep(0, num) for (i in 1:num) x[i] = rrtigauss.1(mu, lambda, trnc) x } rigauss.1 <- function(mu, lambda) { nu = rnorm(1); y = nu^2; x = mu + 0.5 * mu^2 * y / lambda - 0.5 * mu / lambda * sqrt(4 * mu * lambda * y + (mu*y)^2); if (runif(1) > mu / (mu + x)) { x = mu^2 / x; } x } rigauss <- function(num, mu, lambda) { x = rep(0, num) for (i in 1:num) x[i] = rigauss.1(mu, lambda) x } ##------------------------------------------------------------------------------ ## UNIT TEST ## ##------------------------------------------------------------------------------ if (FALSE) { ## source("SPSample.R") lambda = 5 mu = 2 draw1 = rigauss(10000, mu, lambda) draw2 = rrtigauss(5000, mu, lambda, mu/2) draw3 = rrtigauss(5000, mu, lambda, mu*2) draw8 = rrtigauss.ch2(5000, mu, lambda, mu/2) par(mfrow=c(1,2)) hist(draw1[draw1<=mu/2], prob=TRUE) hist(draw2, prob=TRUE, add=TRUE, col="#88000088") summary(draw1[draw1<=mu/2]) summary(draw2) summary(draw8$x) hist(draw1[draw1<=mu*2], prob=TRUE) hist(draw3, prob=TRUE, add=TRUE, col="#88000088") summary(draw1[draw1<=mu*2]) summary(draw3) trnc = 2 lambda = 100 draw4 = lambda / rchisq(20000, 1) draw5 = rrtinvch2(10000, lambda, trnc) hist(draw4[draw4<=trnc], prob=TRUE) hist(draw5, prob=TRUE, add=TRUE, col="#88000088") summary(draw4[draw4<=trnc]) summary(draw5) draw6 = rrtigauss(5000, 1/sqrt(2*rl), n, 1) draw7 = rigauss(100000, 1/sqrt(2*rl), n) hist(draw6, prob=TRUE) hist(draw7[draw7<=1], prob=TRUE, add=TRUE, col="#88000088") } ################################################################################ ## SP SAMPLER ## ################################################################################ if (FALSE) { m = 1 kappa = 1.1 xl = m/kappa xr = m*kappa ul = 0.5 * v.func(xl) ur = 0.5 * v.func(xr) delta = delta.func1(c(xl,xr)) tl = tangent.lines.eta(c(xl,xr), 0) rlb = -tl$slope[1] rrb = -tl$slope[2] ilb = cgf(ul, 0) - delta$val[1] + delta$der[1] * xl irb = 
cgf(ur, 0) - delta$val[2] + delta$der[2] * xr ## alphal = 3/2 ## alphar = 3/2 alpha = 3/2 inflate = alpha^0.5 } ##------------------------------------------------------------------------------ ## Generate sample for J^*(n,z) using SP approx. sp.sampler.1 <- function(n=1, z=0, maxiter=100) { if (n < 1) stop("sp.sampler.1: n must be >= 1.") ## rl = 0.5 * z^2 - tl$slope[1] ## rr = 0.5 * z^2 - tl$slope[2] xl = y.func(-z^2) mid = xl * 1.1 xr = xl * 1.2 ## xl = mid / kappa ## xr = mid * kappa ## cat("xl, md, xr:", xl, mid, xr, "\n"); v.mid = v.func(mid) K2.mid = mid^2 + (1-mid) / v.mid al = mid^3 / K2.mid ar = mid^2 / K2.mid ## cat("vmd, K2md, al, ar:", v.mid, K2.mid, al, ar, "\n"); tl = tangent.lines.eta(c(xl,xr), z, mid) rl = -tl$slope[1] rr = -tl$slope[2] il = tl$icept[1] ir = tl$icept[2] ## rl = rlb + 0.5 * z^2 ## rr = rrb + 0.5 * z^2 ## il = log(cosh(z)) + ilb ## ir = log(cosh(z)) + irb ## cat("rl, rr, il, ir:", rl, rr, il, ir, "\n") ## term1 = al^0.5 ## term2 = exp(-n*sqrt(2*rl) + n * il + 0.5*n - 0.5*n*(1-1/mid)) ## term3 = pigauss(mid, Z=sqrt(2*rl), lambda=n) ## cat("l terms 1-3:", term1, term2, term3, "\n") wl = al^0.5 * exp(-n*sqrt(2*rl) + n * il + 0.5*n - 0.5*n*(1-1/mid)) * pigauss(mid, Z=sqrt(2*rl), lambda=n) ## lcn = 0.5 * log(0.5 * n / pi) ## term1 = ar^0.5 ## term2 = exp(lcn) ## term3 = exp(-n * log(n * rr) + n * ir - n * log(mid)) ## term4 = gamma(n) ## term5 = pgamma(mid, shape=n, rate=n*rr, lower=FALSE) ## cat("r terms 1-5:", term1, term2, term3, term4, term5, "\n") # old method wr = ar^0.5 * (0.5*n/pi)^0.5 * exp(-n * log(n * rr) + n * ir - n * log(mid)) * gamma(n) * pgamma(mid, shape=n, rate=n*rr, lower.tail=FALSE) wt = wl + wr pl = wl / wt ## cat("wl, wr, lcn:", wl, wr, lcn, "\n") go = TRUE iter = 0 X = 2 FX = 0 while (go && iter<maxiter) { iter = iter + 1 if (wt*runif(1) < wl) { ## sample left X = rrtigauss.1(mu=1/sqrt(2*rl), lambda=n, trnc=mid) ## while (X > mid) X = rigauss.1(1/sqrt(2*rl), n) phi.ev = n * (-rl * X + il) + 0.5 * n * ( (1-1/X) - (1-1/mid)) FX = al^0.5 * (0.5 * n / pi)^0.5 * X^(-1.5) * exp(phi.ev); } else { ## sample right X = rltgamma.dagpunar.1(shape=n, rate=n*rr, trnc=mid) phi.ev = n * (-rr * X + ir) + n * (log(X) - log(mid)) FX = ar^0.5 * (0.5 * n / pi)^0.5 * exp(phi.ev) / X; } spa = sp.approx.1(X, n, z) ## cat("FX, phi.ev, spa, phi", FX, phi.ev, spa$spa, spa$phi,"\n") if (FX*runif(1) < spa$spa) go = FALSE } out = list(x=X, iter=iter, wl=wl, wr=wr) out } sp.sampler <- function(num, n=1, z=0, return.df=FALSE) { x = rep(0, num) df = data.frame(x=x, iter=x) for (i in 1:num) { temp = sp.sampler.1(n,z) df$x[i] = temp$x df$iter[i] = temp$iter } if (!return.df) df = df$x df } rpg.sp.R <- function(num=1, h=1, z=0) { n = h z = 0.5 * z; x = rep(0, num) df = data.frame(x=x, iter=x) for (i in 1:num) { temp = sp.sampler.1(n,z) df$x[i] = temp$x df$iter[i] = temp$iter } df$x = 0.25 * n * df$x # if (!return.df) df = df$x df$x } ##------------------------------------------------------------------------------ ## UNIT TEST ## ##------------------------------------------------------------------------------ if (FALSE) { ## source("SPSample.R") n = 10 z = 10 dx = 0.001 xgrid = seq(dx*5, 0.2, dx) spa = sp.approx(xgrid, n, z) spa.m = sp.approx.1(xgrid[1], n, z) spa.m = sp.approx.1(spa.m$m, n, z) xl = spa.m$m mid = spa.m$m * 1.1 xr = spa.m$m * 1.2 wla = sum(spa[xgrid<mid]) * dx wra = sum(spa[xgrid>=mid]) * dx d1 = sp.sampler.1(n,z) d1$wl / (d1$wl + d1$wr) wla / (wla + wra) df = sp.sampler(5000, n, z, TRUE) tl = tl = tangent.lines.eta(c(xl,xr), z) rl = -tl$slope[1] rr = 
-tl$slope[2] draw.ig = rrtigauss(2000, mu=1/sqrt(2*rl), lambda=n, trnc=1) draw.ga = rltgamma.dagpunar(2000, shape=n, rate=n*rr, trnc=1) draw.jj = 4 * rpg(2000, n, 2*z) / n par(mfrow=c(1,3)) plot(xgrid, spa) hist(df$x, prob=TRUE, add=TRUE, breaks=20) hist(draw.jj, prob=TRUE, add=TRUE, col="#22000022") plot(xgrid[xgrid<mid], spa[xgrid<mid] / (wla / (wla+wra))) hist(df$x[df$x<mid], prob=TRUE, add=TRUE) hist(draw.ig, prob=TRUE, col="#00004444", add=TRUE) plot(xgrid[xgrid>=mid], spa[xgrid>=mid] / (wra / (wla+wra))) hist(df$x[df$x>=mid], prob=TRUE, add=TRUE) hist(draw.ga, prob=TRUE, col="#22000022", add=TRUE) c(mean(draw.jj), var(draw.jj)) c(mean(df$x), var(df$x)) ##---------------------------------------------------------------------------- } ################################################################################ ################################################################################ if (FALSE) { ## source("SPSample.R") source("ManualLoad.R") nsamp = 5000 n = 1 z = 0 seed = sample.int(10000, 1) ## seed = 8922 set.seed(seed) samp.1 = rpg.sp.R(nsamp, n, z) ## samp.d = rpg.devroye(nsamp, n, z) set.seed(seed) samp.4 = rpg.sp(nsamp, n, z) mean(samp.1) ## mean(samp.d) mean(samp.4) } if (FALSE) { ## source("SPSample.R") source("ManualLoad.R") nsamp = 100000 n = 100 z = 1 start.time = proc.time() samp.sp = rpg.sp(nsamp, n, z, track.iter=TRUE) time.sp = proc.time() - start.time start.time = proc.time() samp.ga = rpg.gamma(nsamp, n, z) time.ga = proc.time() - start.time start.time = proc.time() samp.dv = rpg.devroye(nsamp, n, z) time.dv = proc.time() - start.time start.time = proc.time() samp.R = rpg.sp.R(2, n, z) time.R = proc.time() - start.time time.sp time.ga time.dv time.R summary(samp.sp$samp) summary(samp.ga) summary(samp.dv) summary(samp.R ) } if (FALSE) { ygrid = exp(seq(-4,4,0.1) * log(2)) vgrid = v.func(ygrid) plot(vgrid, ygrid) write(ygrid, "y.txt", sep=",") write(vgrid, "v.txt", sep=",") } if (FALSE) { ## source("SPSample.R") source("ManualLoad.R") nsamp = 10000 ntrial = 2 ## n.seq = c(1, 10, 50, 100) n.seq = c(1,2,3,4,10,12,14,16,18,20,30,40,50,100) z.seq = c(0.0, 0.1, 0.5, 1, 2, 10) n.len = length(n.seq) z.len = length(z.seq) out = array(0, dim=c(4, n.len, z.len, ntrial)); sum.stat = array(0, dim=c(6, 4, n.len, z.len)); temp.time = rep(0, 4) id = c("sp", "ga", "dv", "al") dimnames(out)[[1]] = id dimnames(out)[[2]] = paste("n", n.seq, sep="") dimnames(out)[[3]] = paste("z", z.seq, sep="") for (zdx in 1:z.len) { for (ndx in 1:n.len) { n = n.seq[ndx] z = z.seq[zdx] cat("z=", z, "n=", n, "\n") for (i in 1:ntrial) { start.time = proc.time() samp.sp = rpg.sp(nsamp, n, z, track.iter=FALSE) time.sp = proc.time() - start.time temp.time[1] = time.sp[1] start.time = proc.time() samp.ga = rpg.gamma(nsamp, n, z) time.ga = proc.time() - start.time temp.time[2] = time.ga[1] start.time = proc.time() samp.dv = rpg.devroye(nsamp, n, z) time.dv = proc.time() - start.time temp.time[3] = time.dv[1] start.time = proc.time() samp.al = rpg.alt(nsamp, n, z) time.al = proc.time() - start.time temp.time[4] = time.al[1] out[,ndx,zdx,i] = temp.time } sum.stat[,1,ndx,zdx] = summary(samp.sp) sum.stat[,2,ndx,zdx] = summary(samp.ga) sum.stat[,3,ndx,zdx] = summary(samp.dv) sum.stat[,4,ndx,zdx] = summary(samp.al) } } } if (FALSE) { ## FIND BEST METHOD which.min.n <- function(x, n=1) { ## Put in increasing order. 
N = length(x) out = order(x) out[n] } min.n <- function(x, n=1) { N = length(x) out = order(x) x[out[n]] } ave.time = apply(out, c(1,2,3), mean) best.idx = apply(ave.time, c(2,3), which.min) second.idx = apply(ave.time, c(2,3), function(x){which.min.n(x,2)}) best.time = apply(ave.time, c(2,3), min) second.time = apply(ave.time, c(2,3), function(x){min.n(x,2)}) ##---------------------------------------------------------------------------- ## MAKE TABLES write.table(apply(best.idx, c(1,2), function(n){id[n]}), file="best.idx.table", sep=" & ", eol=" \\\\\n") ## write.table(apply(second.idx, c(1,2), function(n){id[n]}), ## file="second.idx.table", sep=" & ", eol=" \\\\\n"); write.table(round(best.time, 3), file="best.time.table", sep=" & ", eol=" \\\\\n") write.table(round(ave.time[3,,]/best.time, 2), file="devroye.to.best.table", sep=" & ", eol="\\\\\n"); speed.up = ave.time[3,,]/best.time plot(n.seq, speed.up[,1], type="l", col=1, main="S1 Time / Best Time", xlab="n", ylab="Ratio") for (zdx in 2:z.len) { lines(n.seq, speed.up[,zdx], col=zdx) } legend("bottomright", legend=paste("c =", 2*z.seq), col=1:z.len, lty=1) } ################################################################################ if (FALSE) { ## TEST HYBRID METHOD ## source("SPSample.R") source("ManualLoad.R") nsamp = 10000 n = 20 z = 1 start.time = proc.time() ## samp.sp = rpg.sp(nsamp, n, z, track.iter=FALSE) time.sp = proc.time() - start.time start.time = proc.time() samp.al = rpg.alt(nsamp, n, z) time.al = proc.time() - start.time start.time = proc.time() samp.dv = rpg.devroye(nsamp, n, z) time.dv = proc.time() - start.time start.time = proc.time() samp.hy = rpg(nsamp, n, z) time.hy = proc.time() - start.time ## What is the hit on time. time.sp time.al time.dv time.hy ## summary(samp.sp) summary(samp.al) summary(samp.dv) summary(samp.hy) } ################################################################################ ## Kullback-Liebler Divergence ## ################################################################################ if (FALSE) { ## source("ManualLoad.R") ## source("SPSample.R") dx = 0.01 xgrid = seq(dx, 4, dx) h = 2 z = 1 N = 10 M = 10000 n.seq = c(1,2,3,4,10,12,14,16,18,20,30,40,50,100) z.seq = c(0.0, 0.1, 0.5, 1, 2, 10) len.n = length(n.seq) len.z = length(z.seq) kl = matrix(0, nrow=len.n, ncol=len.z) dimnames(kl)[[1]] = paste("n", n.seq, sep="") dimnames(kl)[[2]] = paste("z", z.seq, sep="") for (j in 1:len.z) { for (i in 1:len.n) { h = n.seq[i] z = z.seq[j] cat("Working on h =", h, "z =", z, "\n") out = djstar(xgrid, h, z, N) spa = sp.approx(xgrid/h, h, z)/h jgrid = out$s[3,] plot(xgrid, jgrid, type="l") lines(xgrid, spa, col=2) X = rpg.alt(M, h, z) out = djstar(X, h, z, N) spad = sp.approx(X/h, h, z)/h jd = out$s[3,] ratio = spad / jd kl[i, j] = mean(log(ratio)*ratio) } } }
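################################################################################
## Acceptance-rate sketch: sp.sampler.1 records the number of proposal rounds
## needed per accepted draw, so the average of `iter` estimates the reciprocal
## of the acceptance rate of the saddle point proposal.
if (FALSE) {
  df = sp.sampler(2000, n=50, z=1, return.df=TRUE)
  c(mean(df$iter), 1 / mean(df$iter))
}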
## ---- end of file: BayesLogit/inst/dev/R/PolyaGammaApproxSP.R ----
# "Shape" parameter and method used: # Devroye: 1 or 2 # Alt: 1-13, but not 1 or 2 # SP: 13-170 # Normal: 170 and above library("BayesLogit") rpg.sp.R(1, 100, 12) rpg.sp(1, 100, 12) jj.m1 <- function(b,z) { if (z > 1e-12) b * tanh(z) / z else b * (1 - (1/3) * z^2 + (2/15) * z^4 - (17/315) * z^6) } jj.m2 <- function(b, z) { if (z > 1e-12) (b+1) * b * (tanh(z)/z)^2 + b * ((tanh(z)-z)/z^3) else (b+1) * b * (1 - (1/3) * z^3 + (2/15) * z^4 - (17/315) * z^6)^2 + b * ((-1/3) + (2/15) * z - (17/315) * z^3); } pg.m1 <- function(b,z) { jj.m1(b,z/2) / 4 } pg.m2 <- function(b,z) { jj.m2(b,z/2) / 16 } nsamp = 2000 out = list() bseq = c(1, 3, 12, 36, 100) zseq = c(0., 1., 2., 4., 12.) lenb = length(bseq) lenz = length(zseq) # zseq = 0.0 count = 0 for (j in 1:lenz) { for (i in 1:lenb) { count = count + 1 b = bseq[i] z = zseq[j] cat("Working on b =", b, ", z =", z, "\n") samp = list() samp[["gamma.R"]] = rpg.gamma.R(nsamp, b, z, trunc=1000) samp[["devroye.R"]] = rpg.devroye.R(nsamp, b, z) ## samp[["alt.R"]] = rpg.alt.R(nsamp, b, z) samp[["sp.R"]] = rpg.sp.R(nsamp, b, z) samp[["gamma.C"]] = rpg.gamma(nsamp, b, z, trunc=1000) samp[["devroye.C"]] = rpg.devroye(nsamp, b, z) ## samp[["alt.C"]] = rpg.alt(nsamp, b, z) samp[["sp.C"]] = rpg.sp(nsamp, b, z) samp[["rpg.C"]] = rpg(nsamp, b, z) m1 = pg.m1(b,z) m2 = pg.m2(b,z) m1.samp = sapply(samp, mean) m2.samp = sapply(samp, function(x){mean(x^2)}) out[[count]] = list() out[[count]]$param = c(b=b, z=z) out[[count]]$moments = c(m1=m1, m2=m2) out[[count]]$moments.samp = data.frame(m1=m1.samp, m2=m2.samp) out[[count]]$samp = samp out[[count]]$error = data.frame(m1=m1-m1.samp, m2=m2-m2.samp) out[[count]]$rel.error = data.frame(m1=(m1-m1.samp)/m1, m2=(m2-m2.samp)/m2) } } # Compare for (i in 1:(lenz*lenb)) { print(out[[i]]$param) print(out[[i]]$moments) ## print(out[[i]]$error) print(out[[i]]$rel.error) } # Plot qq plots par(mfrow=c(5,6)) par(mar=c(2, 2, 1, 0)) j = 1 for (i in 1:lenb) { k = (j-1)*lenb + i with(out[[k]]$samp, qqplot(gamma.R, devroye.R)) with(out[[k]], title(main=paste("devroye.R vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) with(out[[k]]$samp, qqplot(gamma.R, sp.R)) with(out[[k]], title(main=paste("sp.R vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) with(out[[k]]$samp, qqplot(gamma.R, gamma.C)) with(out[[k]], title(main=paste("gamma.C vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) with(out[[k]]$samp, qqplot(gamma.R, devroye.C)) with(out[[k]], title(main=paste("devroye.C vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) with(out[[k]]$samp, qqplot(gamma.R, sp.C)) with(out[[k]], title(main=paste("sp.C vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) with(out[[k]]$samp, qqplot(gamma.R, rpg.C)) with(out[[k]], title(main=paste("rpg.C vs. gamma.R", "b:", param[1], "z:", param[2]), cex.main=0.5)) }
## ---- end of file: BayesLogit/inst/dev/R/diagnostic.R ----
## Copyright 2013 Nick Polson, James Scott, and Jesse Windle. ## This file is part of BayesLogit, distributed under the GNU General Public ## License version 3 or later and without ANY warranty, implied or otherwise. pigauss <- function(x, mu, lambda=1.0) { Z = 1.0 / mu; b = sqrt(lambda / x) * (x * Z - 1); a = -1.0 * sqrt(lambda / x) * (x * Z + 1); y = exp(pnorm(b, log.p=TRUE)) + exp(2 * lambda * Z + pnorm(a, log.p=TRUE)); # y2 = 2 * pnorm(-1.0 / sqrt(x)); y } rtigauss <- function(Z, R=Inf) { Z = abs(Z); mu = 1/Z; X = R + 1; if (mu > R) { alpha = 0.0; while (runif(1) > alpha) { ## X = R + 1 ## while (X > R) { ## X = 1.0 / rgamma(1, 0.5, rate=0.5); ## } E = rexp(2) while ( E[1]^2 > 2 * E[2] / R) { E = rexp(2) } X = R / (1 + R*E[1])^2 alpha = exp(-0.5 * Z^2 * X); } } else { while (X > R) { lambda = 1.0; Y = rnorm(1)^2; X = mu + 0.5 * mu^2 / lambda * Y - 0.5 * mu / lambda * sqrt(4 * mu * lambda * Y + (mu * Y)^2); if ( runif(1) > mu / (mu + X) ) { X = mu^2 / X; } } } X; } rtigauss.2 <- function(Z, R=Inf) { mu = 1/Z; t = R; tx = (t-mu)^2 / (mu^2 * t); tn = sqrt(tx); X = 0; if (mu > R) { Z = rtnorm(1, mean=0, sd=1, lower=tn); Y = Z^2; X = mu + 0.5 * mu^2 * Y + 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); X = mu^2 / X; } else { Z = rtnorm(1, mean=0, sd=1, lower=-1*tn); Y = Z^2; X2 = mu + 0.5 * mu^2 * Y + 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); X1 = mu + 0.5 * mu^2 * Y - 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); X = X1 if ( Z < tn ) { if ( runif(1) > mu / (mu + X1) ) { X = X2; } } else { ## if ( runif(1) > mu / (mu + X1) ) { ## X = -0.01; ## } } } X } rtigauss.3 <- function(Z, R=Inf) { mu = 1/Z; t = R; tx = (t-mu)^2 / (mu^2 * t); tl = min(tx, mu^2 / tx); tn = sqrt(tx); X = 0; if (mu > R) { Z = rtnorm(1, mean=0, sd=1, lower=tn); Y = Z^2; X = mu + 0.5 * mu^2 * Y + 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); X = mu^2 / X; } else { U = runif(1); ## Take the left side. if (U < pigauss(tl, mu) / pigauss(t,mu)) { Y = rnorm(1)^2; X = mu + 0.5 * mu^2 * Y - 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); } ## Take the right side. else { Z = rtnorm(1, mean=0, sd=1, lower=-1*tn, upper=tn); Y = Z^2; X = mu + 0.5 * mu^2 * Y + 0.5 * mu * sqrt(4 * mu * Y + (mu * Y)^2); } } X } N = 10000; Z = 1.0; R = 1.0; samp.1 = rep(0, N); samp.2 = rep(0, N); samp.3 = rep(0, N); for (i in 1:N) { samp.1[i] = rtigauss(Z, R); samp.2[i] = rtigauss.2(Z, R); samp.3[i] = rtigauss.3(Z, R); } par(mfrow=c(1,3)); hist(samp.1, breaks=20) hist(samp.2, breaks=20) hist(samp.3, breaks=20)
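## Quantitative check (sketch): with Z = 1 and truncation point R, the cdf of
## the right-truncated inverse Gaussian is pigauss(x, mu=1/Z)/pigauss(R, mu=1/Z)
## (pigauss is defined at the top of this file), so the empirical cdf of each
## sample should track it.
xg = seq(0.05, R, 0.05)
F.true = pigauss(xg, mu=1/Z) / pigauss(R, mu=1/Z)
print(max(abs(ecdf(samp.1)(xg) - F.true)))
print(max(abs(ecdf(samp.3)(xg) - F.true)))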
## ---- end of file: BayesLogit/inst/dev/R/rtigauss.R ----
#' Buhaugetal_2009_JCR
#'
#' Subsetted version of the survival database extracted from \href{http://bit.ly/2Q1Igo9}{Buhaug et al. (2009)}.
#' It has precisely dated duration data on internal conflict as well as geographic data.
#' The variables Y, Y0 and C were later added by \href{http://bit.ly/38eDsnG}{Bagozzi et al. (2019)}.
#' It is used to estimate the Bayesian Misclassified Failure (MF) Weibull model
#' presented in \href{http://bit.ly/38eDsnG}{Bagozzi et al. (2019)}.
#'
#'
#'\describe{
#' \item{lndistx}{log conflict-capital distance.}
#' \item{confbord}{conflict zone at border.}
#' \item{borddist}{confbord * lndistx, centred.}
#' \item{figcapdum}{rebel fighting capacity at least moderate.}
#' \item{lgdp_onset}{GDP per capita in the onset year.}
#' \item{sip2l_onset}{Gates et al. (2006) SIP code (1 year lag) for the onset year.}
#' \item{pcw}{post-Cold War period, 1989 onwards.}
#' \item{frst}{percentage of forest in conflict zone.}
#' \item{mt}{percentage of mountains in conflict zone.}
#' \item{Y}{conflict duration.}
#' \item{Y0}{elapsed time since inception to Y (t-1).}
#' \item{C}{censoring variable.}
#' \item{coupx}{coup d'etat, except if overlapping with other gov't conflict (PHI 1989).}
#' }
#' @docType data
#' @keywords datasets
#' @name Buhaugetal_2009_JCR
#' @usage data(Buhaugetal_2009_JCR)
#' @format A data frame with 1562 rows and 13 variables
#' @source Buhaug, Halvard, Scott Gates, and Päivi Lujala (2009), Geography, rebel capability, and the duration of civil conflict, Journal of Conflict Resolution 53(4), 544 - 569.
"Buhaugetal_2009_JCR"
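## Typical preparation of this dataset for mfsurv(), mirroring the examples in
## bayes.mfsurv.R (a sketch):
##   bgl <- subset(Buhaugetal_2009_JCR, coupx == 0)
##   bgl <- na.omit(bgl)
##   Y <- bgl$Y; Y0 <- bgl$Y0; C <- bgl$C
##   X <- as.matrix(cbind(1, bgl[, 1:7]))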
## ---- end of file: BayesMFSurv/R/Buhaugetal_2009_JCR.R ----
# Generated by using Rcpp::compileAttributes() -> do not edit by hand # Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393 llikWeibull <- function(Y, eXB, alpha, C, lambda) { .Call('_BayesMFSurv_llikWeibull', PACKAGE = 'BayesMFSurv', Y, eXB, alpha, C, lambda) } llikWeibull2 <- function(Y, Y0, eXB, alpha, C, lambda) { .Call('_BayesMFSurv_llikWeibull2', PACKAGE = 'BayesMFSurv', Y, Y0, eXB, alpha, C, lambda) }
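# Usage note (a sketch; argument roles are inferred from the sampler code in
# bayes.mfsurv.R and its internal comments, not from formal documentation):
# Y and Y0 are the duration and elapsed-time vectors, eXB is exp(X %*% betas),
# alpha is the probability of true censoring, C is the censoring indicator and
# lambda the Weibull shape, e.g.
#   llikWeibull2(Y, Y0, exp(X %*% betas), alpha, C, lambda)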
## ---- end of file: BayesMFSurv/R/RcppExports.R ----
#' @useDynLib BayesMFSurv #' @importFrom stats dgamma runif as.formula model.frame model.matrix model.response na.omit na.pass #' @import grDevices #' @import graphics #' @import RcppArmadillo #' @importFrom Rcpp sourceCpp #' @importFrom MCMCpack riwish #' @importFrom coda mcmc #' @importFrom mvtnorm rmvnorm dmvnorm #' @export #' @title mfsurv #' @description \code{mfsurv} fits a parametric Bayesian MF model via Markov Chain Monte Carlo (MCMC) to estimate the misclassification in the first stage #' and the hazard in the second stage. #' @param formula a formula in the form Y ~ X1 + X2... | C ~ Z1 + Z2 ... where Y is the duration until failure or censoring, and C is a binary indicator of observed failure. #' @param Y0 the elapsed time since inception until the beginning of time period (t-1). #' @param data list object of data. #' @param N number of MCMC iterations. #' @param burn burn-ins to be discarded. #' @param thin thinning to prevent autocorrelation of chain of samples by only taking the n-th values. #' @param w size of the slice in the slice sampling for (betas, gammas, lambda). The default is c(1,1,1). This value may be changed by the user to meet one's needs. #' @param m limit on steps in the slice sampling. The default is 10. This value may be changed by the user to meet one's needs. #' @param form type of parametric model distribution to be used. Options are "Exponential" or "Weibull". The default is "Weibull". #' @param na.action a function indicating what should happen when NAs are included in the data. Options are "na.omit" or "na.fail". The default is "na.omit". #' @return mfsurv returns an object of class \code{"mfsurv"}. #' #' A \code{"mfsurv"} object has the following elements: #' \item{Y}{the vector of `Y'.} #' \item{Y0}{the vector of `Y0'.} #' \item{C}{the vector of `C'.} #' \item{X}{matrix X's variables.} #' \item{Z}{the vector of `Z'.} #' \item{betas}{data.frame, X.intercept and X variables.} #' \item{gammas}{data.frame, Z.intercept and Z variables.} #' \item{lambda}{integer.} #' \item{post}{integer.} #' \item{iterations}{number of MCMC iterations.} #' \item{burn_in}{burn-ins to be discarded.} #' \item{thinning}{integer.} #' \item{betan}{integer, length of posterior sample for betas.} #' \item{gamman}{integer, length of posterior sample for gammas.} #' \item{distribution}{character, type of distribution.} #' \item{call}{the call.} #' \item{formula}{description for the model to be estimated.} #' @examples #' set.seed(95) #' bgl <- Buhaugetal_2009_JCR #' bgl <- subset(bgl, coupx == 0) #' bgl <- na.omit(bgl) #' Y <- bgl$Y #' X <- as.matrix(cbind(1, bgl[,1:7])) #' C <- bgl$C #' Z1 <- matrix(1, nrow = nrow(bgl)) #' Y0 <- bgl$Y0 #' model1 <- mfsurv(Y ~ X | C ~ Z1, Y0 = Y0, #' N = 50, #' burn = 20, #' thin = 15, #' w = c(0.1, .1, .1), #' m = 5, #' form = "Weibull", #' na.action = 'na.omit') #' @export mfsurv <-function(formula, Y0, data = list(), N, burn, thin, w = c(1,1,1), m = 10, form = c("Weibull", "Exponential"), na.action=c("na.omit","na.fail")){ if (missing(na.action)){na.action <- "na.omit"} if (missing(Y0))warning("Y0: elapsed time since inception missing") if (missing(N))warning("N: number of iterations missing") if (missing(burn))warning("burn: number of burn-ins missing") if (missing(thin))warning("thin: thinning interval missing") equations<-as.character(formula) formula1 <- paste(strsplit(equations[2], "|", fixed = TRUE)[[1]][1],sep="") formula2 <- paste(strsplit(equations[2], "|", fixed = TRUE)[[1]][2],equations[1],equations[3],sep="") mf1 <- 
model.frame(formula = as.formula(formula1), data = data, na.action = na.pass)
  mf2 <- model.frame(formula = as.formula(formula2), data = data, na.action = na.pass)
  X <- model.matrix(attr(mf1, "terms"), data = mf1)
  Z <- model.matrix(attr(mf2, "terms"), data = mf2)
  C <- model.response(mf2)
  Y <- model.response(mf1)
  dataset <- as.data.frame(cbind(Y,C,X,Z))
  if(na.action == "na.omit"){
    dataset <- data.frame(na.omit(dataset))
    Y <- as.matrix(dataset[,1], ncol = 1)
    C <- as.matrix(dataset[,2], ncol = 1)
    X <- data.frame(dataset[,3:(ncol(X)+2)])
    names(X) <- c("X.intercept", colnames(X[,2:ncol(X)]))
    Z <- data.frame(dataset[,(ncol(X)+3):(ncol(X)+ncol(Z)+2)])
    names(Z) <- c("Z.intercept", colnames(Z[,2:ncol(Z)]))
    Y0 <- as.numeric(Y0)
    N <- as.numeric(N)
    burn <- as.numeric(burn)
    thin <- as.numeric(thin)
    m <- as.numeric(m)
    w <- as.vector(w)
    form <- as.character(form)
    na.action <- na.action
    est <- bayes.mfsurv.default(Y, Y0, C, X, Z, N, burn, thin, w, m, form, na.action)
    est$call <- match.call()
    est$formula <- formula
    est
  }else{
    if(na.action == "na.fail"){
      if(all(is.numeric(dataset))){
        Y <- as.matrix(dataset[,1], ncol = 1)
        C <- as.matrix(dataset[,2], ncol = 1)
        X <- data.frame(dataset[,3:(ncol(X)+2)])
        names(X) <- c("X.intercept", colnames(X[,2:ncol(X)]))
        Z <- data.frame(dataset[,(ncol(X)+3):(ncol(X)+ncol(Z)+2)])
        names(Z) <- c("Z.intercept", colnames(Z[,2:ncol(Z)]))
        Y0 <- as.numeric(Y0)
        dataset$Y0 <- Y0
        N <- as.numeric(N)
        burn <- as.numeric(burn)
        thin <- as.numeric(thin)
        m <- as.numeric(m)
        w <- as.vector(w)
        form <- form
        na.action <- na.action
        est <- bayes.mfsurv.default(Y, Y0, C, X, Z, N, burn, thin, w, m, form, na.action)
        est$call <- match.call()
        est$formula <- formula
        est
      }else{
        Y <- as.numeric(Y)
        Y0 <- as.numeric(Y0)
        C <- as.numeric(C)
        X <- as.matrix(X)
        Z <- as.matrix(Z)
        if(any(is.na(Y))) warning("Time indicator contains missing values")
        if(any(is.na(C))) warning("Censoring indicator contains missing values")
        if(any(is.na(X))) warning("Explanatory variable(s) in the misclassification stage contain missing values")
        if(any(is.na(Z))) warning("Explanatory variable(s) in the survival stage contain missing values")
      }
    }
  }
}

#' @title mfsurv.stats
#' @description A function to calculate the deviance information criterion (DIC) for fitted model objects of class \code{mfsurv}
#' for which a log-likelihood can be obtained, according to the formula \emph{DIC = -2 * (L - P)},
#' where \emph{L} is the log likelihood of the data given the posterior means of the parameters and
#' \emph{P} is the estimate of the effective number of parameters in the model.
#' @param object an object of class \code{mfsurv}, the output of \code{mfsurv()}.
#' @return list.
#' @examples
#' set.seed(95)
#' bgl <- Buhaugetal_2009_JCR
#' bgl <- subset(bgl, coupx == 0)
#' bgl <- na.omit(bgl)
#' Y <- bgl$Y
#' X <- as.matrix(cbind(1, bgl[,1:7]))
#' C <- bgl$C
#' Z1 <- matrix(1, nrow = nrow(bgl))
#' Y0 <- bgl$Y0
#' model1 <- mfsurv(Y ~ X | C ~ Z1, Y0 = Y0,
#'                 N = 50,
#'                 burn = 20,
#'                 thin = 15,
#'                 w = c(0.1, .1, .1),
#'                 m = 5,
#'                 form = "Weibull",
#'                 na.action = 'na.omit')
#'
#' mfsurv.stats(model1)
#' @export
mfsurv.stats <- function(object) {
  # Calculate L: the log-likelihood at the posterior means of the parameters
  X <- as.matrix(object$X)
  Z <- as.matrix(object$Z)
  Y <- as.matrix(object$Y)
  Y0 <- as.matrix(object$Y0)
  C <- as.matrix(object$C)
  data <- as.data.frame(cbind(object$Y, object$Y0, object$C, object$X, object$Z))
  theta_post <- cbind(object$gammas, object$betas, object$lambda)
  theta_hat <- apply(theta_post, 2, mean)
  L <- llFun(theta_hat, Y, Y0, C, X, Z, data)$llik
  # Calculate P: add up the log-likelihoods of each iteration
  S <- nrow(theta_post) # S = number of iterations
  llSum <- 0
  sum1 <- 0
  sum2 <- 0
  sum3 <- 0
  for (s in 1:S) {
    theta_s <- as.matrix(theta_post[s, ])
    ll <- llFun(theta_s, Y, Y0, C, X, Z, data)
    llSum <- llSum + ll$llik
    sum1 <- sum1 + ll$one
    sum2 <- sum2 + ll$two
    sum3 <- sum3 + ll$three
  }
  P <- 2 * (L - (1 / S * llSum))
  # Calculate DIC
  DIC <- -2 * (L - P)
  # Average shares of observations passing llFun's finiteness filters
  all <- sum1 / S
  finite <- sum2 / S
  small <- sum3 / S
  # Return the results
  list(DIC = DIC, Loglik = L)
}

#' @title mfsurv.summary
#' @description Returns a summary of a mfsurv object via \code{\link[coda]{summary.mcmc}}.
#' @param object an object of class \code{mfsurv}, the output of \code{\link{mfsurv}}.
#' @param parameter one of three parameters of the mfsurv output. Indicate either "betas", "gammas" or "lambda".
#' @return list. Empirical mean, standard deviation and quantiles for each variable.
#' @examples
#' set.seed(95)
#' bgl <- Buhaugetal_2009_JCR
#' bgl <- subset(bgl, coupx == 0)
#' bgl <- na.omit(bgl)
#' Y <- bgl$Y
#' X <- as.matrix(cbind(1, bgl[,1:7]))
#' C <- bgl$C
#' Z1 <- matrix(1, nrow = nrow(bgl))
#' Y0 <- bgl$Y0
#' model1 <- mfsurv(Y ~ X | C ~ Z1, Y0 = Y0,
#'                 N = 50,
#'                 burn = 20,
#'                 thin = 15,
#'                 w = c(0.1, .1, .1),
#'                 m = 5,
#'                 form = "Weibull",
#'                 na.action = 'na.omit')
#'
#' mfsurv.summary(model1, "betas")
#' @export
mfsurv.summary <- function(object, parameter = c("betas", "gammas", "lambda")) {
  parameter <- match.arg(parameter)
  if (parameter == "betas") {
    return(summary(mcmc(object$betas)))
  }
  if (parameter == "gammas") {
    return(summary(mcmc(object$gammas)))
  }
  summary(mcmc(object$lambda))
}
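# The DIC computation above follows the formula quoted in the roxygen block:
# P = 2 * (L - average per-draw log-likelihood) and DIC = -2 * (L - P).
# The helper below is a generic, package-independent sketch of that recipe;
# the function name and the `loglik` argument are illustrative assumptions,
# not part of the BayesMFSurv API.
dic_from_draws <- function(theta_draws, loglik) {
  # log-likelihood evaluated at the posterior mean of the parameters
  L <- loglik(colMeans(theta_draws))
  # average log-likelihood over the posterior draws
  mean_ll <- mean(apply(theta_draws, 1, loglik))
  # effective number of parameters
  P <- 2 * (L - mean_ll)
  # DIC = -2 * (L - P) = -2 * L + 2 * P
  -2 * (L - P)
}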
/scratch/gouwar.j/cran-all/cranData/BayesMFSurv/R/bayes.mfsurv.R
# @title betas.slice.sampling2 # @description slice sampling for betas # @param Sigma.b variance estimate of betas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). # @param X covariates for betas # @param betas current value of betas # @param alpha probability of true censoring # @param C censoring indicator # @param lambda current value of lambda # @param w size of the slice in the slice sampling # @param m limit on steps in the slice sampling # @param form type of parametric model (Exponential or Weibull) # @return One sample update using slice sampling betas.slice.sampling2 <- function(Sigma.b, Y, Y0, X, betas, alpha, C, lambda, w, m, form){ p1 = length(betas) for (p in sample(1:p1, p1, replace = FALSE)) { betas[p] = univ.betas.slice.sampling2(betas[p], p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, w, m, form = form) } return(betas) } # @title univ.betas.slice.sampling2 # @description univariate slice sampling for betas.p # @param betas.p current value of the pth element of betas # @param p pth element # @param Sigma.b variance estimate of betas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). # @param X covariates for betas # @param betas current value of betas # @param alpha probability of true censoring # @param C censoring indicator # @param lambda current value of lambda # @param w size of the slice in the slice sampling # @param m limit on steps in the slice sampling # @param lower lower bound on support of the distribution # @param upper upper bound on support of the distribution # @param form type of parametric model (Exponential or Weibull) # @return One sample update using slice sampling univ.betas.slice.sampling2 <- function(betas.p, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, w, m, lower = -Inf, upper = +Inf, form){ b0 = betas.p b.post0 = betas.post2(b0, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form) if (exp(b.post0) > 0) { b.post0 = log(runif(1, 0, exp(b.post0)))} u = runif(1, 0, w) L = b0 - u R = b0 + (w - u) if (is.infinite(m)) { repeat { if (L <= lower) break if (betas.post2(L, p, Sigma.b, Y, Y0,X, betas, alpha, C, lambda, form) <= b.post0) break L = L - w } repeat { if (R >= upper) break if (betas.post2(R, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form) <= b.post0) break R = R + w } } else if (m > 1) { J = floor(runif(1, 0, m)) K = (m - 1) - J while (J > 0) { if (L <= lower) break if (betas.post2(L, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form) <= b.post0) break L = L - w J = J - 1 } while (K > 0) { if (R >= upper) break if (betas.post2(R, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form) <= b.post0) break R = R + w K = K - 1 } } if (L < lower) { L = lower } if (R > upper) { R = upper } repeat { b1 = runif(1, L, R) b.post1 = betas.post2(b1, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form) if (b.post1 >= b.post0) break if (b1 > b0) { R = b1 } else { L = b1 } } return(b1) } # @title gammas.slice.sampling2 # @description slice sampling for gammas # @param Sigma.g variance estimate of gammas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). 
# @param eXB exponentiated vector of covariates times betas # @param Z covariates for gammas # @param gammas current value of gammas # @param C censoring indicator # @param lambda current value of lambda # @param w size of the slice in the slice sampling # @param m limit on steps in the slice sampling # @param form type of parametric model (Exponential or Weibull) # @return One sample update using slice sampling gammas.slice.sampling2 <- function(Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, w, m, form){ p2 = length(gammas) for (p in sample(1:p2, p2, replace = FALSE)) { gammas[p] = univ.gammas.slice.sampling2(gammas[p], p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, w, m, form = form) } return(gammas) } # @title univ.gammas.slice.sampling2 # @description univariate slice sampling for gammas.p # @param gammas.p current value of the pth element of gammas # @param p pth element # @param Sigma.g variance estimate of gammas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). # @param eXB exponentiated vector of covariates times betas # @param Z covariates for gammas # @param gammas current value of gammas # @param C censoring indicator # @param lambda current value of lambda # @param w size of the slice in the slice sampling # @param m limit on steps in the slice sampling # @param lower lower bound on support of the distribution # @param upper upper bound on support of the distribution # @param form type of parametric model (Exponential or Weibull) # @return One sample update using slice sampling univ.gammas.slice.sampling2 <- function(gammas.p, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, w, m, lower = -Inf, upper = +Inf, form){ g0 = gammas.p g.post0 = gammas.post2(g0, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) if (exp(g.post0) > 0) { g.post0 = log(runif(1, 0, exp(g.post0)))} u = runif(1, 0, w) L = g0 - u R = g0 + (w - u) if (is.infinite(m)) { repeat { if (L <= lower) break if (gammas.post2(L, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) <= g.post0) break L = L - w } repeat { if (R >= upper) break if (gammas.post2(R, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) <= g.post0) break R = R + w } } else if (m > 1) { J = floor(runif(1, 0, m)) K = (m - 1) - J while (J > 0) { if (L <= lower) break if (gammas.post2(L, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) <= g.post0) break L = L - w J = J - 1 } while (K > 0) { if (R >= upper) break if (gammas.post2(R, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) <= g.post0) break R = R + w K = K - 1 } } if (L < lower) { L = lower } if (R > upper) { R = upper } repeat { g1 = runif(1, L, R) g.post1 = gammas.post2(g1, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form) if (g.post1 >= g.post0) break if (g1 > g0) { R = g1 } else { L = g1 } } return(g1) } # @title lambda.slice.sampling2 # @description univariate slice sampling for lambda # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). 
# @param eXB exponentiated vector of covariates times betas # @param alpha probability of true censoring # @param C censoring indicator # @param lambda current value of lambda # @param w size of the slice in the slice sampling # @param m limit on steps in the slice sampling # @param lower lower bound on support of the distribution # @param upper upper bound on support of the distribution # @return One sample update using slice sampling lambda.slice.sampling2 <- function(Y, Y0, eXB, alpha, C, lambda, w, m, lower = 0.01, upper = +Inf){ l0 = lambda l.post0 = lambda.post2(Y, Y0, eXB, alpha, C, l0) if (exp(l.post0) > 0) { l.post0 = log(runif(1, 0, exp(l.post0)))} u = runif(1, 0, w) L = l0 - u R = l0 + (w - u) if (is.infinite(m)) { repeat { if (L <= lower) break if (lambda.post2(Y, Y0, eXB, alpha, C, L) <= l.post0) break L = L - w } repeat { if (R >= upper) break if (lambda.post2(Y, Y0, eXB, alpha, C, R) <= l.post0) break R = R + w } } else if (m > 1) { J = floor(runif(1, 0, m)) K = (m - 1) - J while (J > 0) { if (L <= lower) break if (lambda.post2(Y, Y0, eXB, alpha, C, L) <= l.post0) break L = L - w J = J - 1 } while (K > 0) { if (R >= upper) break if (lambda.post2(Y, Y0, eXB, alpha, C, R) <= l.post0) break R = R + w K = K - 1 } } if (L < lower) { L = lower } if (R > upper) { R = upper } repeat { l1 = runif(1, L, R) l.post1 = lambda.post2(Y, Y0, eXB, alpha, C, l1) if (l.post1 >= l.post0) break if (l1 > l0) { R = l1 } else { L = l1 } } return(l1) } # @title betas.post2 # @description log-posterior distribution of betas with pth element fixed as betas.p # @param betas.p current value of the pth element of betas # @param p pth element # @param Sigma.b variance estimate of betas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). # @param X covariates for betas # @param betas current value of betas # @param alpha probability of true censoring # @param C censoring indicator # @param lambda current value of lambda # @param form type of parametric model (Exponential or Weibull) # @return log- posterior density of betas betas.post2 <- function(betas.p, p, Sigma.b, Y, Y0, X, betas, alpha, C, lambda, form){ betas[p] = betas.p if (form %in% "Weibull") { eXB = exp(X %*% betas) } else { eXB = exp(X %*% betas) } lprior = dmvnorm(betas, rep(0, length(betas)), Sigma.b, log = TRUE) lpost = llikWeibull2(Y, Y0, eXB, alpha, C, lambda) + lprior return(lpost) } # @title gammas.post2 # @description log-posterior distribution of gammas with pth element fixed as gammas.p # @param gammas.p current value of the pth element of gammas # @param p pth element # @param Sigma.g variance estimate of gammas # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). 
# @param eXB exponentiated vector of covariates times betas
# @param Z covariates for gammas
# @param gammas current value of gammas
# @param C censoring indicator
# @param lambda current value of lambda
# @param form type of parametric model (Exponential or Weibull)
# @return log-posterior density of gammas
gammas.post2 <- function(gammas.p, p, Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, form){
  gammas[p] = gammas.p
  if (form %in% "Weibull") {
    alpha = 1 / (1 + exp(-Z %*% gammas))
  } else {
    alpha = 1 / (1 + exp(-Z %*% gammas))
  }
  lprior = dmvnorm(gammas, rep(0, length(gammas)), Sigma.g, log = TRUE)
  lpost = llikWeibull2(Y, Y0, eXB, alpha, C, lambda) + lprior
  return(lpost)
}

# @title lambda.post2
# @description log-posterior distribution of lambda
# @param Y the time (duration) dependent variable for the survival stage (t).
# @param Y0 the elapsed time since inception until the beginning of time period (t-1).
# @param eXB exponentiated vector of covariates times betas
# @param alpha probability of true censoring
# @param C censoring indicator
# @param lambda current value of lambda
# @param a shape parameter of the gamma prior on lambda
# @param b rate parameter of the gamma prior on lambda
# @return log-posterior density of lambda
lambda.post2 <- function(Y, Y0, eXB, alpha, C, lambda, a = 1, b = 1){
  lprior = dgamma(lambda, a, b, log = TRUE)
  lpost = llikWeibull2(Y, Y0, eXB, alpha, C, lambda) + lprior
  return(lpost)
}

jointpost = function(Y, Y0, X, Z, betas, Sigma.b, gammas, Sigma.g, alpha, C, lambda, a = 1, b = 1, form){
  if (form %in% "Weibull") {
    alpha = 1 / (1 + exp(-Z %*% gammas))
  } else {
    alpha = 1 / (1 + exp(-Z %*% gammas))
  }
  if (form %in% "Weibull") {
    eXB = exp(X %*% betas)
  } else {
    eXB = exp(X %*% betas)
  }
  lprior0 = dmvnorm(betas, rep(0, length(betas)), Sigma.b, log = TRUE)
  lprior1 = dmvnorm(gammas, rep(0, length(gammas)), Sigma.g, log = TRUE)
  lprior2 = dgamma(lambda, a, b, log = TRUE)
  lpost = llikWeibull2(Y, Y0, eXB, alpha, C, lambda) + lprior0 + lprior1 + lprior2
  return(lpost)
}

# @title bayes.mfsurv.est
# @description Raw form of Markov Chain Monte Carlo (MCMC) to run the Bayesian parametric MF model. For the user-friendly formula-oriented command, use \code{\link{mfsurv}}.
# @param Y the time (duration) dependent variable for the survival stage (t).
# @param Y0 the elapsed time since inception until the beginning of time period (t-1).
# @param C the censoring (or failure) dependent variable for the misclassification stage.
# @param X covariates for the survival stage.
# @param Z covariates for the misclassification stage.
# @param N number of MCMC iterations.
# @param burn burn-ins to be discarded.
# @param thin thinning to prevent autocorrelation of chain of samples by only taking the n-th values.
# @param w size of the slice in the slice sampling for (betas, gammas, lambda). The default is c(1,1,1). This value may be changed by the user to meet one's needs.
# @param m limit on steps in the slice sampling. The default is 10. This value may be changed by the user to meet one's needs.
# @param na.action a function indicating what should happen when NAs are included in the data. Options are "na.omit" or "na.fail". The default is "na.omit".
# @param form type of parametric model distribution to be used. Options are "Exponential" or "Weibull". The default is "Weibull".
bayes.mfsurv.est <- function(Y, Y0, C, X, Z, N, burn, thin, w, m, form, na.action){ # na.action na.action <- na.action p1 = dim(X)[2] p2 = dim(Z)[2] # initial values betas = rep(0, p1) gammas = rep(0, p2) lambda = 1 if (form %in% "Weibull") { alpha = 1 / (1 + exp(-Z %*% gammas)) } else { alpha = 1 / (1 + exp(-Z %*% gammas)) } Sigma.b = 10 * p1 * diag(p1) Sigma.g = 10 * p2 * diag(p2) betas.samp = matrix(NA, nrow = (N - burn) / thin, ncol = p1) gammas.samp = matrix(NA, nrow = (N - burn) / thin, ncol = p2) lambda.samp = rep(NA, (N - burn) / thin) jointpost.samp = rep(NA, (N - burn) / thin) for (iter in 1:N) { if (iter %% 1000 == 0) print(iter) if (iter > burn) { Sigma.b = riwish(1 + p1, betas %*% t(betas) + p1 * diag(p1)) Sigma.g = riwish(1 + p2, gammas %*% t(gammas) + p2 * diag(p2)) } betas = betas.slice.sampling2(Sigma.b, Y, Y0, X, betas, alpha, C, lambda, w[1], m, form = form) eXB = exp(X %*% betas) gammas = gammas.slice.sampling2(Sigma.g, Y, Y0, eXB, Z, gammas, C, lambda, w[2], m, form = form) if (form %in% "Weibull") { alpha = 1 / (1 + exp(-Z %*% gammas)) } else { alpha = 1 / (1 + exp(-Z %*% gammas)) } if (form %in% "Weibull") { lambda = lambda.slice.sampling2(Y, Y0, eXB, alpha, C, lambda, w[3], m) } if (iter > burn & (iter - burn) %% thin == 0) { betas.samp[(iter - burn) / thin, ] = betas gammas.samp[(iter - burn) / thin, ] = gammas lambda.samp[(iter - burn) / thin] = lambda jointpost.samp[(iter - burn) / thin] = jointpost(Y, Y0, X, Z, betas, Sigma.b, gammas, Sigma.g, alpha, C, lambda, form = form) } } betas = as.data.frame(betas.samp) gammas = as.data.frame(gammas.samp) lambda = lambda.samp post = jointpost.samp names(betas) <- colnames(X) names(gammas) <- colnames(Z) Y <- as.matrix(Y) Y0 <- as.matrix(Y0) C <- as.matrix(C) X <- as.matrix(X) Z <- as.matrix(Z) return(list(Y = Y, Y0 = Y0, C = C, X = X, Z = Z, betas = betas, gammas = gammas, lambda = lambda, post = post, iterations = N, burn_in = burn, thinning = thin, betan = nrow(betas), gamman = nrow(gammas), distribution = form)) } # @title bayes.mfsurv.default # @description Fit a parametric Bayesian MF model via Markov Chain Monte Carlo (MCMC) to estimate the misclassification in the first stage # and the hazard in the second stage. # @param Y the time (duration) dependent variable for the survival stage (t). # @param Y0 the elapsed time since inception until the beginning of time period (t-1). # @param C the censoring (or failure) dependent variable for the misclassification stage (t-1). # @param X covariates for the survival stage. # @param Z covariates for the misclassification stage. # @param N number of MCMC iterations. # @param burn burn-ins to be discarded. # @param thin thinning to prevent autocorrelation of chain of samples by only taking the n-th values. # @param w size of the slice in the slice sampling for (betas, gammas, lambda). The default is c(1,1,1). This value may be changed by the user to meet one's needs. # @param m limit on steps in the slice sampling. The default is 10. This value may be changed by the user to meet one's needs. # @param form type of parametric model distribution to be used. Options are "Exponential" or "Weibull". The default is "Weibull". # @param na.action a function indicating what should happen when NAs are included in the data. Options are "na.omit" or "na.fail". The default is "na.omit". 
bayes.mfsurv.default <- function(Y, Y0, C, X, Z, N, burn, thin, w, m, form, na.action) {
  Y <- as.numeric(Y)
  Y0 <- as.numeric(Y0)
  C <- as.numeric(C)
  X <- as.matrix(X)
  Z <- as.matrix(Z)
  N <- as.numeric(N)
  burn <- as.numeric(burn)
  thin <- as.numeric(thin)
  m <- as.numeric(m)
  w <- as.vector(w)
  form <- as.character(form)
  est <- bayes.mfsurv.est(Y, Y0, C, X, Z, N, burn, thin, w, m, form, na.action)
  est$call <- match.call()
  class(est) <- "mfsurv"
  est
}

# @title llFun
# @description A function to calculate the log-likelihood of the data.
# @param est a vector of parameter values (gammas, betas, lambda, in that order), such as the posterior means or a single draw from the chain of posterior samples.
# @param Y a matrix of the time (duration) dependent variable for the survival stage (t).
# @param Y0 a matrix of the elapsed time since inception until the beginning of time period (t-1).
# @param C a matrix of the censoring (or failure) dependent variable for the misclassification stage.
# @param X a matrix of covariates for the survival stage.
# @param Z a matrix of covariates for the misclassification stage.
# @param data a data frame that contains the Y, Y0, C, X, and Z variables.
llFun <- function(est, Y, Y0, C, X, Z, data) {
  # Note the extra variable Y0 passed to the time-varying MF Weibull
  n <- nrow(data)
  llik <- matrix(0, nrow = n, ncol = 1)
  gamma <- est[1:ncol(Z)]
  beta <- est[(ncol(Z) + 1):(length(est) - 1)]
  p <- est[length(est)]
  p <- exp(p)
  XB <- X %*% beta
  ZG <- Z %*% gamma
  phi <- 1 / (1 + exp(-(ZG / p)))
  llik <- C * (log((1 - phi) + phi * exp(XB / p) * p * ((exp(XB / p) * Y)^(p - 1)) *
                     exp(-(exp(XB / p) * Y)^p)) / exp(-(exp(XB / p) * Y0)^p)) +
    (1 - C) * (log(phi) - (exp(XB / p) * Y)^p + (exp(XB / p) * Y0)^p)
  one <- nrow(llik)
  llik <- subset(llik, is.finite(llik))
  two <- nrow(llik)
  llik <- subset(llik, llik[, 1] > -1000)
  three <- nrow(llik)
  llik <- -1 * sum(llik)
  list(llik = llik, one = one, two = two, three = three)
}
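# The univ.*.slice.sampling2 functions above all follow Neal's (2003)
# stepping-out-and-shrinkage scheme. The stand-alone function below is an
# illustrative sketch of that scheme for a generic univariate log-density;
# it is not part of the package API. One deliberate difference: the auxiliary
# level is drawn as log_f(x0) - rexp(1), which is equivalent in distribution
# to log(runif(1, 0, exp(log_f(x0)))) but does not underflow when the
# log-density is very negative.
slice_sample_1d <- function(x0, log_f, w = 1, m = 10) {
  # log of a uniform height under the density at the current point
  level <- log_f(x0) - stats::rexp(1)
  # randomly position an interval of width w around x0
  u <- stats::runif(1, 0, w)
  L <- x0 - u
  R <- x0 + (w - u)
  # step out, spending at most m interval widths in total
  J <- floor(stats::runif(1, 0, m))
  K <- (m - 1) - J
  while (J > 0 && log_f(L) > level) {
    L <- L - w
    J <- J - 1
  }
  while (K > 0 && log_f(R) > level) {
    R <- R + w
    K <- K - 1
  }
  # shrink until a point inside the slice is accepted
  repeat {
    x1 <- stats::runif(1, L, R)
    if (log_f(x1) >= level) {
      return(x1)
    }
    if (x1 < x0) L <- x1 else R <- x1
  }
}
# Iterating x <- slice_sample_1d(x, function(z) stats::dnorm(z, log = TRUE))
# yields draws from a standard normal.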
/scratch/gouwar.j/cran-all/cranData/BayesMFSurv/R/utils.R
#' @keywords internal #' @aliases BayesMallows-package NULL #' #' @references #' #' \insertRef{sorensen2020}{BayesMallows} #' "_PACKAGE" ## usethis namespace: start #' @useDynLib BayesMallows, .registration = TRUE ## usethis namespace: end NULL #' @importFrom Rdpack reprompt #' @importFrom Rcpp sourceCpp #' @importFrom stats aggregate #' @importFrom utils head NULL
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/BayesMallows.R
# Generated by using Rcpp::compileAttributes() -> do not edit by hand # Generator token: 10BE3573-1514-4C36-9D1C-5A225CD40393 abind <- function(x, y) { .Call(`_BayesMallows_abind`, x, y) } #' Asymptotic Approximation of Partition Function #' #' Compute the asymptotic approximation of the logarithm of the partition function, #' using the iteration algorithm of \insertCite{mukherjee2016;textual}{BayesMallows}. #' #' @param alpha_vector A numeric vector of alpha values. #' @param n_items Integer specifying the number of items. #' @param metric One of \code{"footrule"} and \code{"spearman"}. #' @param K Integer. #' @param n_iterations Integer specifying the number of iterations. #' @param tol Stopping criterion for algorithm. The previous matrix is subtracted #' from the updated, and if the maximum absolute relative difference is below \code{tol}, #' the iteration stops. #' #' @return A vector, containing the partition function at each value of alpha. #' @keywords internal #' #' @references \insertAllCited{} #' asymptotic_partition_function <- function(alpha_vector, n_items, metric, K, n_iterations = 1000L, tol = 1e-9) { .Call(`_BayesMallows_asymptotic_partition_function`, alpha_vector, n_items, metric, K, n_iterations, tol) } get_rank_distance <- function(rankings, rho, metric) { .Call(`_BayesMallows_get_rank_distance`, rankings, rho, metric) } compute_importance_sampling_estimate <- function(alpha_vector, n_items, metric = "footrule", nmc = 1e4L) { .Call(`_BayesMallows_compute_importance_sampling_estimate`, alpha_vector, n_items, metric, nmc) } get_expected_distance <- function(alpha, n_items, metric, pfun_values) { .Call(`_BayesMallows_get_expected_distance`, alpha, n_items, metric, pfun_values) } get_partition_function <- function(alpha, n_items, metric, pfun_values) { .Call(`_BayesMallows_get_partition_function`, alpha, n_items, metric, pfun_values) } #' Sample from the Mallows distribution. #' #' Sample from the Mallows distribution with arbitrary distance metric using #' a Metropolis-Hastings algorithm. #' #' @param rho0 Vector specifying the latent consensus ranking. #' @param alpha0 Scalar specifying the scale parameter. #' @param n_samples Integer specifying the number of random samples to generate. #' @param burnin Integer specifying the number of iterations to discard as burn-in. #' @param thinning Integer specifying the number of MCMC iterations to perform #' between each time a random rank vector is sampled. #' @param leap_size Integer specifying the step size of the leap-and-shift proposal distribution. #' @param metric Character string specifying the distance measure to use. Available #' options are \code{"footrule"} (default), \code{"spearman"}, \code{"cayley"}, \code{"hamming"}, #' \code{"kendall"}, and \code{"ulam"}. 
#' #' @keywords internal #' #' @references \insertAllCited{} #' rmallows <- function(rho0, alpha0, n_samples, burnin, thinning, leap_size = 1L, metric = "footrule") { .Call(`_BayesMallows_rmallows`, rho0, alpha0, n_samples, burnin, thinning, leap_size, metric) } run_mcmc <- function(data, model_options, compute_options, priors, initial_values, pfun_values, pfun_estimate, verbose = FALSE) { .Call(`_BayesMallows_run_mcmc`, data, model_options, compute_options, priors, initial_values, pfun_values, pfun_estimate, verbose) } run_smc <- function(data, new_data, model_options, smc_options, compute_options, priors, initial_values, pfun_values, pfun_estimate) { .Call(`_BayesMallows_run_smc`, data, new_data, model_options, smc_options, compute_options, priors, initial_values, pfun_values, pfun_estimate) }
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/RcppExports.R
#' @title Get Acceptance Ratios
#' @description Extract acceptance ratios from the Metropolis-Hastings
#' algorithm used by [compute_mallows()] or by the move step in
#' [update_mallows()] and [compute_mallows_sequentially()]. Currently the
#' function only returns the values, but it will be refined in the future. If
#' burnin is not set in the call to [compute_mallows()], the acceptance ratio
#' for all iterations will be reported. Otherwise the post-burnin acceptance
#' ratio is reported. For the SMC method the acceptance ratios apply to all
#' iterations, since no burnin is needed here.
#'
#' @param model_fit A model fit.
#' @param ... Other arguments passed on to other methods. Currently not used.
#'
#' @export
#' @example /inst/examples/get_acceptance_ratios_example.R
#'
#' @family posterior quantities
#'
get_acceptance_ratios <- function(model_fit, ...) {
  UseMethod("get_acceptance_ratios")
}

#' @export
#' @rdname get_acceptance_ratios
get_acceptance_ratios.BayesMallows <- function(model_fit, ...) {
  model_fit$acceptance_ratios
}

#' @export
#' @rdname get_acceptance_ratios
get_acceptance_ratios.SMCMallows <- function(model_fit, ...) {
  model_fit$acceptance_ratios
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/acceptance_ratio.R
# Translation to R of C++ and Python code found here # https://www.geeksforgeeks.org/all-topological-sorts-of-a-directed-acyclic-graph/ all_topological_sorts <- function(graph, n_items, env, path = integer(), discovered = rep(FALSE, n_items)) { flag <- FALSE for (i in seq_len(n_items)) { if (attr(graph, "indegree")[[i]] == 0 && !discovered[[i]]) { attr(graph, "indegree")[graph[[i]]] <- attr(graph, "indegree")[graph[[i]]] - 1 path <- c(path, i) discovered[[i]] <- TRUE all_topological_sorts(graph, n_items, env, path, discovered) attr(graph, "indegree")[graph[[i]]] <- attr(graph, "indegree")[graph[[i]]] + 1 path <- path[-length(path)] discovered[[i]] <- FALSE flag <- TRUE } } if (length(path) == n_items) { assign("x", c(get("x", envir = env), list(path)), envir = env) assign("num", get("num", envir = env) + 1, envir = env) } }
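# A small illustrative driver (not part of the package) showing the assumed
# calling convention: `graph` is a list whose i-th element holds the nodes
# reachable from node i, with an "indegree" attribute giving each node's
# in-degree; results accumulate in `env` as `x` (the orderings) and `num`
# (their count).
demo_all_topological_sorts <- function() {
  # DAG with edges 1 -> 2, 1 -> 3, 2 -> 3
  graph <- list(c(2L, 3L), 3L, integer())
  attr(graph, "indegree") <- c(0, 1, 2)
  env <- new.env()
  assign("x", list(), envir = env)
  assign("num", 0, envir = env)
  all_topological_sorts(graph, n_items = 3, env = env)
  get("x", envir = env) # list(c(1, 2, 3)): the only valid ordering
}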
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/all_topological_sorts.R
#' Trace Plots from Metropolis-Hastings Algorithm #' #' `assess_convergence` provides trace plots for the parameters of the Mallows #' Rank model, in order to study the convergence of the Metropolis-Hastings #' algorithm. #' #' @param model_fit A fitted model object of class `BayesMallows` returned from #' [compute_mallows()] or an object of class `BayesMallowsMixtures` returned #' from [compute_mallows_mixtures()]. #' #' @param parameter Character string specifying which parameter to plot. #' Available options are `"alpha"`, `"rho"`, `"Rtilde"`, `"cluster_probs"`, or #' `"theta"`. #' #' @param items The items to study in the diagnostic plot for `rho`. Either a #' vector of item names, corresponding to `model_fit$data$items` or a vector of #' indices. If NULL, five items are selected randomly. Only used when #' `parameter = "rho"` or `parameter = "Rtilde"`. #' #' @param assessors Numeric vector specifying the assessors to study in the #' diagnostic plot for `"Rtilde"`. #' #' @param ... Other arguments passed on to other methods. Currently not used. #' #' @export #' @family diagnostics #' #' @example /inst/examples/assess_convergence_example.R assess_convergence <- function(model_fit, ...) { UseMethod("assess_convergence") } #' @export #' @rdname assess_convergence assess_convergence.BayesMallows <- function( model_fit, parameter = c("alpha", "rho", "Rtilde", "cluster_probs", "theta"), items = NULL, assessors = NULL, ...) { parameter <- match.arg( parameter, c("alpha", "rho", "Rtilde", "cluster_probs", "theta") ) if (parameter == "alpha") { trace_alpha(model_fit$alpha, FALSE) } else if (parameter == "rho") { trace_rho(model_fit, items) } else if (parameter == "Rtilde") { trace_rtilde(model_fit, items, assessors) } else if (parameter == "cluster_probs") { m <- model_fit$cluster_probs m$n_clusters <- model_fit$n_clusters trace_cluster_probs(m) } else if (parameter == "theta") { trace_theta(model_fit) } } #' @export #' @rdname assess_convergence assess_convergence.BayesMallowsMixtures <- function( model_fit, parameter = c("alpha", "cluster_probs"), items = NULL, assessors = NULL, ...) { parameter <- match.arg(parameter, c("alpha", "cluster_probs")) if (parameter == "alpha") { m <- do.call(rbind, lapply(model_fit, function(x) { x$alpha$cluster <- as.character(x$alpha$cluster) x$alpha$n_clusters <- x$n_clusters x$alpha })) trace_alpha(m, TRUE) } else if (parameter == "cluster_probs") { m <- do.call(rbind, lapply(model_fit, function(x) { x$cluster_probs$cluster <- as.character(x$cluster_probs$cluster) x$cluster_probs$n_clusters <- x$n_clusters x$cluster_probs })) trace_cluster_probs(m) } } trace_alpha <- function(m, clusters) { p <- ggplot2::ggplot(m, ggplot2::aes( x = .data$iteration, y = .data$value, group = interaction(.data$chain, .data$cluster), color = .data$cluster, linetype = .data$chain )) + ggplot2::geom_line() + ggplot2::xlab("Iteration") + ggplot2::ylab(expression(alpha)) + ggplot2::labs(color = "Cluster") + ggplot2::labs(linetype = "Chain") if (clusters) { p <- p + ggplot2::theme(legend.position = "none") + ggplot2::facet_wrap(ggplot2::vars(.data$n_clusters), labeller = ggplot2::as_labeller(cluster_labeler_function), scales = "free_y" ) } return(p) } trace_rho <- function(model_fit, items, clusters = model_fit$n_clusters > 1) { if (is.null(items) && model_fit$data$n_items > 5) { message("Items not provided by user. 
Picking 5 at random.") items <- sample.int(model_fit$data$n_items, 5) } else if (is.null(items) && model_fit$data$n_items > 0) { items <- seq.int(from = 1, to = model_fit$data$n_items) } else if (!is.null(items)) { if (is.numeric(items) && length(setdiff(items, seq_len(model_fit$data$n_items))) > 0) { stop("numeric items vector must contain indices between 1 and the number of items") } if (is.character(items) && length(setdiff(items, model_fit$data$items) > 0)) { stop("unknown items provided") } } if (!is.character(items)) { items <- model_fit$data$items[items] } df <- model_fit$rho[model_fit$rho$item %in% items, , drop = FALSE] p <- ggplot2::ggplot( df, ggplot2::aes( x = .data$iteration, y = .data$value, color = .data$item ) ) + ggplot2::geom_line() + ggplot2::theme(legend.title = ggplot2::element_blank()) + ggplot2::xlab("Iteration") + ggplot2::ylab(expression(rho)) if (clusters) { p <- p + ggplot2::facet_wrap(ggplot2::vars(.data$cluster)) } else { p <- p + ggplot2::facet_wrap( ggplot2::vars(.data$chain), labeller = ggplot2::as_labeller(function(x) paste("Chain", x)) ) } return(p) } trace_rtilde <- function(model_fit, items, assessors, ...) { if (!model_fit$save_aug) { stop("Please rerun with compute_mallows with save_aug = TRUE") } if (is.null(items) && model_fit$data$n_items > 5) { message("Items not provided by user. Picking 5 at random.") items <- sample.int(model_fit$data$n_items, 5) } else if (is.null(items) && model_fit$data$n_items > 0) { items <- seq.int(from = 1, to = model_fit$data$n_items) } if (is.null(assessors) && model_fit$data$n_assessors > 5) { message("Assessors not provided by user. Picking 5 at random.") assessors <- sample.int(model_fit$data$n_assessors, 5) } else if (is.null(assessors) && model_fit$data$n_assessors > 0) { assessors <- seq.int(from = 1, to = model_fit$data$n_assessors) } else if (!is.null(assessors)) { if (length(setdiff(assessors, seq(1, model_fit$data$n_assessors, 1))) > 0) { stop("assessors vector must contain numeric indices between 1 and the number of assessors") } } if (is.factor(model_fit$augmented_data$item) && is.numeric(items)) { items <- levels(model_fit$augmented_data$item)[items] } df <- model_fit$augmented_data[ model_fit$augmented_data$assessor %in% assessors & model_fit$augmented_data$item %in% items, , drop = FALSE ] df$assessor <- as.factor(df$assessor) levels(df$assessor) <- paste("Assessor", levels(df$assessor)) df$chain <- as.factor(df$chain) levels(df$chain) <- paste("Chain", levels(df$chain)) ggplot2::ggplot(df, ggplot2::aes(x = .data$iteration, y = .data$value, color = .data$item)) + ggplot2::geom_line() + ggplot2::facet_wrap(ggplot2::vars(.data$assessor, .data$chain)) + ggplot2::theme(legend.title = ggplot2::element_blank()) + ggplot2::xlab("Iteration") + ggplot2::ylab("Rtilde") } trace_cluster_probs <- function(m) { ggplot2::ggplot(m, ggplot2::aes( x = .data$iteration, y = .data$value, color = .data$cluster )) + ggplot2::geom_line() + ggplot2::theme(legend.position = "none") + ggplot2::xlab("Iteration") + ggplot2::ylab(expression(tau[c])) + ggplot2::facet_wrap(ggplot2::vars(.data$n_clusters), labeller = ggplot2::as_labeller(cluster_labeler_function), scales = "free_y" ) } trace_theta <- function(model_fit) { if (is.null(model_fit$theta) || length(model_fit$theta) == 0) { stop("Theta not available. 
Run compute_mallows with error_model = 'bernoulli'.") } p <- ggplot2::ggplot(model_fit$theta, ggplot2::aes(x = .data$iteration, y = .data$value)) + ggplot2::xlab("Iteration") + ggplot2::ylab(expression(theta)) + ggplot2::geom_line() return(p) } cluster_labeler_function <- function(n_clusters) { paste(n_clusters, ifelse(n_clusters == 1, "cluster", "clusters")) }
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/assess_convergence.R
#' @title Assign Assessors to Clusters
#'
#' @description Assign assessors to clusters by finding the cluster with the
#' highest posterior probability.
#'
#' @param model_fit An object of type `BayesMallows`, returned from
#' [compute_mallows()].
#'
#' @param soft A logical specifying whether to perform soft or hard clustering.
#' If `soft=TRUE`, all cluster probabilities are returned, whereas if
#' `soft=FALSE`, only the maximum a posteriori (MAP) cluster probability is
#' returned, per assessor. In the case of a tie between two or more cluster
#' assignments, the first of the tied clusters is taken as the MAP estimate.
#'
#' @param expand A logical specifying whether or not to expand the rowset of
#' each assessor to also include clusters for which the assessor has zero
#' posterior assignment probability. Only used when `soft = TRUE`. Defaults to
#' `FALSE`.
#'
#' @return A dataframe. If `soft = FALSE`, it has one row per assessor, and
#' columns `assessor`, `probability` and `map_cluster`. If `soft = TRUE`, it
#' has `n_cluster` rows per assessor, and the additional column `cluster`.
#'
#' @export
#'
#' @family posterior quantities
#'
#' @examples
#' # Fit a model with three clusters to the simulated example data
#' set.seed(1)
#' mixture_model <- compute_mallows(
#'   data = setup_rank_data(cluster_data),
#'   model_options = set_model_options(n_clusters = 3),
#'   compute_options = set_compute_options(nmc = 5000, burnin = 1000)
#' )
#'
#' head(assign_cluster(mixture_model))
#' head(assign_cluster(mixture_model, soft = FALSE))
#'
assign_cluster <- function(
    model_fit,
    soft = TRUE,
    expand = FALSE) {
  if (is.null(burnin(model_fit))) {
    stop("Please specify the burnin with 'burnin(model_fit) <- value'.")
  }
  df <- model_fit$cluster_assignment[
    model_fit$cluster_assignment$iteration > burnin(model_fit), ,
    drop = FALSE
  ]
  df <- aggregate(
    list(count = df$iteration),
    list(assessor = df$assessor, cluster = df$value),
    FUN = length, drop = !expand
  )
  df$count[is.na(df$count)] <- 0
  df <- do.call(rbind, lapply(split(df, f = df$assessor), function(x) {
    x$probability <- x$count / sum(x$count)
    x$count <- NULL
    x
  }))
  map <- do.call(rbind, lapply(split(df, f = df$assessor), function(x) {
    x <- x[x$probability == max(x$probability), , drop = FALSE]
    x <- x[1, , drop = FALSE] # in case of ties, keep the first
    x$map_cluster <- x$cluster
    x$cluster <- x$probability <- NULL
    x
  }))
  df <- merge(df, map, by = "assessor")
  if (!soft) {
    df <- df[df$cluster == df$map_cluster, , drop = FALSE]
    df$cluster <- NULL
  }
  return(df)
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/assign_cluster.R
#' @title Set the burnin #' @description Set or update the burnin of a model #' computed using Metropolis-Hastings. #' #' @param model An object of class `BayesMallows` returned from #' [compute_mallows()] or an object of class `BayesMallowsMixtures` returned #' from [compute_mallows_mixtures()]. #' @param ... Optional arguments passed on to other methods. Currently not used. #' @param value An integer specifying the burnin. If `model` is of class #' `BayesMallowsMixtures`, a single value will be assumed to be the burnin #' for each model element. Alternatively, `value` can be specified as an #' integer vector of the same length as `model`, and hence a separate burnin #' can be set for each number of mixture components. #' #' @export #' @return An object of class `BayesMallows` with burnin set. #' #' @family modeling #' #' @example /inst/examples/burnin_example.R #' `burnin<-` <- function(model, ..., value) UseMethod("burnin<-") #' @export #' @rdname burnin-set `burnin<-.BayesMallows` <- function(model, ..., value) { if (inherits(model, "SMCMallows")) { stop("Cannot set burnin for SMC model.") } validate_integer(value) if (value >= model$compute_options$nmc) { stop("Burnin cannot be larger than the number of Monte Carlo samples.") } # Workaround as long as we have the deprecation notice for `$<-` class(model) <- "list" model$compute_options$burnin <- value class(model) <- "BayesMallows" model } #' @export #' @rdname burnin-set `burnin<-.BayesMallowsMixtures` <- function(model, ..., value) { for (v in value) validate_integer(v) if (length(value) == 1) value <- rep(value, length(model)) if (length(value) != length(model)) stop("Wrong number of entries in value.") for (i in seq_along(model)) burnin(model[[i]]) <- value[[i]] model } #' @title See the burnin #' @description #' See the current burnin value of the model. #' #' @param model A model object. #' @param ... Optional arguments passed on to other methods. Currently not used. #' #' @export #' @return An integer specifying the burnin, if it exists. Otherwise `NULL`. #' #' @family modeling #' #' @example /inst/examples/burnin_example.R #' burnin <- function(model, ...) UseMethod("burnin") #' @rdname burnin #' @export burnin.BayesMallows <- function(model, ...) { model$compute_options$burnin } #' @rdname burnin #' @export burnin.BayesMallowsMixtures <- function(model, ...) { lapply(model, burnin) } #' @rdname burnin #' @export burnin.SMCMallows <- function(model, ...) 0
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/burnin.R
#' @title Compute Consensus Ranking
#' @description Compute the consensus ranking using either cumulative
#' probability (CP) or maximum a posteriori (MAP) consensus
#' \insertCite{vitelli2018}{BayesMallows}. For mixture models, the consensus
#' is given for each mixture. Consensus of augmented ranks can also be
#' computed for each assessor, by setting `parameter = "Rtilde"`.
#'
#' @param model_fit A model fit.
#' @param type Character string specifying which consensus to compute. Either
#' `"CP"` or `"MAP"`. Defaults to `"CP"`.
#' @param parameter Character string defining the parameter for which to compute
#' the consensus. Defaults to `"rho"`. Available options are `"rho"` and
#' `"Rtilde"`, with the latter giving consensus rankings for augmented ranks.
#' @param assessors When `parameter = "Rtilde"`, this integer vector is used to
#' define the assessors for which to compute the augmented ranking. Defaults
#' to `1L`, which yields augmented rankings for assessor 1.
#' @param ... Other arguments passed on to other methods. Currently not used.
#'
#' @references \insertAllCited{}
#' @export
#' @example /inst/examples/compute_consensus_example.R
#'
#' @family posterior quantities
#'
compute_consensus <- function(model_fit, ...) {
  UseMethod("compute_consensus")
}

#' @export
#' @rdname compute_consensus
compute_consensus.BayesMallows <- function(
    model_fit, type = c("CP", "MAP"), parameter = c("rho", "Rtilde"),
    assessors = 1L, ...) {
  if (is.null(burnin(model_fit))) {
    stop("Please specify the burnin with 'burnin(model_fit) <- value'.")
  }
  type <- match.arg(type, c("CP", "MAP"))
  parameter <- match.arg(parameter, c("rho", "Rtilde"))
  if (parameter == "Rtilde" &&
      !inherits(model_fit$augmented_data, "data.frame")) {
    stop("For augmented ranks, please refit model with option 'save_aug = TRUE'.")
  }
  if (parameter == "rho") {
    df <- model_fit$rho[model_fit$rho$iteration > burnin(model_fit), , drop = FALSE]
    if (type == "CP") {
      df <- cpc_bm(df)
    } else if (type == "MAP") {
      df <- cpm_bm(df)
    }
  } else if (parameter == "Rtilde") {
    df <- model_fit$augmented_data[
      model_fit$augmented_data$iteration > burnin(model_fit) &
        model_fit$augmented_data$assessor %in% assessors, ,
      drop = FALSE
    ]
    names(df)[names(df) == "assessor"] <- "cluster"
    class(df) <- c("consensus_BayesMallows", "tbl_df", "tbl", "data.frame")
    if (type == "CP") {
      df <- cpc_bm(df)
    } else if (type == "MAP") {
      df <- cpm_bm(df)
    }
    if ("cluster" %in% names(df)) {
      df$assessor <- as.numeric(df$cluster)
      df$cluster <- NULL
    }
  }
  row.names(df) <- NULL
  as.data.frame(df)
}

#' @export
#' @rdname compute_consensus
compute_consensus.SMCMallows <- function(
    model_fit, type = c("CP", "MAP"), parameter = "rho", ...) {
  parameter <- match.arg(parameter, "rho")
  model_fit$compute_options$burnin <- 0
  model_fit$compute_options$nmc <- model_fit$n_particles
  NextMethod("compute_consensus")
}

# Internal function for finding CP consensus.
find_cpc <- function(group_df, group_var = "cluster") {
  # Declare the result dataframe before adding rows to it
  result <- data.frame(
    cluster = character(),
    ranking = numeric(),
    item = character(),
    cumprob = numeric()
  )
  n_items <- max(group_df$value)
  group_df$cumprob[is.na(group_df$cumprob)] <- 0
  for (i in seq(from = 1, to = n_items, by = 1)) {
    # Filter out the relevant rows
    tmp_df <- group_df[group_df$value == i, , drop = FALSE]
    # Remove items already placed in the result
    tmp_df <- tmp_df[!interaction(tmp_df[c("cluster", "item")]) %in%
                       interaction(result[c("cluster", "item")]), ]
    if (nrow(tmp_df) >= 1) {
      # Keep the max only. This filtering must be done after the first filter,
      # since we take the maximum among the filtered values
      tmp_df <- do.call(
        rbind,
        lapply(split(tmp_df, f = tmp_df[group_var]), function(x) {
          x[x$cumprob == max(x$cumprob), ]
        })
      )
      # Add the ranking
      tmp_df$ranking <- i
      # Select the columns we want to keep, and put them in result
      result <- rbind(
        result,
        tmp_df[, c("cluster", "ranking", "item", "cumprob"), drop = FALSE]
      )
    }
  }
  return(result)
}

aggregate_cp_consensus <- function(df) {
  # Convert items and cluster to character, since factor levels are not needed here
  df$item <- as.character(df$item)
  df$cluster <- as.character(df$cluster)
  df <- aggregate(
    list(n = df$iteration),
    by = list(
      item = as.character(df$item),
      cluster = as.character(df$cluster),
      value = df$value
    ),
    FUN = length
  )
  # Arrange according to value, per item and cluster
  do.call(rbind, lapply(split(df, f = ~ item + cluster), function(x) {
    x <- x[order(x$value), ]
    x$cumprob <- cumsum(x$n) / sum(x$n)
    x
  }))
}

aggregate_map_consensus <- function(df, n_samples) {
  # Group by everything except iteration, and count the unique combinations
  df <- aggregate(list(n = df$iteration), df[, setdiff(names(df), "iteration")],
    FUN = length
  )
  # Keep only the maximum per cluster
  df <- do.call(rbind, lapply(split(df, f = df$cluster), function(x) {
    x$n_max <- max(x$n)
    x[x$n == x$n_max, , drop = FALSE]
  }))
  # Compute the probability
  df$probability <- df$n / n_samples
  df$n_max <- df$n <- NULL
  df
}

cpc_bm <- function(df) {
  df <- aggregate_cp_consensus(df)
  df <- find_cpc(df)
  df[order(df$cluster, df$ranking), ]
}

cpm_bm <- function(df) {
  n_samples <- length(unique(df$iteration))
  # Reshape to get items along columns
  df <- stats::reshape(as.data.frame(df),
    direction = "wide",
    idvar = c("chain", "cluster", "iteration"),
    timevar = "item"
  )
  df$chain <- NULL
  names(df) <- gsub("^value\\.", "", names(df))
  df <- aggregate_map_consensus(df, n_samples)
  # Now collect one set of ranks per cluster
  df$id <- seq_len(nrow(df))
  df <- stats::reshape(as.data.frame(df),
    direction = "long",
    varying = setdiff(names(df), c("cluster", "probability", "id")),
    v.names = "map_ranking",
    timevar = "item",
    idvar = c("cluster", "probability", "id"),
    times = setdiff(names(df), c("cluster", "probability", "id"))
  )
  rownames(df) <- NULL
  df$id <- NULL
  # Sort according to cluster and ranking
  df[
    order(df$cluster, df$map_ranking),
    c("cluster", "map_ranking", "item", "probability"),
    drop = FALSE
  ]
}
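# A toy illustration (hypothetical data, not part of the package) of the CP
# consensus logic in find_cpc(): at each rank i = 1, 2, ..., the item with the
# highest cumulative probability P(rank <= i) among the not-yet-placed items
# is selected.
demo_find_cpc <- function() {
  toy <- data.frame(
    cluster = "Cluster 1",
    item = rep(c("A", "B"), each = 2),
    value = c(1, 2, 1, 2),
    cumprob = c(0.7, 1, 0.3, 1)
  )
  # Item A takes rank 1 (0.7 > 0.3); item B is then placed at rank 2.
  find_cpc(toy)
}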
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/compute_consensus.R
#' Preference Learning with the Mallows Rank Model #' #' @description Compute the posterior distributions of the parameters of the #' Bayesian Mallows Rank Model, given rankings or preferences stated by a set #' of assessors. #' #' The `BayesMallows` package uses the following parametrization of the #' Mallows rank model \insertCite{mallows1957}{BayesMallows}: #' #' \deqn{p(r|\alpha,\rho) = \frac{1}{Z_{n}(\alpha)} \exp\left\{\frac{-\alpha}{n} #' d(r,\rho)\right\}} #' #' where \eqn{r} is a ranking, \eqn{\alpha} is a scale parameter, \eqn{\rho} #' is the latent consensus ranking, \eqn{Z_{n}(\alpha)} is the partition #' function (normalizing constant), and \eqn{d(r,\rho)} is a distance function #' measuring the distance between \eqn{r} and \eqn{\rho}. We refer to #' \insertCite{vitelli2018;textual}{BayesMallows} for further details of the Bayesian #' Mallows model. #' #' `compute_mallows` always returns posterior distributions of the latent #' consensus ranking \eqn{\rho} and the scale parameter \eqn{\alpha}. Several #' distance measures are supported, and the preferences can take the form of #' complete or incomplete rankings, as well as pairwise preferences. #' `compute_mallows` can also compute mixtures of Mallows models, for #' clustering of assessors with similar preferences. #' #' @param data An object of class "BayesMallowsData" returned from #' [setup_rank_data()]. #' #' @param model_options An object of class "BayesMallowsModelOptions" returned #' from [set_model_options()]. #' #' @param compute_options An object of class "BayesMallowsComputeOptions" #' returned from [set_compute_options()]. #' #' @param priors An object of class "BayesMallowsPriors" returned from #' [set_priors()]. #' #' @param initial_values An object of class "BayesMallowsInitialValues" returned #' from [set_initial_values()]. #' #' @param pfun_estimate Object returned from [estimate_partition_function()]. #' Defaults to \code{NULL}, and will only be used for footrule, Spearman, or #' Ulam distances when the cardinalities are not available, cf. #' [get_cardinalities()]. #' #' @param verbose Logical specifying whether to print out the progress of the #' Metropolis-Hastings algorithm. If `TRUE`, a notification is printed #' every 1000th iteration. Defaults to `FALSE`. #' #' @param cl Optional cluster returned from [parallel::makeCluster()]. If #' provided, chains will be run in parallel, one on each node of `cl`. #' #' @return An object of class BayesMallows. 
#'
#' @references \insertAllCited{}
#'
#' @export
#' @importFrom rlang .data
#'
#' @family modeling
#'
#' @example /inst/examples/compute_mallows_example.R
#' @example /inst/examples/label_switching_example.R
#'
compute_mallows <- function(
    data,
    model_options = set_model_options(),
    compute_options = set_compute_options(),
    priors = set_priors(),
    initial_values = set_initial_values(),
    pfun_estimate = NULL,
    verbose = FALSE,
    cl = NULL) {
  validate_class(data, "BayesMallowsData")
  validate_class(model_options, "BayesMallowsModelOptions")
  validate_class(compute_options, "BayesMallowsComputeOptions")
  validate_class(priors, "BayesMallowsPriors")
  validate_class(initial_values, "BayesMallowsInitialValues")
  validate_rankings(data)
  validate_preferences(data, model_options)
  validate_initial_values(initial_values, data)

  pfun_values <- extract_pfun_values(
    model_options$metric,
    data$n_items,
    pfun_estimate
  )

  if (is.null(cl)) {
    lapplyfun <- lapply
    chain_seq <- 1
  } else {
    lapplyfun <- prepare_cluster(cl, c(
      "data", "model_options", "compute_options", "priors", "initial_values",
      "pfun_values", "pfun_estimate", "verbose"
    ))
    chain_seq <- seq_along(cl)
  }

  fits <- lapplyfun(X = chain_seq, FUN = function(i) {
    if (length(initial_values$alpha_init) > 1) {
      initial_values$alpha_init <- initial_values$alpha_init[[i]]
    }
    run_mcmc(
      data = data,
      model_options = model_options,
      compute_options = compute_options,
      priors = priors,
      initial_values = initial_values,
      pfun_values = pfun_values,
      pfun_estimate = pfun_estimate,
      verbose = verbose
    )
  })

  fit <- tidy_mcmc(fits, data, model_options, compute_options)
  fit$pfun_values <- pfun_values
  fit$pfun_estimate <- pfun_estimate
  fit$priors <- priors
  class(fit) <- "BayesMallows"
  return(fit)
}
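# A hedged usage sketch (wrapped in a function so nothing runs at package
# load) of fitting the model with parallel chains via the `cl` argument; the
# dataset and the option values are illustrative, not recommendations.
demo_compute_mallows_parallel <- function() {
  cl <- parallel::makeCluster(2)
  on.exit(parallel::stopCluster(cl))
  # One chain runs on each node of the cluster
  compute_mallows(
    data = setup_rank_data(potato_visual),
    compute_options = set_compute_options(nmc = 2000, burnin = 500),
    cl = cl
  )
}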
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/compute_mallows.R
#' Compute Mixtures of Mallows Models
#'
#' Convenience function for computing Mallows models with varying numbers of
#' mixtures. This is useful for deciding the number of mixtures to use in the
#' final model.
#'
#' @param n_clusters Integer vector specifying the number of clusters to use.
#' @inheritParams compute_mallows
#'
#' @return A list of Mallows models of class `BayesMallowsMixtures`, with
#' one element for each number of mixtures that was computed. This object can
#' be studied with [plot_elbow()].
#'
#' @details
#' The `n_clusters` argument to [set_model_options()] is ignored
#' when calling `compute_mallows_mixtures`.
#'
#' @family modeling
#'
#' @export
#'
#' @example /inst/examples/compute_mallows_mixtures_example.R
#'
compute_mallows_mixtures <- function(
    n_clusters,
    data,
    model_options = set_model_options(),
    compute_options = set_compute_options(),
    priors = set_priors(),
    initial_values = set_initial_values(),
    pfun_estimate = NULL,
    verbose = FALSE,
    cl = NULL) {
  stopifnot(is.null(cl) || inherits(cl, "cluster"))
  if (is.null(cl)) {
    lapplyfun <- lapply
  } else {
    lapplyfun <- prepare_cluster(cl, c(
      "data", "model_options", "compute_options", "priors", "initial_values",
      "pfun_estimate", "verbose"
    ))
  }
  models <- lapplyfun(n_clusters, function(x) {
    model_options$n_clusters <- x
    compute_mallows(
      data = data,
      model_options = model_options,
      compute_options = compute_options,
      priors = priors,
      initial_values = initial_values,
      pfun_estimate = pfun_estimate,
      verbose = verbose
    )
  })
  class(models) <- "BayesMallowsMixtures"
  return(models)
}
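# An illustrative sketch (wrapped in a function so nothing runs at package
# load) of choosing the number of mixture components: fit models with one to
# three clusters and study the result with plot_elbow(), as the roxygen block
# above suggests. The option values are illustrative.
demo_compute_mallows_mixtures <- function() {
  compute_mallows_mixtures(
    n_clusters = 1:3,
    data = setup_rank_data(cluster_data),
    compute_options = set_compute_options(nmc = 2000, burnin = 500)
  )
}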
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/compute_mallows_mixtures.R
#' @title Estimate the Bayesian Mallows Model Sequentially
#'
#' @description Compute the posterior distributions of the parameters of the
#' Bayesian Mallows model using sequential Monte Carlo. This is based on the
#' algorithms developed in
#' \insertCite{steinSequentialInferenceMallows2023;textual}{BayesMallows}.
#' This function differs from [update_mallows()] in that it takes all the data
#' at once, and uses SMC to fit the model step-by-step. Used in this way, SMC
#' is an alternative to Metropolis-Hastings, which may work better in some
#' settings. In addition, it allows visualization of the learning process.
#'
#' @param data A list of objects of class "BayesMallowsData" returned from
#' [setup_rank_data()]. Each list element is interpreted as the data belonging
#' to a given timepoint.
#' @param initial_values An object of class "BayesMallowsPriorSamples" returned
#' from [sample_prior()].
#' @param model_options An object of class "BayesMallowsModelOptions" returned
#' from [set_model_options()].
#' @param smc_options An object of class "SMCOptions" returned from
#' [set_smc_options()].
#' @param compute_options An object of class "BayesMallowsComputeOptions"
#' returned from [set_compute_options()].
#' @param priors An object of class "BayesMallowsPriors" returned from
#' [set_priors()].
#'
#' @param pfun_estimate Object returned from [estimate_partition_function()].
#' Defaults to \code{NULL}, and will only be used for footrule, Spearman, or
#' Ulam distances when the cardinalities are not available, cf.
#' [get_cardinalities()].
#'
#' @return An object of class BayesMallowsSequential.
#'
#' @details This function is very new, and plotting functions and other tools
#' for visualizing the posterior distribution do not yet work. See the examples
#' for some workarounds.
#'
#' @references \insertAllCited{}
#' @export
#'
#' @family modeling
#'
#' @example /inst/examples/compute_mallows_sequentially_example.R
#'
compute_mallows_sequentially <- function(
    data,
    initial_values,
    model_options = set_model_options(),
    smc_options = set_smc_options(),
    compute_options = set_compute_options(),
    priors = set_priors(),
    pfun_estimate = NULL) {
  validate_class(initial_values, "BayesMallowsPriorSamples")
  # Short-circuiting || ensures vapply() is only reached when data is a list
  if (!is.list(data) ||
      !all(vapply(data, inherits, logical(1), "BayesMallowsData"))) {
    stop("data must be a list of BayesMallowsData objects.")
  }
  if (any(
    vapply(data, function(x) {
      is.null(x$user_ids) || length(x$user_ids) == 0
    }, logical(1))
  )) {
    stop("User IDs must be set.")
  }
  pfun_values <- extract_pfun_values(model_options$metric, data[[1]]$n_items, pfun_estimate)
  alpha_init <- sample(initial_values$alpha, smc_options$n_particles, replace = TRUE)
  rho_init <- initial_values$rho[, sample(ncol(initial_values$rho), smc_options$n_particles, replace = TRUE)]
  ret <- run_smc(
    data = flush(data[[1]]),
    new_data = data,
    model_options = model_options,
    smc_options = smc_options,
    compute_options = compute_options,
    priors = priors,
    initial_values = list(alpha_init = alpha_init, rho_init = rho_init, aug_init = NULL),
    pfun_values = pfun_values,
    pfun_estimate = pfun_estimate
  )
  class(ret) <- "SMCMallows"
  ret
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/compute_mallows_sequentially.R
#' Frequency distribution of the ranking sequences
#'
#' @description Construct the frequency distribution of the distinct ranking
#' sequences from the dataset of individual rankings. This can be of
#' interest in itself, but it can also be used to speed up computation by
#' providing the `observation_frequency` argument to [compute_mallows()].
#'
#' @param rankings A matrix with one ranking in each row.
#' @return Numeric matrix with the distinct rankings in each row and the
#' corresponding frequencies in the last column (column `n_items + 1`).
#' @export
#' @family rank functions
#'
#' @example /inst/examples/compute_observation_frequency_example.R
#'
compute_observation_frequency <- function(rankings) {
  if (!is.matrix(rankings)) stop("rankings must be a matrix")
  # Encode missing ranks as 0 so that paste() produces comparable keys,
  # then restore them to NA in the output.
  rankings[is.na(rankings)] <- 0
  counts <- table(apply(rankings, 1, paste, collapse = ","))
  ret <- cbind(
    do.call(
      rbind,
      lapply(strsplit(names(counts), split = ","), as.numeric)
    ),
    as.numeric(counts)
  )
  ret[ret == 0] <- NA
  ret
}
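# A minimal illustration (hypothetical input, wrapped in a function) of the
# output shape documented above: distinct rankings in the rows, with their
# frequencies in the final column.
demo_compute_observation_frequency <- function() {
  R <- rbind(c(1, 2, 3), c(1, 2, 3), c(3, 1, 2))
  # returns two rows: (1, 2, 3, 2) and (3, 1, 2, 1)
  compute_observation_frequency(R)
}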
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/compute_observation_frequency.R
# #' Compute Posterior Intervals #' #' Compute posterior intervals of parameters of interest. #' #' @param model_fit A model object. #' @param parameter Character string defining which parameter to compute #' posterior intervals for. One of `"alpha"`, `"rho"`, or #' `"cluster_probs"`. Default is `"alpha"`. #' @param level Decimal number in \eqn{[0,1]} specifying the confidence level. #' Defaults to `0.95`. #' @param decimals Integer specifying the number of decimals to include in #' posterior intervals and the mean and median. Defaults to `3`. #' @param ... Other arguments. Currently not used. #' #' @details This function computes both the Highest Posterior Density Interval (HPDI), #' which may be discontinuous for bimodal distributions, and #' the central posterior interval, which is simply defined by the quantiles of the posterior #' distribution. #' #' @references \insertAllCited{} #' #' @example /inst/examples/compute_posterior_intervals_example.R #' #' @export #' @family posterior quantities compute_posterior_intervals <- function(model_fit, ...) { UseMethod("compute_posterior_intervals") } #' @export #' @rdname compute_posterior_intervals compute_posterior_intervals.BayesMallows <- function( model_fit, parameter = c("alpha", "rho", "cluster_probs"), level = 0.95, decimals = 3L, ...) { if (is.null(burnin(model_fit))) { stop("Please specify the burnin with 'burnin(model_fit) <- value'.") } parameter <- match.arg(parameter, c("alpha", "rho", "cluster_probs")) stopifnot(level > 0 && level < 1) posterior_data <- model_fit[[parameter]][ model_fit[[parameter]]$iteration > burnin(model_fit), , drop = FALSE ] if (parameter == "alpha" || parameter == "cluster_probs") { posterior_split <- split(posterior_data, f = posterior_data$cluster) posterior_intervals <- do.call(rbind, lapply(posterior_split, function(x) { data.frame( parameter = parameter, cluster = unique(x$cluster), mean = format(round(mean(x$value), decimals), nsmall = decimals), median = format(round(stats::median(x$value), decimals), nsmall = decimals ), hpdi = compute_continuous_hpdi(x$value, level, decimals), central_interval = compute_central_interval(x$value, level, decimals) ) })) } else if (parameter == "rho") { posterior_split <- split( posterior_data, f = list(posterior_data$item, posterior_data$cluster) ) posterior_intervals <- do.call(rbind, lapply(posterior_split, function(x) { data.frame( parameter = parameter, cluster = unique(x$cluster), item = unique(x$item), mean = round(mean(x$value), 0), median = round(stats::median(x$value), 0), hpdi = compute_discrete_hpdi(x, level), central_interval = compute_central_interval(x$value, level, 0) ) })) } if (model_fit$n_clusters == 1) posterior_intervals$cluster <- NULL row.names(posterior_intervals) <- NULL posterior_intervals } #' @export #' @rdname compute_posterior_intervals compute_posterior_intervals.SMCMallows <- function( model_fit, parameter = c("alpha", "rho"), level = 0.95, decimals = 3L, ...) 
{
  model_fit$compute_options$burnin <- 0
  parameter <- match.arg(parameter, c("alpha", "rho"))
  NextMethod("compute_posterior_intervals")
}

compute_central_interval <- function(values, level, decimals) {
  central <- unique(
    stats::quantile(values,
      probs = c((1 - level) / 2, level + (1 - level) / 2),
      names = FALSE
    )
  )
  interval <- format(round(central, decimals), nsmall = decimals)
  paste0("[", paste(trimws(interval), collapse = ","), "]")
}

# This function is derived from HDInterval::hdiVector
# Copyright: Juat Ngumbang, Mike Meredith, and John Kruschke
compute_continuous_hpdi <- function(values, level, decimals) {
  n <- length(values)
  values <- sort(values)
  lower <- values[1:(n - floor(n * level))]
  upper <- values[(floor(n * level) + 1):n]
  ind <- which.min(upper - lower)
  hpdi <- format(round(c(lower[ind], upper[ind]), decimals), nsmall = decimals)
  paste0("[", paste(trimws(hpdi), collapse = ","), "]")
}

compute_discrete_hpdi <- function(x, level) {
  pct_dat <- aggregate(
    iteration ~ value,
    data = x,
    FUN = function(y) {
      length(y) / nrow(x)
    }
  )
  pct_dat <- pct_dat[order(pct_dat$iteration, decreasing = TRUE), ]
  pct_dat$cumprob <- cumsum(pct_dat$iteration)
  maxind <- min(which(pct_dat$cumprob >= level))
  hpdi <- sort(pct_dat$value[seq(from = 1, to = maxind)])
  # Group the ranks into contiguous runs: a new group starts whenever the
  # gap to the previous rank exceeds one.
  contiguous_regions <- split(hpdi, cumsum(c(1, diff(hpdi) != 1)))
  hpdi <- vapply(contiguous_regions, function(r) {
    paste0("[", paste(unique(range(r)), collapse = ","), "]")
  }, character(1))
  paste(hpdi, collapse = "")
}
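# Usage sketch (illustrative addition, not from the package source):
# compute_discrete_hpdi() takes a data frame with columns `value` and
# `iteration`. With posterior mass on ranks 1, 2, and 5, the 95% HPDI is
# reported as two disjoint intervals.
x <- data.frame(
  value = c(rep(1, 40), rep(2, 40), rep(5, 18), rep(3, 2)),
  iteration = 1:100
)
compute_discrete_hpdi(x, level = 0.95)
#> [1] "[1,2][5]"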
# Source file: R/compute_posterior_intervals.R
#' Distance between a set of rankings and a given rank sequence
#'
#' @description Compute the distance between a matrix of rankings and a rank
#'   sequence.
#' @param rankings A matrix of size \eqn{N \times n_{items}} of
#'   rankings in each row. Alternatively, if \eqn{N} equals 1, `rankings`
#'   can be a vector.
#' @param rho A ranking sequence.
#' @param metric Character string specifying the distance measure to use.
#'   Available options are `"kendall"`, `"cayley"`, `"hamming"`,
#'   `"ulam"`, `"footrule"` and `"spearman"`.
#' @param observation_frequency Vector of observation frequencies of length
#'   \eqn{N}, or of length 1, which means that all ranks are given the same
#'   weight. Defaults to 1.
#' @return A vector of distances according to the given `metric`.
#' @export
#'
#' @details The implementation of Cayley distance is based on a `C++`
#'   translation of `Rankcluster::distCayley()`
#'   \insertCite{Grimonprez2016}{BayesMallows}.
#'
#' @references \insertAllCited{}
#' @family rank functions
#'
#' @example /inst/examples/compute_rank_distance_example.R
compute_rank_distance <- function(
    rankings, rho,
    metric = c("footrule", "spearman", "cayley", "hamming", "kendall", "ulam"),
    observation_frequency = 1) {
  metric <- match.arg(metric, c(
    "footrule", "spearman", "cayley", "hamming", "kendall", "ulam"
  ))
  if (!is.matrix(rankings)) rankings <- matrix(rankings, nrow = 1)
  stopifnot(length(observation_frequency) == 1 ||
    length(observation_frequency) == nrow(rankings))
  if (length(observation_frequency) == 1) {
    observation_frequency <- rep(observation_frequency, nrow(rankings))
  }

  as.numeric(
    get_rank_distance(rankings = t(rankings), rho = rho, metric = metric) *
      observation_frequency
  )
}
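# Usage sketch (illustrative addition, not from the package source): footrule
# distances between two rankings and a consensus rho; the second row equals
# rho and is therefore at distance zero.
rankings <- rbind(c(1, 2, 3, 4), c(2, 1, 3, 4))
compute_rank_distance(rankings, rho = c(2, 1, 3, 4), metric = "footrule")
#> [1] 2 0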
# Source file: R/compute_rank_distance.R
#' True ranking of the weights of 20 potatoes.
#'
#' @family datasets
#' @references \insertRef{liu2019}{BayesMallows}
"potato_true_ranking"

#' @title Potato weights assessed visually
#'
#' @description
#' Result of ranking potatoes by weight, where the assessors were only allowed
#' to inspect the potatoes visually. 12 assessors ranked 20 potatoes.
#'
#' @family datasets
#' @references \insertRef{liu2019}{BayesMallows}
"potato_visual"

#' @title Potato weights assessed by hand
#'
#' @description
#' Result of ranking potatoes by weight, where the assessors were
#' allowed to lift the potatoes. 12 assessors ranked 20 potatoes.
#'
#' @family datasets
#' @references \insertRef{liu2019}{BayesMallows}
"potato_weighing"

#' Beach preferences
#'
#' Example dataset from \insertCite{vitelli2018}{BayesMallows}, Section 6.2.
#'
#' @family datasets
#' @references \insertAllCited{}
"beach_preferences"

#' Sushi rankings
#'
#' Complete rankings of 10 types of sushi from 5000 assessors
#' \insertCite{kamishima2003}{BayesMallows}.
#'
#' @family datasets
#' @references \insertAllCited{}
"sushi_rankings"

#' Simulated clustering data
#'
#' Simulated dataset of 60 complete rankings of five items, with three
#' different clusters.
#'
#' @family datasets
"cluster_data"

#' @title Simulated intransitive pairwise preferences
#'
#' @description Simulated dataset based on the [potato_visual] data. Based on
#'   the rankings in [potato_visual], all n-choose-2 = 190 pairs of items were
#'   sampled from each assessor. With probability .9, the pairwise
#'   preference was in agreement with [potato_visual], and with probability .1,
#'   they were in disagreement. Hence, the data generating mechanism was a
#'   Bernoulli error model \insertCite{crispino2019}{BayesMallows} with
#'   \eqn{\theta = 0.1}.
#'
#' @family datasets
"bernoulli_data"
# Source file: R/data.R
#' @title Estimate Partition Function
#'
#' @description
#' Estimate the logarithm of the partition function of the Mallows rank model.
#' Choose between the importance sampling algorithm described in
#' \insertCite{vitelli2018}{BayesMallows} and the IPFP algorithm for computing
#' an asymptotic approximation described in
#' \insertCite{mukherjee2016}{BayesMallows}. Note that exact partition functions
#' can be computed efficiently for Cayley, Hamming and Kendall distances with
#' any number of items, for footrule distances with up to 50 items, Spearman
#' distance with up to 20 items, and Ulam distance with up to 60 items. This
#' function is thus intended for the complement of these cases. See
#' [get_cardinalities()] for details.
#'
#' @param method Character string specifying the method to use in order to
#'   estimate the logarithm of the partition function. Available options are
#'   `"importance_sampling"` and `"asymptotic"`.
#'
#' @param alpha_vector Numeric vector of \eqn{\alpha} values over which to
#'   compute the importance sampling estimate.
#'
#' @param n_items Integer specifying the number of items.
#'
#' @param metric Character string specifying the distance measure to use.
#'   Available options are `"footrule"` and `"spearman"` when `method =
#'   "asymptotic"` and in addition `"cayley"`, `"hamming"`, `"kendall"`, and
#'   `"ulam"` when `method = "importance_sampling"`.
#'
#' @param n_iterations Integer specifying the number of iterations to use. When
#'   `method = "importance_sampling"`, this is the number of Monte Carlo samples
#'   to generate. When `method = "asymptotic"`, on the other hand, it represents
#'   the number of iterations of the IPFP algorithm.
#'
#' @param K Integer specifying the parameter \eqn{K} in the asymptotic
#'   approximation of the partition function. Only used when `method =
#'   "asymptotic"`. Defaults to 20.
#'
#' @param cl Optional computing cluster used for parallelization, returned from
#'   [parallel::makeCluster()]. Defaults to `NULL`. Only used when `method =
#'   "importance_sampling"`.
#'
#' @return A matrix with two columns and number of rows equal to the degree of
#'   the fitted polynomial approximating the partition function plus one. The
#'   matrix can be supplied to the `pfun_estimate` argument of
#'   [compute_mallows()].
#' #' #' @export #' #' @references \insertAllCited{} #' #' @example /inst/examples/estimate_partition_function_example.R #' @family partition function #' estimate_partition_function <- function( method = c("importance_sampling", "asymptotic"), alpha_vector, n_items, metric, n_iterations, K = 20, cl = NULL) { degree <- min(10, length(alpha_vector)) method <- match.arg(method, c("importance_sampling", "asymptotic")) if (method == "importance_sampling") { metric <- match.arg(metric, c( "footrule", "spearman", "cayley", "hamming", "kendall", "ulam" )) if (!is.null(cl)) { n_iterations_vec <- count_jobs_per_cluster(n_iterations, length(cl)) parallel::clusterExport(cl, c("alpha_vector", "n_items", "metric"), envir = environment() ) parallel::clusterSetRNGStream(cl) estimates <- parallel::parLapply(cl, n_iterations_vec, function(x) { compute_importance_sampling_estimate( alpha_vector = alpha_vector, n_items = n_items, metric = metric, nmc = x ) }) log_z <- rowMeans(do.call(cbind, estimates)) } else { log_z <- as.numeric( compute_importance_sampling_estimate( alpha_vector = alpha_vector, n_items = n_items, metric = metric, nmc = n_iterations ) ) } # Compute the estimate at each discrete alpha value estimate <- data.frame(alpha = alpha_vector, log_z = log_z) } else if (method == "asymptotic") { metric <- match.arg(metric, c("footrule", "spearman")) estimate <- data.frame( alpha = alpha_vector, log_z = as.numeric( asymptotic_partition_function( alpha_vector = alpha_vector, n_items = n_items, metric = metric, K = K, n_iterations = n_iterations ) ) ) } power <- seq(from = 0, to = degree, by = 1) form <- stats::as.formula(paste( "log_z ~ 0 + ", paste("I( alpha^", power, ")", collapse = "+") )) matrix(c(power, stats::lm(form, data = estimate)$coefficients), ncol = 2) } extract_pfun_values <- function(metric, n_items, pfun_estimate) { tryCatch( prepare_partition_function(metric, n_items), error = function(e) { if (is.null(pfun_estimate)) { stop( "Exact partition function not known. Please provide an ", "estimate in argument pfun_estimate." ) } else { return(NULL) } } ) } prepare_partition_function <- function(metric, n_items) { if (metric %in% c("cayley", "hamming", "kendall")) { return(NULL) } tryCatch( return(as.matrix(get_cardinalities(n_items, metric))), error = function(e) { stop( "Partition function not available. ", "Please compute an estimate using estimate_partition_function()." ) } ) }
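# Usage sketch (illustrative addition, not from the package source): the
# footrule cardinalities are only precomputed up to 50 items, so with 60
# items the partition function must be estimated. The small n_iterations
# here only keeps the sketch fast; real analyses need far more Monte Carlo
# samples.
pfun <- estimate_partition_function(
  method = "importance_sampling",
  alpha_vector = seq(from = 0.1, to = 10, by = 0.5),
  n_items = 60, metric = "footrule", n_iterations = 50
)
# pfun is a two-column matrix of polynomial powers and coefficients, which
# can be passed as the pfun_estimate argument of compute_mallows().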
# Source file: R/estimate_partition_function.R
#' Expected value of metrics under a Mallows rank model #' #' @description Compute the expectation of several metrics under the Mallows #' rank model. #' @param alpha Non-negative scalar specifying the scale (precision) parameter #' in the Mallows rank model. #' @param n_items Integer specifying the number of items. #' @param metric Character string specifying the distance measure to use. #' Available options are `"kendall"`, `"cayley"`, `"hamming"`, `"ulam"`, #' `"footrule"`, and `"spearman"`. #' #' @return A scalar providing the expected value of the `metric` under the #' Mallows rank model with distance specified by the `metric` argument. #' @export #' #' @family rank functions #' #' @example /inst/examples/expected_dist_example.R compute_expected_distance <- function( alpha, n_items, metric = c("footrule", "spearman", "cayley", "hamming", "kendall", "ulam")) { metric <- match.arg(metric, c( "footrule", "spearman", "cayley", "hamming", "kendall", "ulam" )) validate_integer(n_items) validate_positive(n_items) if (alpha < 0) { stop("alpha must be a non-negative value") } pfun_values <- prepare_partition_function(metric, n_items) get_expected_distance(alpha, n_items, metric, pfun_values) }
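# Usage sketch (illustrative addition, not from the package source): as alpha
# grows, the Mallows distribution concentrates around rho, so the expected
# distance from the consensus decreases.
compute_expected_distance(alpha = 1, n_items = 5, metric = "kendall")
compute_expected_distance(alpha = 10, n_items = 5, metric = "kendall")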
# Source file: R/expected_dist.R
generate_constraints <- function(preferences, n_items, cl = NULL) {
  if (is.null(preferences)) {
    return(list())
  }
  stopifnot(is.null(cl) || inherits(cl, "cluster"))

  # Turn the preferences dataframe into a list of dataframes,
  # one list element per assessor
  constraints <- split(
    preferences[, c("bottom_item", "top_item"), drop = FALSE],
    preferences$assessor
  )

  if (is.null(cl)) {
    lapply(constraints, constraint_fun, n_items)
  } else {
    parallel::parLapply(cl = cl, X = constraints, fun = constraint_fun, n_items)
  }
}

constraint_fun <- function(x, n_items) {
  # Find out which items are constrained
  constrained_items <- unique(c(x[["bottom_item"]], x[["top_item"]]))

  # Now we must complete the dataframe with the items that do not appear
  items_above <- merge(x[, c("bottom_item", "top_item"), drop = FALSE],
    expand.grid(bottom_item = seq(from = 1, to = n_items, by = 1)),
    by = "bottom_item", all = TRUE
  )

  # Split it into a list, with one element per bottom_item
  items_above <- split(items_above, items_above$bottom_item)

  # For each item, find which items are ranked above it
  items_above <- lapply(items_above, function(x) {
    res <- unique(x[["top_item"]])
    res <- res[!is.na(res)]
  })

  # Now we must complete the dataframe with the items that do not appear
  items_below <- merge(x[, c("bottom_item", "top_item"), drop = FALSE],
    expand.grid(top_item = seq(from = 1, to = n_items, by = 1)),
    by = "top_item", all = TRUE
  )

  # Split it into a list, with one element per top_item
  items_below <- split(items_below, items_below$top_item)

  # For each item, find which items are ranked below it
  items_below <- lapply(items_below, function(x) {
    res <- unique(x[["bottom_item"]])
    res <- res[!is.na(res)]
  })

  return(
    list(
      constrained_items = constrained_items,
      items_above = items_above,
      items_below = items_below
    )
  )
}
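# Usage sketch (illustrative addition, not from the package source): one
# assessor states that item 1 beats items 2 and 5. The resulting constraint
# list records, per item, which items must be ranked above and below it,
# which restricts the rank proposals made for this assessor during data
# augmentation.
prefs <- data.frame(
  assessor = c(1, 1), bottom_item = c(2, 5), top_item = c(1, 1)
)
constr <- generate_constraints(prefs, n_items = 5)
constr[[1]]$constrained_items
#> [1] 2 5 1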
# Source file: R/generate_constraints.R
splitpref <- function(preferences) {
  split(
    preferences[, c("bottom_item", "top_item"), drop = FALSE],
    preferences$assessor
  )
}

generate_initial_ranking <- function(
    preferences, n_items, cl = NULL, shuffle_unranked = FALSE,
    random = FALSE, random_limit = 8L) {
  UseMethod("generate_initial_ranking")
}

#' @export
generate_initial_ranking.BayesMallowsTransitiveClosure <- function(
    preferences, n_items, cl = NULL, shuffle_unranked = FALSE,
    random = FALSE, random_limit = 8L) {
  stopifnot(is.null(cl) || inherits(cl, "cluster"))
  if (n_items > random_limit && random) {
    stop(paste(
      "Number of items exceeds the limit for generation of random permutations,\n",
      "modify the random_limit argument to override this.\n"
    ))
  }

  prefs <- splitpref(preferences)

  if (is.null(cl)) {
    do.call(rbind, lapply(
      prefs,
      function(x, y, sr, r) create_ranks(as.matrix(x), y, sr, r),
      n_items, shuffle_unranked, random
    ))
  } else {
    do.call(rbind, parallel::parLapply(
      cl = cl, X = prefs,
      fun = function(x, y, sr, r) create_ranks(as.matrix(x), y, sr, r),
      n_items, shuffle_unranked, random
    ))
  }
}

#' @export
generate_initial_ranking.BayesMallowsIntransitive <- function(
    preferences, n_items, cl = NULL, shuffle_unranked = FALSE,
    random = FALSE, random_limit = 8L) {
  n_assessors <- length(unique(preferences$assessor))
  rankings <- replicate(n_assessors, sample(x = n_items, size = n_items),
    simplify = "numeric"
  )
  rankings <- matrix(rankings, ncol = n_items, nrow = n_assessors, byrow = TRUE)
  rankings
}

create_ranks <- function(mat, n_items, shuffle_unranked, random) {
  if (!random) {
    g <- igraph::graph_from_edgelist(as.matrix(mat))
    g <- as.integer(igraph::topo_sort(g))

    all_items <- seq(from = 1, to = n_items, by = 1)

    if (!shuffle_unranked) {
      # Add unranked elements outside of the range at the end
      g_final <- c(g, setdiff(all_items, g))
    } else {
      ranked_items <- unique(c(mat))
      unranked_items <- setdiff(all_items, ranked_items)
      # Indices of ranked elements in final vector
      idx_ranked <- sort(sample(length(all_items), length(ranked_items)))
      g_final <- rep(NA, n_items)
      g_final[idx_ranked] <- g[g %in% ranked_items]
      g_final[is.na(g_final)] <- unranked_items[sample(length(unranked_items))]
    }
    # Convert from ordering to ranking
    return(create_ranking(rev(g_final)))
  } else {
    graph <- list()
    for (i in seq_len(n_items)) {
      graph[[i]] <- unique(mat[mat[, "top_item"] == i, "bottom_item"])
    }
    indegree_init <- rep(0, n_items)
    indegree <- table(unlist(graph))
    indegree_init[as.integer(names(indegree))] <- indegree
    attr(graph, "indegree") <- indegree_init

    e1 <- new.env()
    assign("x", list(), envir = e1)
    assign("num", 0L, envir = e1)
    all_topological_sorts(graph, n_items, e1)
    return(get("x", envir = e1)[[sample(get("num", envir = e1), 1)]])
  }
}
# Source file: R/generate_initial_ranking.R
generate_transitive_closure <- function(preferences, cl = NULL) { if (is.null(preferences)) { return(NULL) } stopifnot(is.null(cl) || inherits(cl, "cluster")) if (!is.numeric(preferences$assessor)) { stop("assessor column in preferences must be numeric") } prefs <- splitpref(preferences) if (is.null(cl)) { lapplyfun <- lapply } else { lapplyfun <- function(X, FUN, ...) { parallel::parLapply(cl = cl, X = X, fun = FUN, ...) } } prefs <- lapplyfun(seq_along(prefs), function(i) { cbind( assessor = as.numeric(names(prefs)[[i]]), .generate_transitive_closure(as.matrix(prefs[[i]])) ) }) prefs <- do.call(rbind.data.frame, prefs) # Check if there are any inconsistencies check <- merge(prefs, prefs, by.x = c("assessor", "bottom_item", "top_item"), by.y = c("assessor", "top_item", "bottom_item") ) if (nrow(check) > 0) { class(preferences) <- c("BayesMallowsIntransitive", class(preferences)) return(preferences) } else { class(prefs) <- c("BayesMallowsTransitiveClosure", class(prefs)) return(prefs) } } .generate_transitive_closure <- function(mat) { # This line was an answer to StackOverflow question 51794127 my_set <- do.call(sets::set, apply(mat, 1, sets::as.tuple)) r <- relations::endorelation(graph = my_set) tc <- relations::transitive_closure(r) incidence <- relations::relation_incidence(tc) new_mat <- which(incidence == 1, arr.ind = TRUE) row_inds <- as.numeric(gsub("[^0-9]+", "", rownames(incidence))) result <- data.frame( bottom_item = row_inds[new_mat[, 1, drop = FALSE]], top_item = row_inds[new_mat[, 2, drop = FALSE]] ) return(result) }
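# Usage sketch (illustrative addition, not from the package source): with the
# stated preferences "1 beats 2" and "2 beats 3", the returned closure also
# contains the implied pair "1 beats 3".
prefs <- data.frame(
  assessor = c(1, 1), bottom_item = c(2, 3), top_item = c(1, 2)
)
generate_transitive_closure(prefs)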
# Source file: R/generate_transitive_closure.R
#' @title Get cardinalities for each distance
#'
#' @description The partition function for the Mallows model can be defined in
#'   a computationally efficient manner as
#'   \deqn{Z_{n}(\alpha) = \sum_{d_{m} \in \mathcal{D}_{n}} N_{m,n}
#'   e^{-(\alpha/n) d_{m}}.}
#'   In this equation, \eqn{\mathcal{D}_{n}} is a set containing all possible
#'   distances at the given number of items, and \eqn{d_{m}} is one element of
#'   this set. Finally, \eqn{N_{m,n}} is the number of possible configurations
#'   of the items that give the particular distance. See
#'   \insertCite{irurozki2016;textual}{BayesMallows},
#'   \insertCite{vitelli2018;textual}{BayesMallows}, and
#'   \insertCite{crispino2023;textual}{BayesMallows} for details.
#'
#'   For footrule distance, the cardinalities come from entry A062869 in the
#'   On-Line Encyclopedia of Integer Sequences (OEIS)
#'   \insertCite{oeis}{BayesMallows}. For Spearman distance, they come from
#'   entry A175929, and for Ulam distance from entry A126065.
#'
#' @param n_items Number of items.
#' @param metric Distance function, one of "footrule", "spearman", or "ulam".
#'
#' @return A dataframe with two columns, `distance` which contains each
#'   distance in the support set at the current number of items, i.e.,
#'   \eqn{d_{m}}, and `value` which contains the number of configurations at
#'   each particular distance, i.e., \eqn{N_{m,n}}.
#' @export
#'
#' @references \insertAllCited{}
#'
#' @example inst/examples/get_cardinalities_example.R
#' @family partition function
get_cardinalities <- function(
    n_items,
    metric = c("footrule", "spearman", "ulam")) {
  metric <- match.arg(metric, c("footrule", "spearman", "ulam"))
  if (metric == "footrule") {
    if (n_items > length(footrule_cardinalities)) {
      stop("Not available for requested number of items.")
    }
    as.data.frame(footrule_cardinalities[[n_items]])
  } else if (metric == "spearman") {
    if (n_items > length(spearman_cardinalities)) {
      stop("Not available for requested number of items.")
    }
    as.data.frame(spearman_cardinalities[[n_items]])
  } else if (metric == "ulam") {
    if (n_items > length(ulam_cardinalities)) {
      stop("Not available for requested number of items.")
    }
    as.data.frame(ulam_cardinalities[[n_items]])
  }
}
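# Usage sketch (illustrative addition, not from the package source): for
# three items under the footrule, distance 0 is attained by one permutation
# (the identity), distance 2 by two permutations, and distance 4 by three.
# The log-partition function at a given alpha then follows directly from the
# formula in the documentation above.
dat <- get_cardinalities(n_items = 3, metric = "footrule")
alpha <- 2
log(sum(dat$value * exp(-alpha / 3 * dat$distance)))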
# Source file: R/get_cardinalities.R
#' Likelihood and log-likelihood evaluation for a Mallows mixture model
#'
#' @description Compute either the likelihood or the log-likelihood value of
#'   the Mallows mixture model parameters for a dataset of complete rankings.
#' @param rho A matrix of size `n_clusters x n_items` whose rows are
#'   permutations of the first n_items integers corresponding to the modal
#'   rankings of the Mallows mixture components.
#' @param alpha A vector of `n_clusters` non-negative scalars specifying the
#'   scale (precision) parameters of the Mallows mixture components.
#' @param weights A vector of `n_clusters` non-negative scalars specifying
#'   the mixture weights.
#' @param metric Character string specifying the distance measure to use.
#'   Available options are `"kendall"`, `"cayley"`, `"hamming"`,
#'   `"ulam"`, `"footrule"`, and `"spearman"`.
#' @param rankings A matrix with observed rankings in each row.
#' @param observation_frequency A vector of observation frequencies (weights)
#'   to apply to each row in `rankings`. This can speed up computation if a
#'   large number of assessors share the same rank pattern. Defaults to
#'   `NULL`, which means that each row of `rankings` is multiplied by 1. If
#'   provided, `observation_frequency` must have the same number of elements
#'   as there are rows in `rankings`, and `rankings` cannot be `NULL`.
#' @param log A logical; if TRUE, the log-likelihood value is returned,
#'   otherwise its exponential. Default is `TRUE`.
#'
#' @return The likelihood or the log-likelihood value corresponding to one or
#'   more observed complete rankings under the Mallows mixture rank model with
#'   distance specified by the `metric` argument.
#' @export
#'
#' @example inst/examples/get_mallows_loglik_example.R
#' @family rank functions
#'
get_mallows_loglik <- function(
    rho, alpha, weights,
    metric = c("footrule", "spearman", "cayley", "hamming", "kendall", "ulam"),
    rankings, observation_frequency = NULL, log = TRUE) {
  metric <- match.arg(metric, c(
    "footrule", "spearman", "cayley", "hamming", "kendall", "ulam"
  ))
  if (!is.matrix(rankings)) rankings <- matrix(rankings, nrow = 1)

  if (!is.null(observation_frequency)) {
    if (nrow(rankings) != length(observation_frequency)) {
      stop(
        "observation_frequency must be ",
        "of same length as the number of rows in rankings"
      )
    }
  } else {
    observation_frequency <- rep(1, nrow(rankings))
  }

  if (!is.matrix(rho)) rho <- matrix(rho, nrow = 1)
  n_clusters <- length(weights)
  n_items <- ncol(rankings)
  N <- sum(observation_frequency)
  pfun_values <- prepare_partition_function(metric, n_items)

  loglik <- vapply(
    X = seq_len(n_clusters),
    FUN = function(g) {
      -(alpha[g] / n_items * sum(get_rank_distance(
        rankings = t(rankings),
        rho = rho[g, ],
        metric = metric
      ) * observation_frequency) +
        N * get_partition_function(
          alpha = alpha[g], n_items = n_items,
          metric = metric, pfun_values
        )) * weights[[g]]
    },
    FUN.VALUE = numeric(1)
  )

  if (!log) {
    exp(sum(loglik))
  } else {
    sum(loglik)
  }
}
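# Usage sketch (illustrative addition, not from the package source):
# log-likelihood of two complete rankings under a single-cluster Mallows
# model with Kendall distance, for which the partition function has a closed
# form.
get_mallows_loglik(
  rho = c(1, 2, 3, 4), alpha = 2, weights = 1, metric = "kendall",
  rankings = rbind(c(1, 2, 3, 4), c(2, 1, 3, 4))
)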
# Source file: R/get_mallows_loglik.R
#' Get transitive closure #' #' A simple method for showing any transitive closure computed by #' [setup_rank_data()]. #' #' @param rank_data An object of class `"BayesMallowsData"` returned from #' [setup_rank_data]. #' #' @return A dataframe with transitive closure, if there is any. #' @export #' #' @family preprocessing #' #' @examples #' # Original beach preferences #' head(beach_preferences) #' dim(beach_preferences) #' # We then create a rank data object #' dat <- setup_rank_data(preferences = beach_preferences) #' # The transitive closure contains additional filled-in preferences implied #' # by the stated preferences. #' head(get_transitive_closure(dat)) #' dim(get_transitive_closure(dat)) #' get_transitive_closure <- function(rank_data) { if (inherits(rank_data$preferences, "BayesMallowsTransitiveClosure")) { rank_data$preferences } else { message("Intransitive comparisons, no closure exists.") } }
# Source file: R/get_transitive_closure.R
#' Heat plot of posterior probabilities #' #' Generates a heat plot with items in their consensus ordering along the #' horizontal axis and ranking along the vertical axis. The color denotes #' posterior probability. #' #' @param model_fit An object of type `BayesMallows`, returned from #' [compute_mallows()]. #' #' @param ... Additional arguments passed on to other methods. In particular, #' `type = "CP"` or `type = "MAP"` can be passed on to #' [compute_consensus()] to determine the order of items along the #' horizontal axis. #' #' @return A ggplot object. #' @export #' #' @example /inst/examples/heat_plot_example.R #' @family posterior quantities heat_plot <- function(model_fit, ...) { if (is.null(burnin(model_fit))) { stop("Please specify the burnin with 'burnin(model_fit) <- value'.") } if (model_fit$n_clusters != 1) { stop("heat_plot only works for a single cluster") } item_order <- unique(compute_consensus(model_fit, ...)[["item"]]) posterior_ranks <- model_fit$rho[ model_fit$rho$iteration > burnin(model_fit), , drop = FALSE ] posterior_ranks$probability <- 1 heatplot_data <- aggregate(posterior_ranks[, "probability", drop = FALSE], by = list( cluster = posterior_ranks$cluster, item = posterior_ranks$item, value = posterior_ranks$value ), FUN = function(x) sum(x) / length(unique(posterior_ranks$iteration)) ) heatplot_data$item <- factor(heatplot_data$item, levels = item_order) heatplot_data <- heatplot_data[order(heatplot_data$item), , drop = FALSE] heatplot_expanded <- expand.grid( cluster = unique(heatplot_data$cluster), item = unique(heatplot_data$item), value = unique(heatplot_data$value) ) heatplot_expanded <- merge( heatplot_expanded, heatplot_data, by = c("cluster", "item", "value"), all.x = TRUE ) heatplot_expanded$probability[is.na(heatplot_expanded$probability)] <- 0 ggplot2::ggplot( heatplot_expanded, ggplot2::aes(x = .data$item, y = .data$value, fill = .data$probability) ) + ggplot2::geom_tile() + ggplot2::labs(fill = "Probability") + ggplot2::xlab("Item") + ggplot2::ylab("Rank") }
# Source file: R/heat_plot.R
.onUnload <- function(libpath) {
  library.dynam.unload("BayesMallows", libpath)
}

#' Check if a vector is a permutation
#' @param vec a vector
#' @return TRUE if vec is a permutation
#' @noRd
validate_permutation <- function(vec) {
  if (!any(is.na(vec))) {
    return(all(sort(vec) == seq_along(vec)))
  } else if (all(is.na(vec))) {
    return(TRUE)
  } else {
    return(all(vec[!is.na(vec)] <= length(vec)) &&
      all(vec[!is.na(vec)] >= 1) && !any(duplicated(vec[!is.na(vec)])))
  }
}

count_jobs_per_cluster <- function(n_iterations, n_clusters) {
  # Split n_iterations into each cluster
  n_iterations_vec <- rep(floor(n_iterations / n_clusters), n_clusters)
  i <- 1
  # Distribute the remainder one iteration at a time, moving to the next
  # worker after each addition. Without the increment, the whole remainder
  # would pile up on the first worker.
  while (sum(n_iterations_vec) != n_iterations) {
    n_iterations_vec[i] <- n_iterations_vec[i] + 1
    i <- i + 1
    if (i > n_clusters) break
  }
  n_iterations_vec
}

prepare_cluster <- function(cl, varlist) {
  parallel::clusterExport(
    cl = cl,
    varlist = varlist,
    envir = parent.frame()
  )
  parallel::clusterSetRNGStream(cl)
  function(X, FUN, ...) {
    parallel::parLapply(cl = cl, X = X, fun = FUN, ...)
  }
}
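# Usage sketch (illustrative addition, not from the package source): with the
# increment fix in count_jobs_per_cluster() above, the remainder is spread
# round-robin over the workers.
count_jobs_per_cluster(n_iterations = 10, n_clusters = 3)
#> [1] 4 3 3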
# Source file: R/misc.R
#' Plot Posterior Distributions
#'
#' Plot posterior distributions of the parameters of the Mallows Rank model.
#'
#' @param x An object of type `BayesMallows`, returned from
#'   [compute_mallows()].
#'
#' @param parameter Character string defining the parameter to plot. Available
#'   options are `"alpha"`, `"rho"`, `"cluster_probs"`,
#'   `"cluster_assignment"`, and `"theta"`.
#'
#' @param items The items to study in the diagnostic plot for `rho`. Either
#'   a vector of item names, corresponding to `x$data$items` or a
#'   vector of indices. If NULL, five items are selected randomly.
#'   Only used when `parameter = "rho"`.
#'
#' @param ... Other arguments passed to `plot` (not used).
#'
#' @export
#' @importFrom stats density
#'
#' @example /inst/examples/plot.BayesMallows_example.R
#' @family posterior quantities
plot.BayesMallows <- function(x, parameter = "alpha", items = NULL, ...) {
  parameter <- match.arg(
    parameter,
    c("alpha", "rho", "cluster_probs", "cluster_assignment", "theta")
  )
  if (is.null(burnin(x))) {
    stop("Please specify the burnin with 'burnin(x) <- value'.")
  }

  if (parameter == "alpha") {
    plot_alpha(x)
  } else if (parameter == "rho") {
    plot_rho(x, items)
  } else if (parameter == "cluster_probs") {
    df <- x$cluster_probs[x$cluster_probs$iteration > burnin(x), , drop = FALSE]
    ggplot2::ggplot(df, ggplot2::aes(x = .data$value)) +
      ggplot2::geom_density() +
      ggplot2::xlab(expression(tau[c])) +
      ggplot2::ylab("Posterior density") +
      ggplot2::facet_wrap(~ .data$cluster)
  } else if (parameter == "cluster_assignment") {
    if (is.null(x$cluster_assignment)) {
      stop("No cluster assignments.")
    }
    df <- assign_cluster(x, soft = FALSE, expand = FALSE)
    df <- df[order(df$map_cluster), ]
    assessor_order <- df$assessor

    df <- assign_cluster(x, soft = TRUE, expand = TRUE)
    df$assessor <- factor(df$assessor, levels = assessor_order)

    ggplot2::ggplot(df, ggplot2::aes(.data$assessor, .data$cluster)) +
      ggplot2::geom_tile(ggplot2::aes(fill = .data$probability)) +
      ggplot2::theme(
        legend.title = ggplot2::element_blank(),
        axis.title.y = ggplot2::element_blank(),
        axis.ticks.x = ggplot2::element_blank(),
        axis.text.x = ggplot2::element_blank()
      ) +
      ggplot2::xlab(paste0(
        "Assessors (", min(assessor_order), " - ", max(assessor_order), ")"
      ))
  } else if (parameter == "theta") {
    if (is.null(x$theta)) {
      stop("Please run compute_mallows with error_model = 'bernoulli'.")
    }
    df <- x$theta[x$theta$iteration > burnin(x), , drop = FALSE]

    p <- ggplot2::ggplot(df, ggplot2::aes(x = .data$value)) +
      ggplot2::geom_density() +
      ggplot2::xlab(expression(theta)) +
      ggplot2::ylab("Posterior density")
    return(p)
  }
}

#' @title Plot SMC Posterior Distributions
#' @description Plot posterior distributions of SMC-Mallows parameters.
#' @param x An object of type \code{SMCMallows}.
#' @param parameter Character string defining the parameter to plot. Available
#'   options are \code{"alpha"} and \code{"rho"}.
#' @param items Either a vector of item names, or a vector of indices. If NULL,
#'   five items are selected randomly.
#' @param ... Other arguments passed to \code{\link[base]{plot}} (not used).
#' @return A plot of the posterior distributions
#' @export
#' @family posterior quantities
#' @example /inst/examples/update_mallows_example.R
#'
plot.SMCMallows <- function(x, parameter = "alpha", items = NULL, ...)
{ parameter <- match.arg(parameter, c("alpha", "rho")) if (parameter == "alpha") { plot_alpha(x) } else if (parameter == "rho") { plot_rho(x, items) } else { stop("parameter must be either 'alpha' or 'rho'.") } } plot_alpha <- function(x) { plot_dat <- x$alpha[x$alpha$iteration > burnin(x), , drop = FALSE] p <- ggplot2::ggplot(plot_dat, ggplot2::aes(x = .data$value)) + ggplot2::geom_density() + ggplot2::xlab(expression(alpha)) + ggplot2::ylab("Posterior density") if (x$n_clusters > 1) { p <- p + ggplot2::facet_wrap(~ .data$cluster, scales = "free_x") } p } plot_rho <- function(x, items) { if (is.null(items) && x$data$n_items > 5) { message("Items not provided by user. Picking 5 at random.") items <- sample.int(x$data$n_items, 5) } else if (is.null(items) && x$data$n_items > 0) { items <- seq.int(from = 1, to = x$data$n_items) } else if (!is.null(items)) { if (!all(items %in% x$data$items) && !all(items %in% seq_along(x$data$items))) { stop("Unknown items.") } } if (!is.character(items)) { items <- x$data$items[items] } df <- x$rho[x$rho$iteration > burnin(x) & x$rho$item %in% items, , drop = FALSE] df1 <- aggregate(iteration ~ item + cluster + value, data = df, FUN = length) df1$pct <- df1$iteration / length(unique(df$iteration)) # Finally create the plot p <- ggplot2::ggplot(df1, ggplot2::aes(x = factor(.data$value), y = .data$pct)) + ggplot2::geom_col() + ggplot2::xlab("rank") + ggplot2::ylab("Posterior probability") if (x$n_clusters == 1) { p <- p + ggplot2::facet_wrap(~ .data$item) } else { p <- p + ggplot2::facet_wrap(~ .data$cluster + .data$item) } return(p) }
# Source file: R/plot.R
#' Plot Within-Cluster Sum of Distances
#'
#' Plot the within-cluster sum of distances from the corresponding cluster
#' consensus for different numbers of clusters. This function is useful for
#' selecting the number of mixture components.
#'
#' @param ... One or more objects returned from [compute_mallows()], separated
#'   by comma, or a list of such objects. Typically, each object has been run
#'   with a different number of mixtures, as specified in the `n_clusters`
#'   argument to [compute_mallows()]. Alternatively an object returned from
#'   [compute_mallows_mixtures()].
#'
#' @return A boxplot with the number of clusters on the horizontal axis and
#'   the within-cluster sum of distances on the vertical axis.
#'
#' @export
#'
#' @example /inst/examples/compute_mallows_mixtures_example.R
#' @family posterior quantities
plot_elbow <- function(...) {
  models <- list(...)
  if (length(models) == 1 && !(inherits(models[[1]], "BayesMallows"))) {
    models <- models[[1]]
  }

  df <- do.call(rbind, lapply(models, function(x) {
    stopifnot(inherits(x, "BayesMallows"))
    if (is.null(burnin(x))) {
      stop("Please specify burnin with 'burnin(model_fit) <- value'.")
    }
    if (length(unique(x$within_cluster_distance$iteration)) !=
      x$compute_options$nmc) {
      stop("To get an elbow plot, set include_wcd=TRUE in compute_mallows")
    }

    df <- x$within_cluster_distance[
      x$within_cluster_distance$iteration > burnin(x), ,
      drop = FALSE
    ]
    # Need to sum the within-cluster distances across clusters, for each
    # iteration
    df <- aggregate(
      x = list(value = df$value),
      by = list(iteration = df$iteration),
      FUN = sum
    )
    df$n_clusters <- x$n_clusters
    return(df)
  }))

  ggplot2::ggplot(
    df,
    ggplot2::aes(x = as.factor(.data$n_clusters), y = .data$value)
  ) +
    ggplot2::geom_boxplot() +
    ggplot2::xlab("Number of clusters") +
    ggplot2::ylab("Within-cluster sum of distances")
}
# Source file: R/plot_elbow.R
#' Plot Top-k Rankings with Pairwise Preferences #' #' Plot the posterior probability, per item, of being ranked among the #' top-\eqn{k} for each assessor. This plot is useful when the data take the #' form of pairwise preferences. #' #' @param model_fit An object of type `BayesMallows`, returned from #' [compute_mallows()]. #' #' @param k Integer specifying the k in top-\eqn{k}. #' #' @export #' #' @example /inst/examples/plot_top_k_example.R #' @family posterior quantities plot_top_k <- function(model_fit, k = 3) { validate_top_k(model_fit) rankings <- .predict_top_k(model_fit, k = k) ggplot2::ggplot(rankings, ggplot2::aes(.data$assessor, .data$item)) + ggplot2::geom_tile(ggplot2::aes(fill = .data$prob), colour = "white") + ggplot2::xlab("Assessor") + ggplot2::ylab("Item") + ggplot2::labs(fill = "Prob.") }
# Source file: R/plot_top_k.R
#' Predict Top-k Rankings with Pairwise Preferences
#'
#' Predict the posterior probability, per item, of being ranked among the
#' top-\eqn{k} for each assessor. This is useful when the data take the form of
#' pairwise preferences.
#'
#' @param model_fit An object of type `BayesMallows`, returned from
#'   [compute_mallows()].
#'
#' @param k Integer specifying the k in top-\eqn{k}.
#'
#' @export
#'
#' @return A dataframe with columns `assessor`, `item`, and
#'   `prob`, where each row states the probability that the given assessor
#'   rates the given item among top-\eqn{k}.
#'
#' @example /inst/examples/plot_top_k_example.R
#' @family posterior quantities
predict_top_k <- function(model_fit, k = 3) {
  validate_top_k(model_fit)
  .predict_top_k(model_fit, k)
}

.predict_top_k <- function(model_fit, k) {
  rankings <- model_fit$augmented_data[
    model_fit$augmented_data$iteration > burnin(model_fit) &
      model_fit$augmented_data$value <= k, ,
    drop = FALSE
  ]

  n_samples <- length(unique(rankings$iteration))
  rankings <- aggregate(
    list(prob = rankings$iteration),
    by = list(assessor = rankings$assessor, item = rankings$item),
    FUN = function(x) length(x) / n_samples, drop = FALSE
  )
  rankings$prob[is.na(rankings$prob)] <- 0
  rankings
}

validate_top_k <- function(model_fit) {
  if (is.null(burnin(model_fit))) {
    stop("Please specify the burnin with 'burnin(model_fit) <- value'.")
  }
  if (!exists("augmented_data", model_fit)) {
    stop(paste(
      "model_fit must have element augmented_data. Please set",
      "save_aug = TRUE in set_compute_options() in order to create",
      "a top-k plot."
    ))
  }
}
# Source file: R/predict_top_k.R
#' Print Method for BayesMallows Objects
#'
#' The default print method for a `BayesMallows` object.
#'
#' @param x An object of type `BayesMallows`, returned from
#'   [compute_mallows()].
#'
#' @param ... Other arguments passed to `print` (not used).
#'
#' @export
#'
#' @family posterior quantities
print.BayesMallows <- function(x, ...) {
  cat(
    "Bayesian Mallows Model with", x$data$n_items, "items and",
    x$data$n_assessors, "assessors.\n"
  )
  cat("Use functions assess_convergence() or plot() to visualize the object.")
}

#' @rdname print.BayesMallows
#' @export
print.BayesMallowsMixtures <- function(x, ...) {
  cat("Bayesian Mallows Mixtures Model with", length(x), "clusters.\n")
  cat("Use functions assess_convergence() or plot_elbow() to analyze.\n")
}

#' @rdname print.BayesMallows
#' @export
print.SMCMallows <- function(x, ...) {
  cat(
    "Bayesian Mallows Model with", x$data$n_items,
    "items fitted with sequential Monte Carlo.\n"
  )
  cat("Use plot() to visualize the object.")
}
# Source file: R/print.R
#' Convert between ranking and ordering.
#'
#' `create_ranking` takes a vector or matrix of ordered items `orderings` and
#' returns a corresponding vector or matrix of ranked items.
#' `create_ordering` takes a vector or matrix of rankings `rankings` and
#' returns a corresponding vector or matrix of ordered items.
#'
#' @param orderings A vector or matrix of ordered items. If a matrix, it
#'   should be of size N times n, where N is the number of samples and n is
#'   the number of items.
#' @param rankings A vector or matrix of ranked items. If a matrix, it should
#'   be N times n, where N is the number of samples and n is the number of
#'   items.
#'
#' @return A vector or matrix of rankings (for `create_ranking`) or orderings
#'   (for `create_ordering`). Missing orderings coded as `NA` are propagated
#'   into corresponding missing ranks and vice versa.
#'
#' @family rank functions
#'
#' @examples
#' # A vector of ordered items.
#' orderings <- c(5, 1, 2, 4, 3)
#' # Get ranks
#' rankings <- create_ranking(orderings)
#' # rankings is c(2, 3, 5, 4, 1)
#' # Finally we convert it back to an ordering.
#' orderings_2 <- create_ordering(rankings)
#' # Confirm that we get back what we had
#' all.equal(orderings, orderings_2)
#'
#' # Next, we have a matrix with N = 10 samples
#' # and n = 4 items
#' set.seed(21)
#' N <- 10
#' n <- 4
#' orderings <- t(replicate(N, sample.int(n)))
#' # Convert the ordering to ranking
#' rankings <- create_ranking(orderings)
#' # Now we try to convert it back to an ordering.
#' orderings_2 <- create_ordering(rankings)
#' # Confirm that we get back what we had
#' all.equal(orderings, orderings_2)
#' @export
create_ranking <- function(orderings) {
  # Check that it is a permutation
  if (is.vector(orderings)) {
    stopifnot(validate_permutation(orderings))
    return(order(orderings))
  } else if (is.matrix(orderings)) {
    n_items <- ncol(orderings)

    # Convert to list, for easier functional programming
    orderings <- split(orderings, f = seq_len(nrow(orderings)))

    # Check that matrix contains permutations
    check <- lapply(orderings, validate_permutation)
    if (!Reduce(`&&`, check)) {
      stop(paste(
        "orderings must contain proper permutations. Problem row(s):",
        which(!unlist(check))
      ))
    }

    # Convert each ordering to ranking, taking special care of missing values
    rankings <- lapply(orderings, function(x) {
      # Find out which items are missing
      missing_items <- setdiff(1:n_items, x)
      # Possible rankings for each item
      candidates <- matrix(1:n_items, ncol = n_items, nrow = n_items)
      candidates[, missing_items] <- NA_real_
      # Logical matrix specifying which item to pick
      inds <- outer(X = x, Y = 1:n_items, FUN = "==")
      inds[1, colSums(inds, na.rm = TRUE) == 0] <- TRUE
      inds[is.na(inds)] <- FALSE
      # Extract the correct items
      candidates[inds]
    })
    return(t(matrix(unlist(rankings), ncol = length(rankings))))
  } else {
    stop("orderings must be a vector or matrix")
  }
}

#' @rdname create_ranking
#' @export
create_ordering <- function(rankings) {
  # Ranking and ordering are inverse permutations of each other, so the same
  # operation converts in both directions.
  create_ranking(rankings)
}
# Source file: R/rank_conversion.R
#' Random Samples from the Mallows Rank Model
#'
#' Generate random samples from the Mallows Rank Model
#' \insertCite{mallows1957}{BayesMallows} with consensus ranking \eqn{\rho} and
#' scale parameter \eqn{\alpha}. The samples are obtained by running the
#' Metropolis-Hastings algorithm described in Appendix C of
#' \insertCite{vitelli2018;textual}{BayesMallows}.
#'
#' @param rho0 Vector specifying the latent consensus ranking in the Mallows
#'   rank model.
#' @param alpha0 Scalar specifying the scale parameter in the Mallows rank
#'   model.
#' @param n_samples Integer specifying the number of random samples to
#'   generate. When `diagnostic = TRUE`, this number must be larger than 1.
#' @param leap_size Integer specifying the step size of the leap-and-shift
#'   proposal distribution.
#' @param metric Character string specifying the distance measure to use.
#'   Available options are `"footrule"` (default), `"spearman"`,
#'   `"cayley"`, `"hamming"`, `"kendall"`, and `"ulam"`.
#'   See also the `rmm` function in the `PerMallows` package
#'   \insertCite{irurozki2016}{BayesMallows} for sampling from the Mallows
#'   model with Cayley, Hamming, Kendall, and Ulam distances.
#' @param diagnostic Logical specifying whether to output convergence
#'   diagnostics. If `TRUE`, a diagnostic plot is printed, together with
#'   the returned samples.
#' @param burnin Integer specifying the number of iterations to discard as
#'   burn-in. Defaults to 1000 when `diagnostic = FALSE`, else to 0.
#' @param thinning Integer specifying the number of MCMC iterations to perform
#'   between each time a random rank vector is sampled. Defaults to 1000 when
#'   `diagnostic = FALSE`, else to 1.
#' @param items_to_plot Integer vector used if `diagnostic = TRUE`, in
#'   order to specify the items to plot in the diagnostic output. If not
#'   provided, 5 items are picked at random.
#' @param max_lag Integer specifying the maximum lag to use in the computation
#'   of autocorrelation. Defaults to 1000L. This argument is passed to
#'   `stats::acf`. Only used when `diagnostic = TRUE`.
#'
#' @references \insertAllCited{}
#'
#' @export
#'
#' @example /inst/examples/sample_mallows_example.R
#' @family rank functions
#'
sample_mallows <- function(
    rho0, alpha0, n_samples,
    leap_size = max(1L, floor(n_items / 5)),
    metric = "footrule", diagnostic = FALSE,
    burnin = ifelse(diagnostic, 0, 1000),
    thinning = ifelse(diagnostic, 1, 1000),
    items_to_plot = NULL, max_lag = 1000L) {
  if (!(validate_permutation(rho0) && sum(is.na(rho0)) == 0)) {
    stop("rho0 must be a proper ranking with no missing values.")
  }

  if (diagnostic && n_samples == 1) {
    stop("Must have more than one sample to create diagnostic plots")
  } else if (n_samples <= 0) {
    stop("n_samples must be positive.")
  }

  n_items <- length(rho0)

  if (diagnostic) {
    internal_burnin <- 0
    internal_thinning <- 1
    internal_n_samples <- burnin + n_samples * thinning
  } else {
    internal_burnin <- burnin
    internal_thinning <- thinning
    internal_n_samples <- n_samples
  }

  samples <- t(rmallows(
    rho0 = rho0,
    alpha0 = alpha0,
    n_samples = internal_n_samples,
    burnin = internal_burnin,
    thinning = internal_thinning,
    leap_size = leap_size,
    metric = metric
  ))

  if (diagnostic) {
    if (is.null(items_to_plot) && n_items > 5) {
      message("Items not provided by user. Picking 5 at random.")
      items_to_plot <- sample.int(n_items, 5)
    } else if (is.null(items_to_plot)) {
      items_to_plot <- seq(from = 1, to = n_items, by = 1)
    }

    # Compute the autocorrelation in the samples
    autocorr <- apply(samples[, items_to_plot, drop = FALSE], 2, stats::acf,
      lag.max = max_lag, plot = FALSE, demean = TRUE
    )
    names(autocorr) <- items_to_plot

    autocorr <- do.call(rbind, Map(function(x, xnm) {
      data.frame(
        item = xnm,
        acf = x$acf[, 1, 1],
        lag = x$lag[, 1, 1]
      )
    }, x = autocorr, xnm = names(autocorr)))

    autocorr$item <- as.factor(as.integer(autocorr$item))

    ac_plot <- ggplot2::ggplot(
      autocorr,
      ggplot2::aes(x = .data$lag, y = .data$acf, color = .data$item)
    ) +
      ggplot2::geom_line() +
      ggplot2::theme(legend.title = ggplot2::element_blank()) +
      ggplot2::xlab("Lag") +
      ggplot2::ylab("Autocorrelation") +
      ggplot2::ggtitle("Autocorrelation of Rank Values")
    print(ac_plot)

    con <- getOption("ask_opts.con", stdin())
    print("Press [enter] to see the next plot")
    response <- readLines(con = con, n = 1)

    colnames(samples) <- seq(from = 1, to = n_items, by = 1)
    diagnostic <- as.data.frame(samples)
    diagnostic$iteration <- seq_len(nrow(diagnostic))

    diagnostic <- stats::reshape(diagnostic,
      direction = "long",
      varying = setdiff(names(diagnostic), "iteration"),
      v.names = "value",
      timevar = "item",
      times = setdiff(names(diagnostic), "iteration"),
      idvar = "iteration",
      ids = diagnostic$iteration
    )
    diagnostic <- diagnostic[diagnostic$item %in% items_to_plot, , drop = FALSE]
    diagnostic$item <- as.factor(as.integer(diagnostic$item))

    rho_plot <- ggplot2::ggplot(
      diagnostic,
      ggplot2::aes(x = .data$iteration, y = .data$value, color = .data$item)
    ) +
      ggplot2::geom_line() +
      ggplot2::theme(legend.title = ggplot2::element_blank()) +
      ggplot2::xlab("Iteration") +
      ggplot2::ylab("Rank value") +
      ggplot2::ggtitle("Trace Plot of Rank Values")
    print(rho_plot)

    samples <- samples[
      seq(from = burnin + 1, by = thinning, length.out = n_samples),
    ]
  }
  return(samples)
}
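# Usage sketch (illustrative addition, not from the package source): draw 100
# samples centered at the identity ranking of five items; the default
# burn-in and thinning make the samples approximately independent.
set.seed(1)
samples <- sample_mallows(rho0 = 1:5, alpha0 = 3, n_samples = 100)
dim(samples)
#> [1] 100   5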
# Source file: R/sample_mallows.R
#' Sample from prior distribution
#'
#' Function to obtain samples from the prior distributions of the Bayesian
#' Mallows model. Intended to be given to [update_mallows()].
#'
#' @param n An integer specifying the number of samples to take.
#' @param n_items An integer specifying the number of items to be ranked.
#' @param priors An object of class "BayesMallowsPriors" returned from
#'   [set_priors()].
#'
#' @return An object of class "BayesMallowsPriorSamples", containing `n`
#'   independent samples of \eqn{\alpha} and \eqn{\rho}.
#'
#' @export
#'
#' @family modeling
#' @example /inst/examples/sample_prior_example.R
sample_prior <- function(n, n_items, priors = set_priors()) {
  validate_positive(n)
  validate_positive(n_items)
  ret <- list(
    alpha = stats::rgamma(n, shape = priors$gamma, rate = priors$lambda),
    rho = replicate(n, sample(n_items, n_items)),
    priors = priors,
    n_items = n_items,
    items = seq_len(n_items)
  )
  class(ret) <- "BayesMallowsPriorSamples"
  ret
}
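# Usage sketch (illustrative addition, not from the package source): draw 500
# prior samples, e.g., for initializing update_mallows() before any data have
# arrived. Under the default priors, alpha has mean
# gamma / lambda = 1 / 0.001 = 1000.
prior <- sample_prior(n = 500, n_items = 10)
mean(prior$alpha)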
# Source file: R/sample_prior.R
#' @title Specify options for computation
#'
#' @description Set parameters related to the Metropolis-Hastings algorithm.
#'
#' @param nmc Integer specifying the number of iterations of the
#'   Metropolis-Hastings algorithm to run. Defaults to `2000`. See
#'   [assess_convergence()] for tools to check convergence of the Markov chain.
#'
#' @param burnin Integer defining the number of samples to discard. Defaults to
#'   `NULL`, which means that burn-in is not set.
#'
#' @param alpha_prop_sd Numeric value specifying the \eqn{\sigma} parameter of
#'   the lognormal proposal distribution used for \eqn{\alpha} in the
#'   Metropolis-Hastings algorithm. The logarithm of the proposed samples will
#'   have standard deviation given by `alpha_prop_sd`. Defaults to `0.1`.
#'
#' @param rho_proposal Character string specifying the proposal distribution of
#'   modal ranking \eqn{\rho}. Defaults to "ls", which means that the
#'   leap-and-shift algorithm of \insertCite{vitelli2018;textual}{BayesMallows}
#'   is used. The other option is "swap", which means that the swap proposal of
#'   \insertCite{crispino2019;textual}{BayesMallows} is used instead.
#'
#' @param leap_size Integer specifying the step size of the distribution
#'   defined in `rho_proposal` for proposing new latent ranks \eqn{\rho}.
#'   Defaults to 1.
#'
#' @param aug_method Augmentation proposal for use with missing data. One of
#'   "pseudo" and "uniform". Defaults to "uniform", which means that new
#'   augmented rankings are proposed by sampling uniformly from the set of
#'   available ranks, see Section 4 in
#'   \insertCite{vitelli2018;textual}{BayesMallows}. Setting the argument to
#'   "pseudo" instead means that the pseudo-likelihood proposal defined in
#'   Chapter 5 of
#'   \insertCite{steinSequentialInferenceMallows2023;textual}{BayesMallows} is
#'   used instead.
#'
#' @param pseudo_aug_metric String defining the metric to be used in the
#'   pseudo-likelihood proposal. Only used if `aug_method = "pseudo"`. Can be
#'   either "footrule" or "spearman", and defaults to "footrule".
#'
#' @param swap_leap Integer specifying the leap size for the swap proposal used
#'   for proposing latent ranks in the case of non-transitive pairwise
#'   preference data. Note that the leap size for the swap proposal when used
#'   for proposing the modal ranking \eqn{\rho} is given by the
#'   \code{leap_size} argument above.
#'
#' @param alpha_jump Integer specifying how many times to sample \eqn{\rho}
#'   between each sampling of \eqn{\alpha}. In other words, how many times to
#'   jump over \eqn{\alpha} while sampling \eqn{\rho}, and possibly other
#'   parameters like augmented ranks \eqn{\tilde{R}} or cluster assignments
#'   \eqn{z}. Setting `alpha_jump` to a high number can speed up computation
#'   time, by reducing the number of times the partition function for the
#'   Mallows model needs to be computed. Defaults to `1`.
#'
#' @param aug_thinning Integer specifying the thinning for saving augmented
#'   data. Only used when `save_aug = TRUE`. Defaults to `1`.
#'
#' @param clus_thinning Integer specifying the thinning to be applied to
#'   cluster assignments and cluster probabilities. Defaults to `1`.
#'
#' @param rho_thinning Integer specifying the thinning of `rho` to be performed
#'   in the Metropolis-Hastings algorithm. Defaults to `1`. `compute_mallows`
#'   saves every `rho_thinning`th value of \eqn{\rho}.
#'
#' @param include_wcd Logical indicating whether to store the within-cluster
#'   distances computed during the Metropolis-Hastings algorithm. Defaults to
#'   `FALSE`.
#'   Setting `include_wcd = TRUE` is useful when deciding the number of
#'   mixture components to include, and is required by [plot_elbow()].
#'
#' @param save_aug Logical specifying whether or not to save the augmented
#'   rankings every `aug_thinning`th iteration, for the case of missing data or
#'   pairwise preferences. Defaults to `FALSE`. Saving augmented data is useful
#'   for predicting the rankings each assessor would give to the items not yet
#'   ranked, and is required by [plot_top_k()].
#'
#' @param save_ind_clus Whether or not to save the individual cluster
#'   probabilities in each step. This results in csv files `cluster_probs1.csv`,
#'   `cluster_probs2.csv`, ..., being saved in the calling directory. This
#'   option may slow down the code considerably, but is necessary for detecting
#'   label switching using Stephens' algorithm.
#'
#' @return An object of class `"BayesMallowsComputeOptions"`, to be provided in
#'   the `compute_options` argument to [compute_mallows()],
#'   [compute_mallows_mixtures()], or [update_mallows()].
#' @export
#'
#' @family preprocessing
#'
#' @references \insertAllCited{}
#'
set_compute_options <- function(
    nmc = 2000,
    burnin = NULL,
    alpha_prop_sd = 0.1,
    rho_proposal = c("ls", "swap"),
    leap_size = 1,
    aug_method = c("uniform", "pseudo"),
    pseudo_aug_metric = c("footrule", "spearman"),
    swap_leap = 1,
    alpha_jump = 1,
    aug_thinning = 1,
    clus_thinning = 1,
    rho_thinning = 1,
    include_wcd = FALSE,
    save_aug = FALSE,
    save_ind_clus = FALSE) {
  rho_proposal <- match.arg(rho_proposal, c("ls", "swap"))
  aug_method <- match.arg(aug_method, c("uniform", "pseudo"))
  pseudo_aug_metric <- match.arg(pseudo_aug_metric, c("footrule", "spearman"))
  validate_integer(nmc)
  if (!is.null(burnin)) validate_integer(burnin)
  validate_positive(alpha_prop_sd)
  validate_integer(leap_size)
  validate_integer(swap_leap)
  validate_integer(alpha_jump)
  validate_integer(aug_thinning)
  validate_integer(clus_thinning)
  validate_integer(rho_thinning)
  validate_logical(include_wcd)
  validate_logical(save_aug)
  validate_logical(save_ind_clus)
  if (!is.null(burnin)) check_larger(nmc, burnin)
  check_larger(nmc, alpha_jump)
  check_larger(nmc, aug_thinning)
  check_larger(nmc, clus_thinning)
  check_larger(nmc, rho_thinning)

  if (save_ind_clus) prompt_save_files(nmc)

  ret <- as.list(environment())
  class(ret) <- "BayesMallowsComputeOptions"
  ret
}

prompt_save_files <- function(nmc) {
  con <- getOption("ask_opts.con", stdin())
  print(
    paste(
      nmc, "csv files will be saved in your current working directory.",
      "Proceed? (yes/no): "
    )
  )
  response <- readLines(con = con, n = 1)
  if (tolower(response) %in% c("n", "no")) stop("quitting")
}
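# Usage sketch (illustrative addition, not from the package source): a longer
# chain with burn-in set up front and thinning of the modal ranking, to be
# passed as the compute_options argument of compute_mallows().
opts <- set_compute_options(nmc = 10000, burnin = 2000, rho_thinning = 10)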
# Source file: R/set_compute_options.R
#' @title Set initial values of scale parameter and modal ranking
#'
#' @description
#' Set initial values used by the Metropolis-Hastings algorithm.
#'
#' @param rho_init Numeric vector specifying the initial value of the latent
#'   consensus ranking \eqn{\rho}. Defaults to NULL, which means that the
#'   initial value is set randomly. If `rho_init` is provided when
#'   `n_clusters > 1`, each mixture component \eqn{\rho_{c}} gets the same
#'   initial value.
#'
#' @param alpha_init Numeric value specifying the initial value of the scale
#'   parameter \eqn{\alpha}. Defaults to `1`. When `n_clusters > 1`,
#'   each mixture component \eqn{\alpha_{c}} gets the same initial value. When
#'   chains are run in parallel, by providing an argument `cl = cl`, then
#'   `alpha_init` can be a vector of length `length(cl)`, each
#'   element of which becomes an initial value for the given chain.
#'
#' @return An object of class `"BayesMallowsInitialValues"`, to be
#'   provided to the `initial_values` argument of [compute_mallows()] or
#'   [compute_mallows_mixtures()].
#'
#' @export
#'
#' @family preprocessing
#'
set_initial_values <- function(rho_init = NULL, alpha_init = 1) {
  if (!is.null(rho_init)) {
    if (!validate_permutation(rho_init)) {
      stop("rho_init must be a proper permutation")
    }
    if (!(sum(is.na(rho_init)) == 0)) {
      stop("rho_init cannot have missing values")
    }
    rho_init <- matrix(rho_init, ncol = 1)
  }
  ret <- as.list(environment())
  class(ret) <- "BayesMallowsInitialValues"
  ret
}
# Source file: R/set_initial_values.R
#' @title Set options for Bayesian Mallows model
#'
#' @description
#' Specify various model options for the Bayesian Mallows model.
#'
#' @param metric A character string specifying the distance metric to use in
#'   the Bayesian Mallows Model. Available options are `"footrule"`,
#'   `"spearman"`, `"cayley"`, `"hamming"`, `"kendall"`, and `"ulam"`. The
#'   distance given by `metric` is also used to compute within-cluster
#'   distances, when `include_wcd = TRUE`.
#'
#' @param error_model Character string specifying which model to use for
#'   inconsistent rankings. Defaults to `"none"`, which means that inconsistent
#'   rankings are not allowed. At the moment, the only other available option
#'   is `"bernoulli"`, which means that the Bernoulli error model is used. See
#'   \insertCite{crispino2019;textual}{BayesMallows} for a definition of the
#'   Bernoulli model.
#'
#' @param n_clusters Integer specifying the number of clusters, i.e., the
#'   number of mixture components to use. Defaults to `1L`, which means no
#'   clustering is performed. See [compute_mallows_mixtures()] for a
#'   convenience function for computing several models with varying numbers of
#'   mixtures.
#'
#' @return An object of class `"BayesMallowsModelOptions"`, to be provided in
#'   the `model_options` argument to [compute_mallows()],
#'   [compute_mallows_mixtures()], or [update_mallows()].
#'
#' @export
#'
#' @family preprocessing
#'
#' @references \insertAllCited{}
#'
set_model_options <- function(
    metric = c("footrule", "spearman", "cayley", "hamming", "kendall", "ulam"),
    n_clusters = 1,
    error_model = c("none", "bernoulli")) {
  metric <- match.arg(metric, c(
    "footrule", "spearman", "cayley", "hamming", "kendall", "ulam"
  ))
  error_model <- match.arg(error_model, c("none", "bernoulli"))
  validate_integer(n_clusters)
  validate_positive(n_clusters)

  ret <- as.list(environment())
  class(ret) <- "BayesMallowsModelOptions"
  ret
}
# Source file: R/set_model_options.R
#' @title Set prior parameters for Bayesian Mallows model
#'
#' @description Set values related to the prior distributions for the Bayesian
#'   Mallows model.
#'
#' @param gamma Strictly positive numeric value specifying the shape parameter
#'   of the gamma prior distribution of \eqn{\alpha}. Defaults to `1`, thus
#'   recovering the exponential prior distribution used by
#'   \insertCite{vitelli2018}{BayesMallows}.
#'
#' @param lambda Strictly positive numeric value specifying the rate parameter
#'   of the gamma prior distribution of \eqn{\alpha}. Defaults
#'   to `0.001`. When `n_cluster > 1`, each mixture component \eqn{\alpha_{c}}
#'   has the same prior distribution.
#'
#' @param psi Positive integer specifying the concentration parameter
#'   \eqn{\psi} of the Dirichlet prior distribution used for the cluster
#'   probabilities \eqn{\tau_{1}, \tau_{2}, \dots, \tau_{C}}, where \eqn{C} is
#'   the value of `n_clusters`. Defaults to `10L`. When `n_clusters = 1`, this
#'   argument is not used.
#'
#' @param kappa Hyperparameters of the truncated Beta prior used for error
#'   probability \eqn{\theta} in the Bernoulli error model. The prior has the
#'   form \eqn{\pi(\theta) = \theta^{\kappa_{1}} (1 - \theta)^{\kappa_{2}}}.
#'   Defaults to `c(1, 3)`, which means that \eqn{\theta} is a priori
#'   expected to be closer to zero than to 0.5. See
#'   \insertCite{crispino2019}{BayesMallows} for details.
#'
#' @return An object of class `"BayesMallowsPriors"`, to be provided in the
#'   `priors` argument to [compute_mallows()], [compute_mallows_mixtures()],
#'   or [update_mallows()].
#' @export
#'
#' @references \insertAllCited{}
#'
#' @family preprocessing
#'
set_priors <- function(gamma = 1, lambda = 0.001, psi = 10, kappa = c(1, 3)) {
  stopifnot(length(kappa) == 2)
  validate_positive(gamma)
  validate_positive(lambda)
  validate_integer(psi)
  validate_positive(psi)
  validate_positive(kappa[[1]])
  validate_positive(kappa[[2]])
  ret <- as.list(environment())
  class(ret) <- "BayesMallowsPriors"
  ret
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/set_priors.R
#' @title Set SMC compute options
#'
#' @description Sets the SMC compute options to be used in
#'   [update_mallows.BayesMallows()].
#'
#' @param n_particles Integer specifying the number of particles.
#' @param mcmc_steps Number of MCMC steps to be applied in the resample-move
#'   step.
#' @param resampler Character string defining the resampling method to use. One
#'   of "stratified", "systematic", "residual", and "multinomial". Defaults to
#'   "stratified". While multinomial resampling was used in
#'   \insertCite{steinSequentialInferenceMallows2023;textual}{BayesMallows},
#'   stratified, systematic, or residual resampling typically give lower Monte
#'   Carlo error \insertCite{Douc2005,Hol2006,Naesseth2019}{BayesMallows}.
#' @param latent_sampling_lag Parameter specifying the number of timesteps to go
#'   back when resampling the latent ranks in the move step. See Section 6.2.3
#'   of \insertCite{Kantas2015}{BayesMallows} for details. The \eqn{L} in their
#'   notation corresponds to `latent_sampling_lag`. See more under Details.
#'   Defaults to `NA`, which means that all latent ranks from previous timesteps
#'   are moved. If set to `0`, no move step is applied to the latent ranks.
#'
#' @return An object of class "SMCOptions".
#'
#'
#' @section Lag parameter in move step:
#'
#' The parameter `latent_sampling_lag` corresponds to \eqn{L} in
#' \insertCite{Kantas2015}{BayesMallows}. Its use in this package can be
#' explained in terms of Algorithm 12 in
#' \insertCite{steinSequentialInferenceMallows2023}{BayesMallows}. The
#' relevant line of the algorithm is:
#'
#' **for** \eqn{j = 1 : M_{t}} **do** \cr
#' **M-H step:** update \eqn{\tilde{\mathbf{R}}_{j}^{(i)}} with proposal
#'   \eqn{\tilde{\mathbf{R}}_{j}' \sim q(\tilde{\mathbf{R}}_{j}^{(i)} |
#'   \mathbf{R}_{j}, \boldsymbol{\rho}_{t}^{(i)}, \alpha_{t}^{(i)})}.\cr
#' **end**
#'
#' Let \eqn{L} denote the value of `latent_sampling_lag`. With this parameter,
#' we modify the algorithm so that it becomes
#'
#' **for** \eqn{j = M_{t-L+1} : M_{t}} **do** \cr
#' **M-H step:** update \eqn{\tilde{\mathbf{R}}_{j}^{(i)}} with proposal
#'   \eqn{\tilde{\mathbf{R}}_{j}' \sim q(\tilde{\mathbf{R}}_{j}^{(i)} |
#'   \mathbf{R}_{j}, \boldsymbol{\rho}_{t}^{(i)}, \alpha_{t}^{(i)})}.\cr
#' **end**
#'
#' This means that with \eqn{L=0} no move step is performed on any latent
#' ranks, whereas \eqn{L=1} means that the move step is only applied to the
#' parameters entering at the given timestep. The default,
#' `latent_sampling_lag = NA`, means that \eqn{L=t} at each timestep, and hence
#' all latent ranks are part of the move step at each timestep.
#'
#'
#' @export
#' @references \insertAllCited{}
#'
#' @family preprocessing
set_smc_options <- function(
    n_particles = 1000,
    mcmc_steps = 5,
    resampler = c("stratified", "systematic", "residual", "multinomial"),
    latent_sampling_lag = NA_integer_) {
  validate_integer(n_particles)
  validate_integer(mcmc_steps)
  if (!is.na(latent_sampling_lag)) validate_integer(latent_sampling_lag)
  resampler <- match.arg(
    resampler,
    c("stratified", "systematic", "residual", "multinomial")
  )

  ret <- as.list(environment())
  class(ret) <- "SMCOptions"
  ret
}
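# Illustrative sketch (settings chosen for exposition, not a recommendation):
# more particles and MCMC steps than the defaults, with the move step
# restricted to the latent ranks of the two most recent timesteps (L = 2):
#
#   smc_opts <- set_smc_options(
#     n_particles = 2000, mcmc_steps = 10, latent_sampling_lag = 2
#   )
#   # smc_opts is then passed to update_mallows() via `smc_options`.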
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/set_smc_options.R
#' @title Setup rank data
#'
#' @description Prepare rank or preference data for further analyses.
#'
#' @param rankings A matrix of ranked items, of size `n_assessors x n_items`.
#'   See [create_ranking()] if you have an ordered set of items that need to be
#'   converted to rankings. If `preferences` is provided, `rankings` is an
#'   optional initial value of the rankings. If `rankings` has column names,
#'   these are assumed to be the names of the items. `NA` values in `rankings`
#'   are treated as missing data and automatically augmented; to change this
#'   behavior, see the `na_action` argument. A vector of length `n_items` is
#'   silently converted to a matrix of dimension `1 x n_items`, and its names
#'   (if any) are used as column names.
#'
#' @param preferences A data frame with one row per pairwise comparison, and
#'   columns `assessor`, `top_item`, and `bottom_item`. Each column contains the
#'   following:
#' \itemize{
#' \item `assessor` is a numeric vector containing the assessor index.
#'
#' \item `bottom_item` is a numeric vector containing the index of the item that
#'   was disfavored in each pairwise comparison.
#'
#' \item `top_item` is a numeric vector containing the index of the item that was
#'   preferred in each pairwise comparison.
#' }
#'   So if we have two assessors and five items, and assessor 1 prefers item 1
#'   to item 2 and item 1 to item 5, while assessor 2 prefers item 3 to item 5,
#'   we have the following `df`:
#' \tabular{rrr}{
#' **assessor** \tab **bottom_item** \tab **top_item**\cr
#' 1 \tab 2 \tab 1\cr
#' 1 \tab 5 \tab 1\cr
#' 2 \tab 5 \tab 3\cr
#' }
#'
#' @param user_ids Optional `numeric` vector of user IDs. Only used by
#'   [update_mallows()]. If provided, new data can consist of updated partial
#'   rankings from users already in the dataset, as described in Section 6 of
#'   \insertCite{steinSequentialInferenceMallows2023;textual}{BayesMallows}.
#'
#' @param observation_frequency A vector of observation frequencies (weights) to
#'   apply to each row in `rankings`. This can speed up computation if a large
#'   number of assessors share the same rank pattern. Defaults to `NULL`, which
#'   means that each row of `rankings` is multiplied by 1. If provided,
#'   `observation_frequency` must have the same number of elements as there are
#'   rows in `rankings`, and `rankings` cannot be `NULL`. See
#'   [compute_observation_frequency()] for a convenience function for computing
#'   it.
#'
#' @param validate_rankings Logical specifying whether the rankings provided (or
#'   generated from `preferences`) should be validated. Defaults to `TRUE`.
#'   Turning off this check will reduce computing time with a large number of
#'   items or assessors.
#'
#' @param na_action Character specifying how to deal with `NA` values in the
#'   `rankings` matrix, if provided. Defaults to `"augment"`, which means that
#'   missing values are automatically filled in using the Bayesian data
#'   augmentation scheme described in
#'   \insertCite{vitelli2018;textual}{BayesMallows}. The other options for this
#'   argument are `"fail"`, which means that an error message is printed and the
#'   algorithm stops if there are `NA`s in `rankings`, and `"omit"`, which simply
#'   deletes rows with `NA`s in them.
#'
#' @param cl Optional computing cluster used for parallelization when generating
#'   transitive closure based on preferences, returned from
#'   [parallel::makeCluster()]. Defaults to `NULL`.
#'
#' @param shuffle_unranked Logical specifying whether or not to randomly permute
#'   unranked items in the initial ranking. When `shuffle_unranked=TRUE` and
#'   `random=FALSE`, all unranked items for each assessor are randomly permuted.
#'   Otherwise, the first ordering returned by `igraph::topo_sort()` is used.
#'
#' @param random Logical specifying whether or not to use a random initial
#'   ranking. Defaults to `FALSE`. Setting this to `TRUE` means that all
#'   possible orderings consistent with the stated pairwise preferences are
#'   generated for each assessor, and one of them is picked at random.
#'
#' @param random_limit Integer specifying the maximum number of items allowed
#'   when all possible orderings are computed, i.e., when `random=TRUE`.
#'   Defaults to `8L`.
#'
#' @param timepoint Integer vector specifying the timepoint. Defaults to `NULL`,
#'   which means that a vector of ones, one for each observation, is generated.
#'   Used by [update_mallows()] to associate each observation with a given
#'   iteration of the sequential Monte Carlo algorithm. If not `NULL`, must
#'   contain one integer for each row in `rankings`.
#'
#' @param n_items Integer specifying the number of items. Defaults to `NULL`,
#'   which means that the number of items is inferred from `rankings` or from
#'   `preferences`. Setting `n_items` manually can be useful with pairwise
#'   preference data in the SMC algorithm, i.e., when `rankings` is `NULL` and
#'   `preferences` is non-`NULL`, and contains a small number of pairwise
#'   preferences for a subset of users and items.
#'
#' @note Setting `random=TRUE` means that all possible orderings of each
#'   assessor's preferences are generated, and one of them is picked at random.
#'   This can be useful when experiencing convergence issues, e.g., if the MCMC
#'   algorithm does not mix properly. However, finding all possible orderings is
#'   a combinatorial problem, which may be computationally very hard. The result
#'   may not even fit in memory, which may cause the R session to crash. When
#'   using this option, please try to increase the size of the problem
#'   incrementally, by starting with smaller subsets of the complete data. An
#'   example is given below.
#'
#'   It is assumed that the items are labeled starting from 1. For example, if a
#'   single comparison of the following form is provided, it is assumed that
#'   there is a total of 30 items (`n_items=30`), and the initial ranking is a
#'   permutation of these 30 items consistent with the preference 29<30.
#'
#' \tabular{rrr}{
#' **assessor** \tab **bottom_item** \tab **top_item**\cr
#' 1 \tab 29 \tab 30\cr
#' }
#'
#'   If in reality there are only two items, they should be relabeled to 1 and
#'   2, as follows:
#'
#' \tabular{rrr}{
#' **assessor** \tab **bottom_item** \tab **top_item**\cr
#' 1 \tab 1 \tab 2\cr
#' }
#'
#'
#'
#' @return An object of class `"BayesMallowsData"`, to be provided in the `data`
#'   argument to [compute_mallows()].
#' @export #' #' @family preprocessing #' #' @references \insertAllCited{} #' setup_rank_data <- function( rankings = NULL, preferences = NULL, user_ids = numeric(), observation_frequency = NULL, validate_rankings = TRUE, na_action = c("augment", "fail", "omit"), cl = NULL, shuffle_unranked = FALSE, random = FALSE, random_limit = 8L, timepoint = NULL, n_items = NULL) { na_action <- match.arg(na_action, c("augment", "fail", "omit")) if (!is.null(rankings) && !is.null(n_items)) { stop("n_items can only be set when rankings=NULL") } if (is.null(rankings) && is.null(preferences)) { stop("Either rankings or preferences (or both) must be provided.") } preferences <- generate_transitive_closure(preferences, cl) if (!is.null(rankings)) { if (na_action == "fail" && any(is.na(rankings))) { stop("rankings matrix contains NA values") } if (!is.matrix(rankings)) { rankings <- matrix(rankings, nrow = 1, dimnames = list(NULL, names(rankings)) ) } if (na_action == "omit" && any(is.na(rankings))) { keeps <- apply(rankings, 1, function(x) !any(is.na(x))) print(paste( "Omitting", sum(!keeps), "row(s) from rankings due to NA values" )) rankings <- rankings[keeps, , drop = FALSE] } } else { if (is.null(n_items)) n_items <- max(preferences[, c("bottom_item", "top_item")]) rankings <- generate_initial_ranking( preferences, n_items, cl, shuffle_unranked, random, random_limit ) } if (!is.null(observation_frequency)) { validate_positive_vector(observation_frequency) if (nrow(rankings) != length(observation_frequency)) { stop( "observation_frequency must be of same ", "length as the number of rows in rankings" ) } } else { observation_frequency <- rep(1, nrow(rankings)) } if (validate_rankings && !all(apply(rankings, 1, validate_permutation))) { stop("invalid permutations provided in rankings matrix") } n_items <- ncol(rankings) if (!is.null(colnames(rankings))) { items <- colnames(rankings) } else { items <- paste("Item", seq(from = 1, to = n_items, by = 1)) } if (is.null(timepoint)) timepoint <- rep(1, nrow(rankings)) if (length(timepoint) != nrow(rankings)) { stop("must have one timepoint per row") } constraints <- generate_constraints(preferences, n_items, cl) consistent <- matrix(integer(0)) n_assessors <- nrow(rankings) any_missing <- any(is.na(rankings)) augpair <- !is.null(preferences) stopifnot(is.numeric(user_ids)) ret <- as.list(environment()) class(ret) <- "BayesMallowsData" ret }
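# Illustrative sketch mirroring the documentation example above: two
# assessors, where assessor 1 prefers item 1 to items 2 and 5, and assessor 2
# prefers item 3 to item 5. `n_items = 5` is supplied explicitly since no
# `rankings` matrix is given:
#
#   pref <- data.frame(
#     assessor = c(1, 1, 2),
#     bottom_item = c(2, 5, 5),
#     top_item = c(1, 1, 3)
#   )
#   dat <- setup_rank_data(preferences = pref, n_items = 5)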
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/setup_rank_data.R
tidy_smc <- function(ret, items) { result <- list() result$alpha <- tidy_alpha(matrix(ret$alpha_samples, nrow = 1), 1, 1) rho_mat <- array(dim = c(dim(ret$rho_samples)[[1]], 1, dim(ret$rho_samples)[[2]])) rho_mat[, 1, ] <- ret$rho_samples result$rho <- tidy_rho(rho_mat, 1, 1, items) result } extract_alpha_init <- function(model, n_particles) { thinned_inds <- floor( seq( from = burnin(model) + 1, to = ncol(model$alpha_samples), length.out = n_particles ) ) model$alpha_samples[1, thinned_inds, drop = TRUE] } extract_rho_init <- function(model, n_particles) { thinned_inds <- floor( seq( from = burnin(model) + 1, to = dim(model$rho_samples)[[3]], length.out = n_particles ) ) model$rho_samples[, 1, thinned_inds, drop = TRUE] } run_common_part <- function( data, new_data, model_options, smc_options, compute_options, priors, initial_values, pfun_list, model) { ret <- run_smc( data = data, new_data = list(new_data), model_options = model_options, smc_options = smc_options, compute_options = compute_options, priors = priors, initial_values = initial_values, pfun_values = pfun_list$pfun_values, pfun_estimate = pfun_list$pfun_estimate ) ret$alpha_samples <- ret$alpha_samples[, 1] ret$rho_samples <- ret$rho_samples[, , 1] ret <- c(ret, tidy_smc(ret, data$items)) ret$model_options <- model_options ret$smc_options <- smc_options ret$compute_options <- compute_options class(ret$compute_options) <- "list" ret$priors <- priors ret$n_items <- model$n_items ret$n_clusters <- 1 ret$data <- new_data ret$pfun_values <- pfun_list$pfun_values ret$pfun_estimate <- pfun_list$pfun_estimate ret$model_options$metric <- model_options$metric if (prod(dim(ret$augmented_rankings)) == 0) ret$augmented_rankings <- NULL ret$items <- data$items class(ret) <- c("SMCMallows", "BayesMallows") ret } flush <- function(data) { data$rankings <- data$rankings[integer(), , drop = FALSE] data$preferences <- data$preferences[integer(), , drop = FALSE] data$constraints <- data$constraints[integer()] data$n_assessors <- 0 data$observation_frequency <- data$observation_frequency[integer()] data$consistent <- data$consistent[integer(), , drop = FALSE] data$user_ids <- data$user_ids[integer()] data$timepoint <- data$timepoint[integer()] data }
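# Illustrative arithmetic for the thinning used by extract_alpha_init() and
# extract_rho_init() above: with a burn-in of 1000, 2000 stored samples, and 4
# particles, the selected indices are evenly spread over the retained samples.
#
#   floor(seq(from = 1000 + 1, to = 2000, length.out = 4))
#   #> [1] 1001 1334 1667 2000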
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/smc_misc.R
tidy_mcmc <- function(fits, data, model_options, compute_options) { fit <- list() fit$save_aug <- compute_options$save_aug rho_dims <- dim(fits[[1]]$rho) fit$rho_samples <- fits[[1]]$rho for (f in fits[-1]) { fit$rho_samples <- abind(fit$rho_samples, f$rho) } rhos <- lapply(seq_along(fits), function(i) fits[[i]]$rho) fit$rho <- do.call(rbind, lapply(seq_along(rhos), function(i) { tidy_rho(rhos[[i]], i, compute_options$rho_thinning, data$items) })) alpha_dims <- dim(fits[[1]]$alpha) alphas <- lapply(seq_along(fits), function(i) fits[[i]]$alpha) fit$alpha_samples <- matrix(as.vector(do.call(rbind, alphas)), nrow = alpha_dims[[1]], ncol = length(fits) * alpha_dims[[2]] ) fit$alpha <- do.call(rbind, lapply(seq_along(alphas), function(i) { tidy_alpha(alphas[[i]], i, compute_options$alpha_jump) })) fit$cluster_assignment <- do.call(rbind, lapply(seq_along(fits), function(i) { tidy_cluster_assignment( fits[[i]]$cluster_assignment, i, model_options$n_clusters, data$n_assessors, compute_options$nmc ) })) fit$cluster_probs <- do.call(rbind, lapply(seq_along(fits), function(i) { tidy_cluster_probabilities( fits[[i]]$cluster_probs, i, model_options$n_clusters, compute_options$nmc ) })) fit$within_cluster_distance <- do.call(rbind, lapply(seq_along(fits), function(i) { tidy_wcd(fits[[i]]$within_cluster_distance, i) })) fit$augmented_data <- do.call(rbind, lapply(seq_along(fits), function(i) { tidy_augmented_data( fits[[i]]$augmented_data, i, data$items, compute_options$aug_thinning ) })) fit$theta <- do.call(rbind, lapply(seq_along(fits), function(i) { tidy_error_probability(fits[[i]]$theta, i) })) fit$n_clusters <- model_options$n_clusters fit$data <- data fit$compute_options <- compute_options fit$acceptance_ratios <- list( alpha_acceptance = lapply(fits, function(x) x$alpha_acceptance), rho_acceptance = lapply(fits, function(x) x$rho_acceptance), aug_acceptance = lapply(fits, function(x) x$aug_acceptance) ) return(fit) } tidy_rho <- function(rho_mat, chain, rho_thinning, items) { # Tidy rho rho_dims <- dim(rho_mat) # Item1, Item2, Item3, ...., Item1, Item2, Item3 # Cluster1, Cluster1, Cluster1, ..., Cluster2, Cluster2, Cluster2 # Iteration1, Iteration1, ..., Iteration1, Iteration1, Iteration1, Iteration2 value <- c(rho_mat) item <- rep(items, times = rho_dims[[2]] * rho_dims[[3]]) item <- factor(item, levels = items) cluster <- rep( seq(from = 1, to = rho_dims[[2]], by = 1), each = rho_dims[[1]], times = rho_dims[[3]] ) cluster <- factor(paste("Cluster", cluster), levels = paste("Cluster", sort(unique(cluster))) ) iteration <- rep(seq(from = 1, to = rho_dims[[3]] * rho_thinning, by = rho_thinning), each = rho_dims[[1]] * rho_dims[[2]] ) # Store the final rho as a dataframe data.frame( chain = factor(chain), item = item, cluster = cluster, iteration = iteration, value = value ) } tidy_alpha <- function(alpha_mat, chain, alpha_jump) { # Tidy alpha alpha_dims <- dim(alpha_mat) # Cluster1, Cluster2, ..., Cluster1, Cluster2 # Iteration1, Iteration1, ..., Iteration2, Iteration2 cluster <- rep( seq(from = 1, to = alpha_dims[[1]], by = 1), times = alpha_dims[[2]] ) cluster <- factor(paste("Cluster", cluster), levels = paste("Cluster", sort(unique(cluster))) ) iteration <- rep( seq(from = 1, to = alpha_dims[[2]] * alpha_jump, by = alpha_jump), each = alpha_dims[[1]] ) data.frame( chain = factor(chain), cluster = cluster, iteration = iteration, value = c(alpha_mat) ) } tidy_cluster_assignment <- function( cluster_assignment, chain, n_clusters, n_assessors, nmc) { if (n_clusters > 1) { cluster_dims <- 
dim(cluster_assignment)
    value <- paste("Cluster", cluster_assignment)
  } else if (n_assessors > 1) {
    cluster_dims <- c(n_assessors, nmc)
    value <- paste("Cluster", rep(1, prod(cluster_dims)))
  } else {
    return(data.frame())
  }
  # Assessor1, Assessor2, ..., Assessor1, Assessor2
  # Iteration1, Iteration1, ..., Iteration2, Iteration2
  assessor <- rep(
    seq(from = 1, to = cluster_dims[[1]], by = 1),
    times = cluster_dims[[2]]
  )
  iteration <- rep(
    seq(from = 1, to = cluster_dims[[2]], by = 1),
    each = cluster_dims[[1]]
  )
  data.frame(
    chain = factor(chain),
    assessor = assessor,
    iteration = iteration,
    value = value
  )
}

tidy_cluster_probabilities <- function(cluster_probs, chain, n_clusters, nmc) {
  # Tidy cluster probabilities
  if (n_clusters > 1) {
    clusprob_dims <- dim(cluster_probs)
    value <- c(cluster_probs)
  } else {
    clusprob_dims <- c(n_clusters, nmc)
    value <- rep(1, times = prod(clusprob_dims))
  }
  # Cluster1, Cluster2, ..., Cluster1, Cluster2
  # Iteration1, Iteration1, ..., Iteration2, Iteration2
  cluster <- rep(
    seq(from = 1, to = clusprob_dims[[1]], by = 1),
    times = clusprob_dims[[2]]
  )
  cluster <- factor(paste("Cluster", cluster),
    levels = paste("Cluster", sort(unique(cluster)))
  )
  iteration <- rep(
    seq(from = 1, to = clusprob_dims[[2]], by = 1),
    each = clusprob_dims[[1]]
  )
  data.frame(
    chain = factor(chain),
    cluster = cluster,
    iteration = iteration,
    value = value
  )
}

tidy_wcd <- function(within_cluster_distance, chain) {
  # Tidy the within-cluster distances, or return NULL if they were not saved
  if (!is.null(within_cluster_distance)) {
    wcd_dims <- dim(within_cluster_distance)
    value <- c(within_cluster_distance)
    # Cluster1, Cluster2, ..., Cluster1, Cluster2
    # Iteration1, Iteration1, ..., Iteration2, Iteration2
    # Keep the cluster index numeric here; the "Cluster" label is pasted on
    # exactly once, when the factor is created below.
    cluster <- rep(
      seq(from = 1, to = wcd_dims[[1]], by = 1),
      times = wcd_dims[[2]]
    )
    cluster <- factor(paste("Cluster", cluster),
      levels = paste("Cluster", sort(unique(cluster)))
    )
    iteration <- rep(
      seq(from = 1, to = wcd_dims[[2]], by = 1),
      each = wcd_dims[[1]]
    )
    data.frame(
      chain = factor(chain),
      cluster = cluster,
      iteration = iteration,
      value = value
    )
  } else {
    NULL
  }
}

tidy_augmented_data <- function(augmented_data, chain, items, aug_thinning) {
  # Tidy augmented data, or return NULL if they were not saved
  if (!is.null(augmented_data) && prod(dim(augmented_data)) != 0) {
    augdata_dims <- dim(augmented_data)
    # Item1, Item2, ..., Item1, Item2, ..., Item1, Item2, ..., Item1, Item2
    # Assessor1, Assessor1, ..., Assessor2, Assessor2, ... Assessor1, Assessor1, ..., Assessor2, Assessor2
    # Iteration1, Iteration1, ..., Iteration1, Iteration1, ..., Iteration2, Iteration2, ... Iteration2, Iteration2
    value <- c(augmented_data)
    item <- rep(items, times = augdata_dims[[2]] * augdata_dims[[3]])
    item <- factor(item, levels = items)
    assessor <- rep(seq(from = 1, to = augdata_dims[[2]], by = 1),
      each = augdata_dims[[1]], times = augdata_dims[[3]]
    )
    iteration <- rep(seq(from = 1, to = augdata_dims[[3]] * aug_thinning, by = aug_thinning),
      each = augdata_dims[[1]] * augdata_dims[[2]]
    )
    data.frame(
      chain = factor(chain),
      iteration = iteration,
      assessor = assessor,
      item = item,
      value = value
    )
  } else {
    NULL
  }
}

tidy_error_probability <- function(theta, chain) {
  theta_length <- length(theta)
  if (theta_length > 0) {
    data.frame(
      chain = factor(chain),
      iteration = seq(from = 1, to = theta_length, by = 1),
      value = c(theta)
    )
  } else {
    NULL
  }
}
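# Illustrative sketch of the tidy layout produced by tidy_rho() above: a toy
# array with 2 items, 1 cluster, and 3 saved iterations (thinning of 10)
# yields one row per (item, cluster, iteration) combination, with iteration
# numbers 1, 11, 21.
#
#   rho <- array(c(1, 2, 2, 1, 1, 2), dim = c(2, 1, 3))
#   tidy_rho(rho, chain = 1, rho_thinning = 10, items = c("A", "B"))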
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/tidy_mcmc.R
#' Update a Bayesian Mallows model with new users #' #' Update a Bayesian Mallows model estimated using the Metropolis-Hastings #' algorithm in [compute_mallows()] using the sequential Monte Carlo algorithm #' described in #' \insertCite{steinSequentialInferenceMallows2023;textual}{BayesMallows}. #' #' @param model A model object of class "BayesMallows" returned from #' [compute_mallows()], an object of class "SMCMallows" returned from this #' function, or an object of class "BayesMallowsPriorSamples" returned from #' [sample_prior()]. #' @param new_data An object of class "BayesMallowsData" returned from #' [setup_rank_data()]. The object should contain the new data being provided. #' @param model_options An object of class "BayesMallowsModelOptions" returned #' from [set_model_options()]. #' @param smc_options An object of class "SMCOptions" returned from #' [set_smc_options()]. #' @param compute_options An object of class "BayesMallowsComputeOptions" #' returned from [set_compute_options()]. #' @param priors An object of class "BayesMallowsPriors" returned from #' [set_priors()]. Defaults to the priors used in `model`. #' @param pfun_estimate Object returned from [estimate_partition_function()]. #' Defaults to \code{NULL}, and will only be used for footrule, Spearman, or #' Ulam distances when the cardinalities are not available, cf. #' [get_cardinalities()]. Only used by the specialization for objects of type #' "BayesMallowsPriorSamples". #' @param ... Optional arguments. Currently not used. #' #' @return An updated model, of class "SMCMallows". #' @export #' #' @family modeling #' #' @example /inst/examples/update_mallows_example.R #' update_mallows <- function(model, new_data, ...) { validate_class(new_data, "BayesMallowsData") UseMethod("update_mallows") } #' @export #' @rdname update_mallows update_mallows.BayesMallowsPriorSamples <- function( model, new_data, model_options = set_model_options(), smc_options = set_smc_options(), compute_options = set_compute_options(), priors = model$priors, pfun_estimate = NULL, ...) { alpha_init <- sample(model$alpha, smc_options$n_particles, replace = TRUE) rho_init <- model$rho[, sample(ncol(model$rho), smc_options$n_particles, replace = TRUE)] pfun_values <- extract_pfun_values(model_options$metric, new_data$n_items, pfun_estimate) if (length(new_data$user_ids) == 0) { new_data$user_ids <- seq(from = 1, to = nrow(new_data$rankings), by = 1) } run_common_part( data = flush(new_data), new_data = new_data, model_options = model_options, smc_options = smc_options, compute_options = compute_options, priors = priors, initial_values = list( alpha_init = alpha_init, rho_init = rho_init, aug_init = NULL ), pfun_list = list( pfun_values = pfun_values, pfun_estimate = pfun_estimate ), model = model ) } #' @export #' @rdname update_mallows update_mallows.BayesMallows <- function( model, new_data, model_options = set_model_options(), smc_options = set_smc_options(), compute_options = set_compute_options(), priors = model$priors, ...) 
{
  if (is.null(burnin(model))) stop("Burnin must be set.")
  alpha_init <- extract_alpha_init(model, smc_options$n_particles)
  rho_init <- extract_rho_init(model, smc_options$n_particles)
  if (length(new_data$user_ids) == 0) {
    new_data$user_ids <- seq(from = 1, to = nrow(new_data$rankings), by = 1)
  }

  run_common_part(
    data = flush(new_data), new_data = new_data, model_options = model_options,
    smc_options = smc_options, compute_options = compute_options,
    priors = priors,
    initial_values = list(
      alpha_init = alpha_init, rho_init = rho_init,
      aug_init = NULL
    ),
    pfun_list = list(
      pfun_values = model$pfun_values,
      pfun_estimate = model$pfun_estimate
    ),
    model = model
  )
}

#' @export
#' @rdname update_mallows
update_mallows.SMCMallows <- function(model, new_data, ...) {
  if (length(new_data$user_ids) == 0) {
    new_data$user_ids <- max(as.numeric(model$data$user_ids)) +
      seq(from = 1, to = nrow(new_data$rankings), by = 1)
  }
  ret <- run_smc(
    data = model$data,
    new_data = list(new_data),
    model_options = model$model_options,
    smc_options = model$smc_options,
    compute_options = model$compute_options,
    priors = model$priors,
    initial_values = list(
      alpha_init = model$alpha_samples,
      rho_init = model$rho_samples,
      aug_init = model$augmented_rankings
    ),
    pfun_values = model$pfun_values,
    pfun_estimate = model$pfun_estimate
  )

  model$acceptance_ratios <- ret$acceptance_ratios
  model$alpha_samples <- ret$alpha_samples[, 1]
  model$rho_samples <- ret$rho_samples[, , 1]
  # Store augmented rankings only when the returned array is non-empty.
  model$augmented_rankings <- if (prod(dim(ret$augmented_rankings)) == 0) {
    NULL
  } else {
    ret$augmented_rankings
  }
  tidy_parameters <- tidy_smc(ret, model$items)
  model$alpha <- tidy_parameters$alpha
  model$rho <- tidy_parameters$rho

  items <- model$data$items
  new_constraints <- c(model$data$constraints, new_data$constraints)
  model$data <- ret$data
  model$data$constraints <- new_constraints
  model$data$items <- items

  class(model) <- c("SMCMallows", "BayesMallows")
  model
}
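# Illustrative end-to-end sketch (mirroring the package vignettes): an initial
# Metropolis-Hastings fit on one batch of complete rankings, followed by an
# SMC update when a second batch arrives.
#
#   mod <- compute_mallows(data = setup_rank_data(sushi_rankings[1:300, ]))
#   burnin(mod) <- 300
#   mod <- update_mallows(
#     model = mod,
#     new_data = setup_rank_data(sushi_rankings[301:600, ])
#   )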
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/update_mallows.R
validate_class <- function(argument, class) { if (!inherits(argument, class)) { stop(paste0( deparse(substitute(argument)), " must be an object of class ", class, "." )) } } validate_integer <- function(argument) { if (!is.numeric(argument) || argument < 0 || (round(argument) != argument)) { stop(paste(deparse(substitute(argument)), "must be a positive integer")) } } validate_positive <- function(argument) { if (argument <= 0 || !is.numeric(argument)) { stop(paste( deparse(substitute(argument)), "must be a strictly positive number of length one" )) } } validate_positive_vector <- function(argument) { if (any(argument <= 0) || !is.numeric(argument)) { stop(paste( deparse(substitute(argument)), "must be a vector of strictly positive numbers" )) } } validate_logical <- function(argument) { if (!is.logical(argument) || length(argument) != 1) { stop(paste( deparse(substitute(argument)), "must be a logical value of length one" )) } } check_larger <- function(larger, smaller) { if (larger <= smaller) { stop(paste( deparse(substitute(larger)), "must be strictly larger than", deparse(substitute(smaller)) )) } } validate_preferences <- function(data, model) { if (inherits(data$preferences, "BayesMallowsIntransitive") && model$error_model == "none") { stop("Intransitive pairwise comparisons. Please specify an error model.") } } validate_rankings <- function(data) { if (nrow(data$rankings) <= 0) stop("Data must have at least one row.") } validate_initial_values <- function(initial_values, data) { if (!is.null(initial_values$rho)) { if (length(unique(initial_values$rho)) != length(initial_values$rho)) { stop("initial value rho must be a ranking") } if (length(initial_values$rho) != data$n_items) { stop("initial value for rho must have one value per item") } } }
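# Illustrative behavior of the validators above; the error message reports the
# name of the offending argument via deparse(substitute(...)):
#
#   n <- -1
#   validate_integer(n)
#   #> Error in validate_integer(n) : n must be a positive integer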
/scratch/gouwar.j/cran-all/cranData/BayesMallows/R/validation_functions.R
---
title: "Introduction"
output:
  rmarkdown::html_vignette:
    fig_width: 6
    fig_height: 4
bibliography: ../inst/REFERENCES.bib
link-citations: yes
vignette: >
  %\VignetteIndexEntry{Introduction}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

This vignette contains updated syntax for the code examples in @sorensen2020, since both the underlying code and the user interface are continuously evolving. We refer to @sorensen2020 for notation and all other details about the models and algorithms.

```r
library(BayesMallows)
set.seed(123)
```

# Analysis of complete rankings

We illustrate the case of complete rankings with the potato datasets described in Section 4 of @liu2019. In short, a bag of 20 potatoes was bought, and 12 assessors were asked to rank the potatoes by weight, first by visual inspection, and next by holding the potatoes in hand. These datasets are available in `BayesMallows` as matrices with names `potato_weighing` and `potato_visual`, respectively. The true ranking of the potatoes' weights is available in the vector `potato_true_ranking`. In general, `compute_mallows` expects ranking datasets to have one row for each assessor and one column for each item. Each row has to be a proper permutation, possibly with missing values. We are interested in the posterior distribution both of the level of agreement between assessors, as described by $\alpha$, and of the latent ranking of the potatoes, as described by $\boldsymbol{\rho}$. We refer to the attached replication script for random number seeds for exact reproducibility.

We start by defining our data object, which in this case consists of complete rankings.

```r
complete_data <- setup_rank_data(rankings = potato_visual)
```

First, we do a test run to check convergence of the MCMC algorithm, and then get trace plots with `assess_convergence`.

```r
bmm_test <- compute_mallows(data = complete_data)
assess_convergence(bmm_test)
```

![Trace plot for scale parameter.](complete_data_diagnostic_alpha-1.png)

By default, `assess_convergence` returns a trace plot for $\alpha$, shown in the figure above. The algorithm seems to be mixing well after around 500 iterations. Next, we study the convergence of $\boldsymbol{\rho}$. To avoid overly complex plots, we pick potatoes $1-5$ by specifying this in the `items` argument.

```r
assess_convergence(bmm_test, parameter = "rho", items = 1:5)
```

![Trace plot for modal ranking.](complete_data_diagnostic_rho-1.png)

The plot shows that the MCMC algorithm seems to have converged after around 1,000 iterations.

From the trace plots, we decide to discard the first 1,000 MCMC samples as burn-in. We rerun the algorithm to get 20,000 samples after burn-in. The object `bmm_visual` has `S3` class `BayesMallows`, so we plot the posterior distribution of $\alpha$ with `plot.BayesMallows`.

```r
bmm_visual <- compute_mallows(
  data = complete_data,
  compute_options = set_compute_options(nmc = 21000, burnin = 1000)
)
plot(bmm_visual)
```

![Posterior for scale parameter.](complete_data_model-1.png)

We can also get posterior credible intervals for $\alpha$ using `compute_posterior_intervals`, which returns both highest posterior density intervals (HPDI) and central intervals in a `data.frame`.

```r
compute_posterior_intervals(bmm_visual, decimals = 1L)
#>   parameter mean median       hpdi central_interval
#> 1     alpha 10.9   10.9 [9.5,12.3]       [9.5,12.3]
```

Next, we can go on to study the posterior distribution of $\boldsymbol{\rho}$.
If the `items` argument is not provided, and the number of items exceeds five, five items are picked at random for plotting. To show all potatoes, we explicitly set `items = 1:20`.

```r
plot(bmm_visual, parameter = "rho", items = 1:20)
```

![Posterior for modal ranking.](complete_data_posterior_rho-1.png)

## Jumping over the scale parameter

Updating $\alpha$ in every step of the MCMC algorithm may not be necessary, as the number of posterior samples typically is more than large enough to obtain good estimates of its posterior distribution. With the `alpha_jump` argument, we can tell the MCMC algorithm to update $\alpha$ only every `alpha_jump`-th iteration. To update $\alpha$ every 10th time $\boldsymbol{\rho}$ is updated, we do

```r
bmm_visual <- compute_mallows(
  data = complete_data,
  compute_options = set_compute_options(nmc = 21000, burnin = 1000, alpha_jump = 10)
)
```

## Other distance metrics

By default, `compute_mallows` uses the footrule distance, but the user can also choose to use Cayley, Kendall, Hamming, Spearman, or Ulam distance. Running the same analysis of the potato data with Spearman distance is done with the command

```r
bmm <- compute_mallows(
  data = complete_data,
  model_options = set_model_options(metric = "spearman"),
  compute_options = set_compute_options(nmc = 21000, burnin = 1000)
)
```

For the particular case of Spearman distance, `BayesMallows` only has integer sequences for computing the exact partition function with 14 or fewer items. For a larger number of items, a precomputed importance sampling estimate shipped with the package is used instead.

# Analysis of preference data

Unless the `error_model` argument to `set_model_options` is set to `"bernoulli"`, pairwise preference data are assumed to be consistent within each assessor. These data should be provided in a dataframe with the following three columns, with one row per pairwise comparison:

* `assessor` is an identifier for the assessor; either a numeric vector containing the assessor index, or a character vector containing the unique name of the assessor.

* `bottom_item` is a numeric vector containing the index of the item that was disfavored in each pairwise comparison.

* `top_item` is a numeric vector containing the index of the item that was preferred in each pairwise comparison.

A dataframe with this structure can be given in the `preferences` argument to `setup_rank_data`, which will generate the full set of implied rankings for each assessor as well as an initial ranking matrix consistent with the pairwise preferences.

We illustrate with the beach preference data containing stated pairwise preferences between random subsets of 15 images of beaches, by 60 assessors [@vitelli2018]. This dataset is provided in the dataframe `beach_preferences`, whose first six rows are shown below:

```r
head(beach_preferences)
#>   assessor bottom_item top_item
#> 1        1           2       15
#> 2        1           5        3
#> 3        1          13        3
#> 4        1           4        7
#> 5        1           5       15
#> 6        1          12        6
```

We can define a rank data object based on these preferences.

```r
beach_data <- setup_rank_data(preferences = beach_preferences)
```

It is instructive to compare the computed transitive closure to the stated preferences. Let's do this for all preferences stated by assessor 1 involving beach 2. We first look at the raw preferences.
```r subset(beach_preferences, assessor == 1 & (bottom_item == 2 | top_item == 2)) #> assessor bottom_item top_item #> 1 1 2 15 ``` We then use the function `get_transitive_closure` to obtain the transitive closure, and then focus on the same subset: ```r tc <- get_transitive_closure(beach_data) subset(tc, assessor == 1 & (bottom_item == 2 | top_item == 2)) #> assessor bottom_item top_item #> 11 1 2 6 #> 44 1 2 15 ``` Assessor 1 has performed only one direct comparison involving beach 2, in which the assessor stated that beach 15 is preferred to beach 2. The implied orderings, on the other hand, contain two preferences involving beach 2. In addition to the statement that beach 15 is preferred to beach 2, all the other orderings stated by assessor 1 imply that this assessor prefers beach 6 to beach 2. ## Convergence diagnostics As with the potato data, we can do a test run to assess the convergence of the MCMC algorithm. This time we use the `beach_data` object that we generated above, based on the stated preferences. We also set `save_aug = TRUE` to save the augmented rankings in each MCMC step, hence letting us assess the convergence of the augmented rankings. ```r bmm_test <- compute_mallows( data = beach_data, compute_options = set_compute_options(save_aug = TRUE)) ``` Running `assess_convergence` for $\alpha$ and $\boldsymbol{\rho}$ shows good convergence after 1000 iterations. ```r assess_convergence(bmm_test) ``` ![Trace plot for scale parameter.](preferences_alpha_trace-1.png) ```r assess_convergence(bmm_test, parameter = "rho", items = 1:6) ``` ![Trace plot for modal ranking.](preferences_rho_trace-1.png) To check the convergence of the data augmentation scheme, we need to set `parameter = "Rtilde"`, and also specify which items and assessors to plot. Let us start by considering items 2, 6, and 15 for assessor 1, which we studied above. ```r assess_convergence( bmm_test, parameter = "Rtilde", items = c(2, 6, 15), assessors = 1) ``` ![Trace plot for augmented rankings.](preferences_augmented_rankings-1.png) The convergence plot illustrates how the augmented rankings vary, while also obeying their implied ordering. By further investigation of the transitive closure, we find that no orderings are implied between beach 1 and beach 15 for assessor 2. That is, the following statement returns zero rows. ```r subset(tc, assessor == 2 & bottom_item %in% c(1, 15) & top_item %in% c(1, 15)) #> [1] assessor bottom_item top_item #> <0 rows> (or 0-length row.names) ``` With the following command, we create trace plots to confirm this: ```r assess_convergence( bmm_test, parameter = "Rtilde", items = c(1, 15), assessors = 2) ``` ![Trace plot for augmented rankings where items have not been compared.](preferences_augmented_rankings_free-1.png) As expected, the traces of the augmented rankings for beach 1 and 15 for assessor 2 do cross each other, since no ordering is implied between them. Ideally, we should look at trace plots for augmented ranks for more assessors to be sure that the algorithm is close to convergence. We can plot assessors 1-8 by setting `assessors = 1:8`. We also quite arbitrarily pick items 13-15, but the same procedure can be repeated for other items. ```r assess_convergence( bmm_test, parameter = "Rtilde", items = 13:15, assessors = 1:8) ``` ![Trace plots for items 13-15 and assessors 1-8.](preferences_augmented_rankings_many-1.png) The plot indicates good mixing. 
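To screen the remaining assessors systematically, the same check can be scripted. Below is a minimal sketch, assuming the 60 assessors of the beach data, which prints one grid of trace plots per block of eight assessors:

```r
for (group in split(1:60, ceiling(1:60 / 8))) {
  print(assess_convergence(
    bmm_test, parameter = "Rtilde", items = 13:15, assessors = group
  ))
}
```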
## Posterior distributions

Based on the convergence diagnostics, and being fairly conservative, we discard the first 2,000 MCMC iterations as burn-in, and take 20,000 additional samples.

```r
bmm_beaches <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(nmc = 22000, burnin = 2000, save_aug = TRUE)
)
```

The posterior distributions of $\alpha$ and $\boldsymbol{\rho}$ can be studied as shown in the previous sections. Posterior intervals for the latent rankings of each beach are obtained with `compute_posterior_intervals`:

```r
compute_posterior_intervals(bmm_beaches, parameter = "rho")
#>    parameter    item mean median    hpdi central_interval
#> 1        rho  Item 1    7      7     [7]              [7]
#> 2        rho  Item 2   15     15    [15]             [15]
#> 3        rho  Item 3    3      3   [3,4]            [3,4]
#> 4        rho  Item 4   12     12 [11,13]          [11,14]
#> 5        rho  Item 5    9      9  [8,10]           [8,10]
#> 6        rho  Item 6    2      2   [1,2]            [1,2]
#> 7        rho  Item 7    8      8   [8,9]           [8,10]
#> 8        rho  Item 8   12     12 [11,13]          [11,14]
#> 9        rho  Item 9    1      1   [1,2]            [1,2]
#> 10       rho Item 10    6      6   [5,6]            [5,6]
#> 11       rho Item 11    4      4   [3,4]            [3,5]
#> 12       rho Item 12   13     13 [12,14]          [12,14]
#> 13       rho Item 13   10     10  [9,10]           [9,10]
#> 14       rho Item 14   13     14 [11,14]          [11,14]
#> 15       rho Item 15    5      5   [4,5]            [4,6]
```

We can also rank the beaches according to their cumulative probability (CP) consensus [@vitelli2018] and their maximum posterior (MAP) rankings. This is done with the function `compute_consensus`, and the following call returns the CP consensus:

```r
compute_consensus(bmm_beaches, type = "CP")
#>      cluster ranking    item cumprob
#> 1  Cluster 1       1  Item 9 0.89815
#> 2  Cluster 1       2  Item 6 1.00000
#> 3  Cluster 1       3  Item 3 0.72665
#> 4  Cluster 1       4 Item 11 0.95160
#> 5  Cluster 1       5 Item 15 0.95400
#> 6  Cluster 1       6 Item 10 0.97645
#> 7  Cluster 1       7  Item 1 1.00000
#> 8  Cluster 1       8  Item 7 0.62585
#> 9  Cluster 1       9  Item 5 0.85950
#> 10 Cluster 1      10 Item 13 1.00000
#> 11 Cluster 1      11  Item 4 0.46870
#> 12 Cluster 1      12  Item 8 0.84435
#> 13 Cluster 1      13 Item 12 0.61905
#> 14 Cluster 1      14 Item 14 0.99665
#> 15 Cluster 1      15  Item 2 1.00000
```

The column `cumprob` shows the probability of having the given rank or lower. Looking at the second row, for example, this means that beach 6 has probability 1 of having latent rank $\rho_{6} \leq 2$. Next, beach 3 has probability 0.727 of having latent rank $\rho_{3}\leq 3$. This is an example of how the Bayesian framework can be used not only to rank items, but also to give posterior assessments of the uncertainty of the rankings. The MAP consensus is obtained similarly, by setting `type = "MAP"`.

```r
compute_consensus(bmm_beaches, type = "MAP")
#>      cluster map_ranking    item probability
#> 1  Cluster 1           1  Item 9     0.04955
#> 2  Cluster 1           2  Item 6     0.04955
#> 3  Cluster 1           3  Item 3     0.04955
#> 4  Cluster 1           4 Item 11     0.04955
#> 5  Cluster 1           5 Item 15     0.04955
#> 6  Cluster 1           6 Item 10     0.04955
#> 7  Cluster 1           7  Item 1     0.04955
#> 8  Cluster 1           8  Item 7     0.04955
#> 9  Cluster 1           9  Item 5     0.04955
#> 10 Cluster 1          10 Item 13     0.04955
#> 11 Cluster 1          11  Item 4     0.04955
#> 12 Cluster 1          12  Item 8     0.04955
#> 13 Cluster 1          13 Item 14     0.04955
#> 14 Cluster 1          14 Item 12     0.04955
#> 15 Cluster 1          15  Item 2     0.04955
```

Keeping in mind that the ranking of beaches is based on sparse pairwise preferences, we can also ask: for beach $i$, what is the probability of being ranked top-$k$ by assessor $j$, and what is the probability of having latent rank among the top-$k$? The function `plot_top_k` plots these probabilities.
By default, it sets `k = 3`, so a heatplot of the probability of being ranked top-3 is obtained with the call: ```r plot_top_k(bmm_beaches) ``` ![Top-3 rankings for beach preferences.](preferences_top_k-1.png) The plot shows, for each beach as indicated on the left axis, the probability that assessor $j$ ranks the beach among top-3. For example, we see that assessor 1 has a very low probability of ranking beach 9 among her top-3, while assessor 3 has a very high probability of doing this. The function `predict_top_k` returns a dataframe with all the underlying probabilities. For example, in order to find all the beaches that are among the top-3 of assessors 1-5 with more than 90 \% probability, we would do: ```r subset(predict_top_k(bmm_beaches), prob > .9 & assessor %in% 1:5) #> assessor item prob #> 301 1 Item 6 0.99435 #> 303 3 Item 6 0.99600 #> 305 5 Item 6 0.97605 #> 483 3 Item 9 1.00000 #> 484 4 Item 9 0.99975 #> 601 1 Item 11 0.95030 ``` Note that assessor 2 does not appear in this table, i.e., there are no beaches for which we are at least 90 \% certain that the beach is among assessor 2's top-3. # Clustering `BayesMallows` comes with a set of sushi preference data, in which 5,000 assessors each have ranked a set of 10 types of sushi [@kamishima2003]. It is interesting to see if we can find subsets of assessors with similar preferences. The sushi dataset was analyzed with the BMM by @vitelli2018, but the results in that paper differ somewhat from those obtained here, due to a bug in the function that was used to sample cluster probabilities from the Dirichlet distribution. We start by defining the data object. ```r sushi_data <- setup_rank_data(sushi_rankings) ``` ## Convergence diagnostics The function `compute_mallows_mixtures` computes multiple Mallows models with different numbers of mixture components. It returns a list of models of class `BayesMallowsMixtures`, in which each list element contains a model with a given number of mixture components. Its arguments are `n_clusters`, which specifies the number of mixture components to compute, an optional parameter `cl` which can be set to the return value of the `makeCluster` function in the `parallel` package, and an ellipsis (`...`) for passing on arguments to `compute_mallows`. Hypothesizing that we may not need more than 10 clusters to find a useful partitioning of the assessors, we start by doing test runs with 1, 4, 7, and 10 mixture components in order to assess convergence. We set the number of Monte Carlo samples to 5,000, and since this is a test run, we do not save within-cluster distances from each MCMC iteration and hence set `include_wcd = FALSE`. ```r library("parallel") cl <- makeCluster(detectCores()) bmm <- compute_mallows_mixtures( n_clusters = c(1, 4, 7, 10), data = sushi_data, compute_options = set_compute_options(nmc = 5000, include_wcd = FALSE), cl = cl) stopCluster(cl) ``` The function `assess_convergence` automatically creates a grid plot when given an object of class `BayesMallowsMixtures`, so we can check the convergence of $\alpha$ with the command ```r assess_convergence(bmm) ``` ![Trace plots for scale parameters.](cluster_trace_alpha-1.png) The resulting plot shows that all the chains seem to be close to convergence quite quickly. We can also make sure that the posterior distributions of the cluster probabilities $\tau_{c}$, $(c = 1, \dots, C)$ have converged properly, by setting `parameter = "cluster_probs"`. 
```r
assess_convergence(bmm, parameter = "cluster_probs")
```

![Trace plots for cluster assignment probabilities.](cluster_trace_probs-1.png)

Note that with only one cluster, the cluster probability is fixed at the value 1, while for the other numbers of mixture components, the chains seem to be mixing well.

## Deciding on the number of mixture components

Given the convergence assessment of the previous section, we are fairly confident that a burn-in of 1,000 is sufficient. We run 10,000 additional iterations, and try from 1 to 10 mixture components. Our goal is now to determine the number of mixture components to use, and in order to create an elbow plot, we set `include_wcd = TRUE` to compute the within-cluster distances in each step of the MCMC algorithm. Since the posterior distributions of $\rho_{c}$ ($c = 1,\dots,C$) are highly peaked, we save some memory by only saving every 10th value of $\boldsymbol{\rho}$ by setting `rho_thinning = 10`.

```r
cl <- makeCluster(detectCores())
bmm <- compute_mallows_mixtures(
  n_clusters = 1:10, data = sushi_data,
  compute_options = set_compute_options(
    nmc = 11000, burnin = 1000, rho_thinning = 10, include_wcd = TRUE),
  cl = cl)
stopCluster(cl)
```

We then create an elbow plot:

```r
plot_elbow(bmm)
```

![Elbow plot for deciding on the number of mixture components.](cluster_elbow-1.png)

Although not clear-cut, we see that the within-cluster sum of distances levels off at around 5 clusters, and hence we choose to use 5 clusters in our model.

## Posterior distributions

Having chosen 5 mixture components, we go on to fit a final model, still running 10,000 iterations after burn-in. This time we call `compute_mallows` and set `n_clusters = 5`. We also set `clus_thinning = 10` to save the cluster assignments of each assessor in every 10th iteration, and `rho_thinning = 10` to save the estimated latent rank every 10th iteration. Note that thinning is done only because saving the values at every iteration would result in very large objects being stored in memory, thus slowing down computation. For statistical efficiency, it is best to avoid thinning.

```r
bmm <- compute_mallows(
  data = sushi_data,
  model_options = set_model_options(n_clusters = 5),
  compute_options = set_compute_options(
    nmc = 11000, burnin = 1000, clus_thinning = 10, rho_thinning = 10)
)
```

We can plot the posterior distributions of $\alpha$ and $\boldsymbol{\rho}$ in each cluster using `plot.BayesMallows` as shown previously for the potato data.

```r
plot(bmm)
```

![Posterior of scale parameter in each cluster.](cluster_posterior_alpha-1.png)

Since there are five clusters, the easiest way of visualizing posterior rankings is by choosing a single item.

```r
plot(bmm, parameter = "rho", items = 1)
```

![Posterior of item 1 in each cluster.](cluster_posterior_rho-1.png)

We can also show the posterior distributions of the cluster probabilities.

```r
plot(bmm, parameter = "cluster_probs")
```

![Posterior for cluster probabilities.](cluster_probs_posterior-1.png)

Using the argument `parameter = "cluster_assignment"`, we can visualize the posterior probability for each assessor of belonging to each cluster:

```r
plot(bmm, parameter = "cluster_assignment")
```

![Posterior for cluster assignment.](cluster_assignment_posterior-1.png)

The numbers underlying the plot can be found using `assign_cluster`. We can find clusterwise consensus rankings using `compute_consensus`.
```r cp_consensus <- compute_consensus(bmm) reshape( cp_consensus, direction = "wide", idvar = "ranking", timevar = "cluster", varying = list(unique(cp_consensus$cluster)), drop = "cumprob" ) #> ranking Cluster 1 Cluster 2 Cluster 3 Cluster 4 Cluster 5 #> 1 1 shrimp fatty tuna fatty tuna sea urchin fatty tuna #> 2 2 sea eel tuna salmon roe fatty tuna sea urchin #> 3 3 squid sea eel sea urchin salmon roe tuna #> 4 4 egg shrimp tuna sea eel salmon roe #> 5 5 fatty tuna tuna roll shrimp shrimp sea eel #> 6 6 tuna squid tuna roll tuna tuna roll #> 7 7 tuna roll egg squid squid shrimp #> 8 8 cucumber roll cucumber roll sea eel tuna roll squid #> 9 9 salmon roe salmon roe egg egg egg #> 10 10 sea urchin sea urchin cucumber roll cucumber roll cucumber roll ``` Note that for estimating cluster specific parameters, label switching is a potential problem that needs to be handled. `BayesMallows` ignores label switching issues inside the MCMC, because it has been shown that this approach is better for ensuring full convergence of the chain [@jasra2005;@celeux2000]. MCMC iterations can be re-ordered after convergence is achieved, for example by using the implementation of Stephens' algorithm [@Stephens2000] provided by the R package `label.switching` [@papastamoulis2016]. A full example of how to assess label switching is provided in the examples for the `compute_mallows` function. # References
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/doc/BayesMallows.Rmd
---
title: "Sequential Monte Carlo for the Bayesian Mallows model"
output: rmarkdown::html_vignette
pkgdown:
  as_is: true
bibliography: ../inst/REFERENCES.bib
link-citations: yes
vignette: >
  %\VignetteIndexEntry{Sequential Monte Carlo for the Bayesian Mallows model}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```r
library(BayesMallows)
library(ggplot2)
set.seed(123)
```

This vignette describes sequential Monte Carlo (SMC) algorithms to provide updated approximations to the posterior distribution of a single Mallows model. We consider scenarios where we receive sequential information in the form of complete rankings, partial rankings, and updated rankings from existing individuals who have previously provided a (partial) ranking. This vignette focuses on the code. For an in-depth treatment of the implemented methodology, see @steinSequentialInferenceMallows2023, which is available <a href="https://eprints.lancs.ac.uk/id/eprint/195759/" target="_blank">here</a>.

## New users with complete rankings

We use the `sushi_rankings` dataset to illustrate the methodology [@kamishima2003nantonac]. This dataset contains 5000 complete rankings for 10 sushi dishes.

```r
head(sushi_rankings)
#>      shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> [1,]      2       8   10     3          4          1   5          9         7             6
#> [2,]      1       8    6     4         10          9   3          5         7             2
#> [3,]      2       8    3     4          6          7  10          1         5             9
#> [4,]      4       7    5     6          1          2   8          3         9            10
#> [5,]      4      10    7     5          9          3   2          8         1             6
#> [6,]      4       6    2    10          7          5   1          9         8             3
```

The SMC methodology is designed for the case where data arrive in batches. Assume that we initially have only 300 observed rankings, in `data_batch1`:

```r
data_batch1 <- sushi_rankings[1:300, ]
```

We estimate a model on these data using `compute_mallows()`, which runs a full Metropolis-Hastings algorithm.

```r
model1 <- compute_mallows(data = setup_rank_data(data_batch1))
```

We assess convergence, and find that 300 is an appropriate burn-in value.

```r
assess_convergence(model1)
```

<div class="figure">
<img src="convergence_smc_full-1.png" alt="Trace plot for SMC model." height="4cm" />
<p class="caption">Trace plot for SMC model.</p>
</div>

```r
burnin(model1) <- 300
```

Having saved this model, assume we receive another batch of preferences at a later timepoint, with an additional 300 rankings.

```r
data_batch2 <- sushi_rankings[301:600, ]
```

We can now update the initial model, without rerunning the full Metropolis-Hastings algorithm, by calling `update_mallows()`. This function uses the sequential Monte Carlo algorithm of @steinSequentialInferenceMallows2023, and extracts a thinned sample of size `n_particles` from `model1` as initial values.

```r
model2 <- update_mallows(
  model = model1,
  new_data = setup_rank_data(data_batch2),
  smc_options = set_smc_options(n_particles = 1000))
```

All the posterior summary methods can be used for `model2`. For example, we can plot the posterior of $\alpha$.

```r
plot(model2)
```

<div class="figure">
<img src="smc_complete_model2_alpha-1.png" alt="Posterior distribution of scale parameter for model 2." height="4cm" />
<p class="caption">Posterior distribution of scale parameter for model 2.</p>
</div>

And we can plot the posterior of the latent ranks of selected items:

```r
plot(model2, parameter = "rho", items = c("shrimp", "sea eel", "tuna"))
```

<div class="figure">
<img src="smc_complete_model2-1.png" alt="Posterior distribution of selected latent rankings for model 2."
height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 2.</p> </div> Next, assume we get yet another set of rankings later, now of size 1000. ```r data_batch3 <- sushi_rankings[601:1600, ] ``` We can re-update the model. ```r model3 <- update_mallows(model2, new_data = setup_rank_data(data_batch3)) ``` We can again plot posterior quantities, and the plots reveal that as expected, the posterior uncertainty about the rankings has decreased once we added more data. ```r plot(model3, parameter = "rho", items = c("shrimp", "sea eel", "tuna")) ``` <div class="figure"> <img src="smc_complete_model3-1.png" alt="Posterior distribution of selected latent rankings for model 3." height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 3.</p> </div> Finally, we add a batch with the last data and re-update the model. ```r data_batch4 <- sushi_rankings[1601:5000, ] model4 <- update_mallows( model3, new_data = setup_rank_data(rankings = data_batch4)) ``` The posterior uncertainty is now very small: ```r plot(model4, parameter = "rho", items = c("shrimp", "sea eel", "tuna")) ``` <div class="figure"> <img src="smc_complete_model4_rho-1.png" alt="Posterior distribution of selected latent rankings for model 4." height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 4.</p> </div> Below is a comparison of the posterior intervals of the dispersion parameter for each model. Note how the intervals get increasingly narrower as more data is added. ```r rbind( compute_posterior_intervals(model1), compute_posterior_intervals(model2), compute_posterior_intervals(model3), compute_posterior_intervals(model4) ) #> parameter mean median hpdi central_interval #> 1 alpha 1.768 1.766 [1.603,1.917] [1.604,1.922] #> 2 alpha 1.777 1.773 [1.630,1.935] [1.620,1.931] #> 3 alpha 1.753 1.756 [1.676,1.827] [1.677,1.827] #> 4 alpha 1.712 1.714 [1.667,1.748] [1.669,1.752] ``` As an assurance that the implementation is correct, we can compare the final model to what we get by running `compute_mallows` on the complete dataset: ```r mod_bmm <- compute_mallows( data = setup_rank_data(rankings = sushi_rankings), compute_options = set_compute_options(nmc = 5000, burnin = 1000) ) ``` We can compare the posteriors for $\alpha$ of the two models. Note that although both are rather wiggly, they agree very well about location and scale. ```r plot(mod_bmm) ``` <div class="figure"> <img src="smc_complete_mod_bmm-1.png" alt="Posterior distribution of scale parameter for Metropolis-Hastings run on the complete data." height="3cm" /> <p class="caption">Posterior distribution of scale parameter for Metropolis-Hastings run on the complete data.</p> </div> ```r plot(model4) ``` <div class="figure"> <img src="smc_complete_model4_alpha-1.png" alt="Posterior distribution of scale parameter for model 4." height="3cm" /> <p class="caption">Posterior distribution of scale parameter for model 4.</p> </div> The posterior intervals are also in good agreement. 
```r
rbind(
  compute_posterior_intervals(mod_bmm),
  compute_posterior_intervals(model4)
)
#>   parameter  mean median          hpdi central_interval
#> 1     alpha 1.691  1.690 [1.648,1.734]    [1.643,1.732]
#> 2     alpha 1.712  1.714 [1.667,1.748]    [1.669,1.752]
```

The cumulative probability consensus is in reasonably good agreement too, although the two models place a few items, most notably sea urchin, quite differently:

```r
compute_consensus(model4)
#>      cluster ranking          item cumprob
#> 1  Cluster 1       1    fatty tuna   1.000
#> 2  Cluster 1       2    salmon roe   1.000
#> 3  Cluster 1       3          tuna   1.000
#> 4  Cluster 1       4        shrimp   1.000
#> 5  Cluster 1       5       sea eel   1.000
#> 6  Cluster 1       6     tuna roll   0.835
#> 7  Cluster 1       7         squid   1.000
#> 8  Cluster 1       8    sea urchin   1.000
#> 9  Cluster 1       9           egg   1.000
#> 10 Cluster 1      10 cucumber roll   1.000
compute_consensus(mod_bmm)
#>      cluster ranking          item cumprob
#> 1  Cluster 1       1    fatty tuna       1
#> 2  Cluster 1       2    sea urchin       1
#> 3  Cluster 1       3          tuna       1
#> 4  Cluster 1       4    salmon roe       1
#> 5  Cluster 1       5        shrimp       1
#> 6  Cluster 1       6       sea eel       1
#> 7  Cluster 1       7     tuna roll       1
#> 8  Cluster 1       8         squid       1
#> 9  Cluster 1       9           egg       1
#> 10 Cluster 1      10 cucumber roll       1
```

## New users with partial or complete rankings

The functionality extends directly to partial ranks, including both top-$k$ rankings and rankings missing at random. Pairwise preferences are also supported, although not demonstrated here. For this demonstration we shall assume that we can only observe the top-5 ranked items from each user in the `sushi_rankings` dataset.

```r
data_partial <- sushi_rankings
data_partial[data_partial > 5] <- NA
head(data_partial)
#>      shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> [1,]      2      NA   NA     3          4          1   5         NA        NA            NA
#> [2,]      1      NA   NA     4         NA         NA   3          5        NA             2
#> [3,]      2      NA    3     4         NA         NA  NA          1         5            NA
#> [4,]      4      NA    5    NA          1          2  NA          3        NA            NA
#> [5,]      4      NA   NA     5         NA          3   2         NA         1            NA
#> [6,]      4      NA    2    NA         NA          5   1         NA        NA             3
```

Again, assume we start out with a batch of data, this time with 100 rankings:

```r
data_batch1 <- data_partial[1:100, ]
```

We estimate this model using `compute_mallows()`. Since there are `NA`s in the data, `compute_mallows()` will run imputation over the missing ranks.

```r
model1 <- compute_mallows(
  data = setup_rank_data(data_batch1),
  compute_options = set_compute_options(nmc = 10000)
)
```

The trace plot shows that convergence is reached quickly.

```r
assess_convergence(model1)
```

<div class="figure">
<img src="convergence_smc_partial-1.png" alt="Trace plot for SMC model." height="4cm" />
<p class="caption">Trace plot for SMC model.</p>
</div>

We set the burn-in to 300.

```r
burnin(model1) <- 300
```

Below is the posterior for $\alpha$ after this initial run:

```r
plot(model1)
```

<div class="figure">
<img src="smc_init_posterior_partial-1.png" alt="Posterior distribution of scale parameter after initial run." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after initial run.</p>
</div>

Next, assume we receive 100 more top-5 rankings:

```r
data_batch2 <- data_partial[101:200, ]
```

We now update the initial model, using SMC. By default, a uniform distribution is used to propose new values of augmented ranks. The pseudo-likelihood proposal developed in @steinSequentialInferenceMallows2023 can be used instead, by setting `aug_method = "pseudo"` in the call to `set_compute_options()`, and we do this here.
```r
model2 <- update_mallows(
  model = model1,
  new_data = setup_rank_data(data_batch2),
  smc_options = set_smc_options(n_particles = 1000),
  compute_options = set_compute_options(
    aug_method = "pseudo", pseudo_aug_metric = "footrule")
)
```

Below is the posterior for $\alpha$:

```r
plot(model2)
```

<div class="figure">
<img src="smc_updated_posterior_partial-1.png" alt="Posterior distribution of scale parameter after updating the model based on new rankings." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after updating the model based on new rankings.</p>
</div>

When even more data arrive, we can update the model again. For example, assume we now get a set of complete rankings, with no missingness:

```r
data_batch3 <- sushi_rankings[201:300, ]
```

We update the model just as before:

```r
model3 <- update_mallows(model2, new_data = setup_rank_data(data_batch3))
```

```r
plot(model3)
```

<div class="figure">
<img src="smc_second_updated_posterior_partial-1.png" alt="Posterior distribution of scale parameter after updating the model based on new rankings." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after updating the model based on new rankings.</p>
</div>

## Users updating their rankings

Another setting supported is when existing users update their partial rankings. For example, users can initially give top-5 rankings, and subsequently update these to top-10 rankings, top-20 rankings, etc. Another setting is when there are ranks missing at random, and the users subsequently provide these rankings. The main methodological issue in this case is that the augmented rankings at the previous SMC timepoint may be in conflict with the new rankings. In this case, the augmented rankings must be corrected, as described in Chapter 6 of @steinSequentialInferenceMallows2023.

We provide an example again with the sushi data. We assume that the initial batch of data contains top-3 rankings provided by the first 100 users.

```r
set.seed(123)
sushi_reduced <- sushi_rankings[1:100, ]
data_batch1 <- ifelse(sushi_reduced > 3, NA_real_, sushi_reduced)
```

To keep track of existing users updating their preferences, we also need a user ID in this case, which is required to be a numeric vector.

```r
rownames(data_batch1) <- seq_len(nrow(data_batch1))
head(data_batch1)
#>   shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> 1      2      NA   NA     3         NA          1  NA         NA        NA            NA
#> 2      1      NA   NA    NA         NA         NA   3         NA        NA             2
#> 3      2      NA    3    NA         NA         NA  NA          1        NA            NA
#> 4     NA      NA   NA    NA          1          2  NA          3        NA            NA
#> 5     NA      NA   NA    NA         NA          3   2         NA         1            NA
#> 6     NA      NA    2    NA         NA         NA   1         NA        NA             3
```

We fit the standard Metropolis-Hastings algorithm to these data, yielding a starting point.

```r
mod_init <- compute_mallows(
  data = setup_rank_data(
    rankings = data_batch1,
    user_ids = as.numeric(rownames(data_batch1)))
)
```

Convergence seems to be quick, and we set the burnin to 300.

```r
assess_convergence(mod_init)
```

<div class="figure">
<img src="sushi_updated_batch1_burnin-1.png" alt="Trace plot for initial run on sushi batch 1." height="4cm" />
<p class="caption">Trace plot for initial run on sushi batch 1.</p>
</div>

```r
burnin(mod_init) <- 300
```

Next, assume we receive top-5 rankings from the same users. We now update the model using SMC.
```r
data_batch2 <- ifelse(sushi_reduced > 5, NA_real_, sushi_reduced)
rownames(data_batch2) <- seq_len(nrow(data_batch2))

model2 <- update_mallows(
  model = mod_init,
  new_data = setup_rank_data(
    rankings = data_batch2,
    user_ids = as.numeric(rownames(data_batch2))),
  compute_options = set_compute_options(
    aug_method = "pseudo", pseudo_aug_metric = "footrule")
)
```

We can plot the posterior distributions of $\alpha$ before and after.

```r
plot(mod_init) + ggtitle("Posterior of dispersion parameter after data batch 1")
```

<div class="figure">
<img src="sushi_updated_batch1_posterior-1.png" alt="Posterior after sushi batch 1." height="4cm" />
<p class="caption">Posterior after sushi batch 1.</p>
</div>

```r
plot(model2) + ggtitle("Posterior of dispersion parameter after data batch 2")
```

<div class="figure">
<img src="sushi_updated_batch2_posterior-1.png" alt="Posterior after sushi batch 2." height="4cm" />
<p class="caption">Posterior after sushi batch 2.</p>
</div>

Next, assume we receive top-8 rankings from the same users.

```r
data_batch3 <- ifelse(sushi_reduced > 8, NA_real_, sushi_reduced)
rownames(data_batch3) <- seq_len(nrow(data_batch3))
```

Before proceeding, it is instructive to study why this situation needs special care. Below are the augmented rankings for user 1 in particle 1:

```r
(v1 <- model2$augmented_rankings[, 1, 1])
#>  [1]  2  7 10  3  4  1  5  6  8  9
```

Next, we show the data provided by user 1 in `data_batch3`:

```r
(v2a <- unname(data_batch3[1, ]))
#>  [1]  2  8 NA  3  4  1  5 NA  7  6
```

By comparing the non-missing ranks, we can check if they are consistent or not:

```r
(v2b <- v2a[!is.na(v2a)])
#> [1] 2 8 3 4 1 5 7 6
v1[v1 %in% v2b]
#> [1] 2 7 3 4 1 5 6 8
all(v1[v1 %in% v2b] == v2b)
#> [1] FALSE
```

The provided data are not consistent with the augmented rankings in this case. This means that the augmented rankings for user 1 in particle 1 need to be corrected by the algorithm. Luckily, this happens automatically in our implementation, so we can update the model again.

```r
model3 <- update_mallows(
  model = model2,
  new_data = setup_rank_data(
    rankings = data_batch3,
    user_ids = as.numeric(rownames(data_batch3))))
```

Next we plot the posterior:

```r
plot(model3) + ggtitle("Posterior of dispersion parameter after data batch 3")
```

<div class="figure">
<img src="sushi_updated_batch3_posterior-1.png" alt="Posterior after sushi batch 3." height="4cm" />
<p class="caption">Posterior after sushi batch 3.</p>
</div>

Now assume we get a batch of new users, without missing ranks. These can be treated just like the other ones, but we need new user IDs.

```r
data_batch4 <- sushi_rankings[500:600, ]
rownames(data_batch4) <- 500:600
head(data_batch4)
#>     shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> 500      6       5    4     8          2          3   7          1         9            10
#> 501      3       9    5     8          4          2   6          1         7            10
#> 502      3       1    8     5          4          7   9          2         6            10
#> 503      8       6    3     1          4          5   9          7         2            10
#> 504      4       7    1     2          9         10   3          8         5             6
#> 505      1       5    6     8          3          4   9          2         7            10
```

```r
model4 <- update_mallows(
  model = model3,
  new_data = setup_rank_data(
    rankings = data_batch4,
    user_ids = as.numeric(rownames(data_batch4))))
```

Here is the posterior for this model.

```r
plot(model4) + ggtitle("Posterior of dispersion parameter after data batch 4")
```

<div class="figure">
<img src="sushi_updated_batch4_posterior-1.png" alt="Posterior after sushi batch 4."
height="4cm" /> <p class="caption">Posterior after sushi batch 4.</p> </div> We can confirm that the implementation is sensible by giving the complete data to `compute_mallows`: ```r full_data <- rbind(data_batch3, data_batch4) mod_bmm <- compute_mallows(data = setup_rank_data(rankings = full_data)) ``` The trace plot indicates good convergence, and we set the burnin to 300. ```r assess_convergence(mod_bmm) ``` <div class="figure"> <img src="sushi_updated_burnin-1.png" alt="Trace plot for MCMC run on sushi data." height="4cm" /> <p class="caption">Trace plot for MCMC run on sushi data.</p> </div> ```r burnin(mod_bmm) <- 300 ``` We see that the posterior is close to the one of `model4`: ```r plot(mod_bmm) ``` <div class="figure"> <img src="sushi_updated_bmm_posterior-1.png" alt="Posterior for MCMC on sushi data." height="4cm" /> <p class="caption">Posterior for MCMC on sushi data.</p> </div> ## References
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/doc/SMC-Mallows.Rmd
--- title: "MCMC with Parallel Chains" output: rmarkdown::html_vignette: fig_width: 6 fig_height: 4 bibliography: ../inst/REFERENCES.bib link-citations: yes vignette: > %\VignetteIndexEntry{MCMC with Parallel Chains} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```r library(BayesMallows) set.seed(123) ``` This vignette describes how to run Markov chain Monte Carlo with parallel chains. For an introduction to the "BayesMallows" package, please see [the introductory vignette](https://ocbe-uio.github.io/BayesMallows/articles/BayesMallows.html), which is an updated version of @sorensen2020. For parallel processing of particles with the sequential Monte Carlo algorithm of @steinSequentialInferenceMallows2023, see the [SMC vignette](https://ocbe-uio.github.io/BayesMallows/articles/SMC-Mallows.html). ## Why Parallel Chains? Modern computers have multiple cores, and on computing clusters one can get access to hundreds of cores easily. By running Markov Chains in parallel on $K$ cores, ideally from different starting points, we achieve at least the following: 1. The time you have to wait to get the required number of post-burnin samples scales like $1/K$. 2. You can check convergence by comparing chains. ## Parallel Chains with Complete Rankings In "BayesMallows" we use the "parallel" package for parallel computation. Parallelization is obtained by starting a cluster and providing it as an argument. Note that we also give one initial value of the dispersion parameter $\alpha$ to each chain. ```r library(parallel) cl <- makeCluster(4) fit <- compute_mallows( data = setup_rank_data(rankings = potato_visual), compute_options = set_compute_options(nmc = 5000), cl = cl ) stopCluster(cl) ``` We can assess convergence in the usual way: ```r assess_convergence(fit) ``` ![Trace plot of scale parameter for four chains.](parallel_assess_convergence_alpha-1.png) We can also assess convergence for the latent ranks $\boldsymbol{\rho}$. Since the initial value of $\boldsymbol{\rho}$ is sampled uniformly, the two chains automatically get different initial values. ```r assess_convergence(fit, parameter = "rho", items = 1:3) ``` ![Trace plot of modal ranking for four chains.](parallel_assess_convergence_rho-1.png) Based on the convergence plots, we set the burnin to 3000. ```r burnin(fit) <- 3000 ``` We can now use all the tools for assessing the posterior distributions as usual. The post-burnin samples for all parallel chains are simply combined, as they should be. Below is a plot of the posterior distribution of $\alpha$. ```r plot(fit) ``` ![Posterior of scale parameter, combing post-burnin samples from all chains.](parallel_posterior_alpha-1.png) Next is a plot of the posterior distribution of $\boldsymbol{\rho}$. ```r plot(fit, parameter = "rho", items = 4:7) ``` ![Posterior of modal ranking, combing post-burnin samples from all chains.](parallel_posterior_rho-1.png) ## Parallel Chains with Pairwise Preferences A case where parallel chains might be more strongly needed is with incomplete data, e.g., arising from pairwise preferences. In this case the MCMC algorithm needs to perform data augmentation, which tends to be both slow and sticky. We illustrate this with the beach preference data, again referring to @sorensen2020 for a more thorough introduction to the aspects not directly related to parallelism. 
```r
beach_data <- setup_rank_data(preferences = beach_preferences)
```

We run four parallel chains, letting the package generate random initial rankings, while providing a vector of initial values for $\alpha$ through the `initial_values` argument.

```r
cl <- makeCluster(4)
fit <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(nmc = 4000, save_aug = TRUE),
  initial_values = set_initial_values(alpha_init = runif(4, 1, 4)),
  cl = cl
)
stopCluster(cl)
```

### Trace Plots

The trace plot shows some long-range autocorrelation, but otherwise the chains seem to mix relatively well.

```r
assess_convergence(fit)
```

![Trace plot of scale parameter for beach preferences data, on four chains.](parallel_assess_converge_prefs_alpha-1.png)

Here is the convergence plot for $\boldsymbol{\rho}$:

```r
assess_convergence(fit, parameter = "rho", items = 4:6)
```

![Trace plot of modal ranking for beach preferences data, on four chains.](parallel_assess_converge_prefs_rho-1.png)

To avoid overplotting, it's a good idea to pick a low number of assessors and items. We here look at items 1-3 of assessors 1 and 2.

```r
assess_convergence(fit,
  parameter = "Rtilde",
  items = 1:3, assessors = 1:2
)
```

![Trace plot of augmented rankings for beach preference data, on four chains.](parallel_assess_convergence_prefs_rtilde-1.png)

### Posterior Quantities

Based on the trace plots, the chains seem to be mixing well. We set the burnin to 1000.

```r
burnin(fit) <- 1000
```

We can now study the posterior distributions. Here is the posterior for $\alpha$. Note that by increasing the `nmc` argument to `compute_mallows` above, the density would appear smoother. In this vignette we have kept it low to reduce the run time.

```r
plot(fit)
```

![Posterior distribution for scale parameter.](parallel_beach_prefs_alpha_posterior-1.png)

We can also look at the posterior for $\boldsymbol{\rho}$.
```r
plot(fit, parameter = "rho", items = 6:9)
```

![Posterior distribution for modal rankings.](parallel_beach_prefs_rho_posterior-1.png)

We can also compute posterior intervals in the usual way:

```r
compute_posterior_intervals(fit, parameter = "alpha")
#>   parameter  mean median          hpdi central_interval
#> 1     alpha 4.798  4.793 [4.242,5.373]    [4.235,5.371]
```

```r
compute_posterior_intervals(fit, parameter = "rho")
#>    parameter    item mean median    hpdi central_interval
#> 1        rho  Item 1    7      7     [7]            [6,7]
#> 2        rho  Item 2   15     15    [15]          [14,15]
#> 3        rho  Item 3    3      3   [3,4]            [3,4]
#> 4        rho  Item 4   11     11 [11,13]          [11,13]
#> 5        rho  Item 5    9      9  [8,10]           [8,10]
#> 6        rho  Item 6    2      2   [1,2]            [1,2]
#> 7        rho  Item 7    9      8  [8,10]           [8,10]
#> 8        rho  Item 8   12     12 [11,13]          [11,14]
#> 9        rho  Item 9    1      1   [1,2]            [1,2]
#> 10       rho Item 10    6      6   [5,6]            [5,7]
#> 11       rho Item 11    4      4   [3,5]            [3,5]
#> 12       rho Item 12   13     13 [12,14]          [11,14]
#> 13       rho Item 13   10     10  [8,10]           [8,10]
#> 14       rho Item 14   13     14 [12,14]          [12,14]
#> 15       rho Item 15    5      5   [4,5]            [4,6]
```

And we can compute the consensus ranking:

```r
compute_consensus(fit)
#>      cluster ranking    item   cumprob
#> 1  Cluster 1       1  Item 9 0.8691667
#> 2  Cluster 1       2  Item 6 1.0000000
#> 3  Cluster 1       3  Item 3 0.6391667
#> 4  Cluster 1       4 Item 11 0.9404167
#> 5  Cluster 1       5 Item 15 0.9559167
#> 6  Cluster 1       6 Item 10 0.9636667
#> 7  Cluster 1       7  Item 1 1.0000000
#> 8  Cluster 1       8  Item 7 0.5473333
#> 9  Cluster 1       9  Item 5 0.9255833
#> 10 Cluster 1      10 Item 13 1.0000000
#> 11 Cluster 1      11  Item 4 0.6924167
#> 12 Cluster 1      12  Item 8 0.7833333
#> 13 Cluster 1      13 Item 12 0.6158333
#> 14 Cluster 1      14 Item 14 0.9958333
#> 15 Cluster 1      15  Item 2 1.0000000
```

```r
compute_consensus(fit, type = "MAP")
#>      cluster map_ranking    item probability
#> 1  Cluster 1           1  Item 9   0.2683333
#> 2  Cluster 1           2  Item 6   0.2683333
#> 3  Cluster 1           3  Item 3   0.2683333
#> 4  Cluster 1           4 Item 11   0.2683333
#> 5  Cluster 1           5 Item 15   0.2683333
#> 6  Cluster 1           6 Item 10   0.2683333
#> 7  Cluster 1           7  Item 1   0.2683333
#> 8  Cluster 1           8  Item 7   0.2683333
#> 9  Cluster 1           9  Item 5   0.2683333
#> 10 Cluster 1          10 Item 13   0.2683333
#> 11 Cluster 1          11  Item 4   0.2683333
#> 12 Cluster 1          12  Item 8   0.2683333
#> 13 Cluster 1          13 Item 12   0.2683333
#> 14 Cluster 1          14 Item 14   0.2683333
#> 15 Cluster 1          15  Item 2   0.2683333
```

We can compute the probability of being top-$k$, here for $k=4$:

```r
plot_top_k(fit, k = 4)
```

![Probability of being top-4 for beach preference data.](parallel_top_k-1.png)

# References
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/doc/parallel_chains.Rmd
set.seed(1)

# Fit a model on the potato_visual data
mod <- compute_mallows(setup_rank_data(potato_visual))

# Check for convergence
assess_convergence(mod)
assess_convergence(mod, parameter = "rho", items = 1:20)
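# An illustrative continuation (not part of the original example): once the
# trace plots look stable, set the burnin and inspect the posterior of the
# scale parameter. The value 1000 is an assumption made here for illustration.
burnin(mod) <- 1000
plot(mod)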
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/assess_convergence_example.R
set.seed(445)
mod <- compute_mallows(setup_rank_data(potato_visual))
assess_convergence(mod)
burnin(mod)
burnin(mod) <- 1500
burnin(mod)
plot(mod)

models <- compute_mallows_mixtures(
  data = setup_rank_data(cluster_data),
  n_clusters = 1:3)
burnin(models)
burnin(models) <- 100
burnin(models)
burnin(models) <- c(100, 300, 200)
burnin(models)
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/burnin_example.R
# The example datasets potato_visual and potato_weighing contain complete
# rankings of 20 items, by 12 assessors. We first analyse these using the
# Mallows model:
model_fit <- compute_mallows(setup_rank_data(potato_visual))

# See the documentation to compute_mallows for how to assess the convergence
# of the algorithm. Having chosen burnin = 1000, we compute the consensus
# rankings.
burnin(model_fit) <- 1000

# We then compute the CP consensus.
compute_consensus(model_fit, type = "CP")

# And we compute the MAP consensus
compute_consensus(model_fit, type = "MAP")

\dontrun{
# CLUSTERWISE CONSENSUS
# We can run a mixture of Mallows models, using the n_clusters argument.
# We use the sushi example data. See the documentation of compute_mallows for
# a more elaborate example.
model_fit <- compute_mallows(
  setup_rank_data(sushi_rankings),
  model_options = set_model_options(n_clusters = 5))

# Keeping the burnin at 1000, we can compute the consensus ranking per cluster
burnin(model_fit) <- 1000
cp_consensus_df <- compute_consensus(model_fit, type = "CP")

# We can now make a table which shows the ranking in each cluster:
cp_consensus_df$cumprob <- NULL
stats::reshape(cp_consensus_df, direction = "wide",
               idvar = "ranking", timevar = "cluster",
               varying = list(sort(unique(cp_consensus_df$cluster))))
}

\dontrun{
# MAP CONSENSUS FOR PAIRWISE PREFERENCE DATA
# We use the example dataset with beach preferences.
model_fit <- compute_mallows(setup_rank_data(preferences = beach_preferences))

# We set burnin = 1000
burnin(model_fit) <- 1000

# We now compute the MAP consensus
map_consensus_df <- compute_consensus(model_fit, type = "MAP")

# CP CONSENSUS FOR AUGMENTED RANKINGS
# We use the example dataset with beach preferences.
model_fit <- compute_mallows(
  setup_rank_data(preferences = beach_preferences),
  compute_options = set_compute_options(save_aug = TRUE, aug_thinning = 2))

# We set burnin = 1000
burnin(model_fit) <- 1000

# We now compute the CP consensus of augmented ranks for assessors 1 and 3
cp_consensus_df <- compute_consensus(
  model_fit, type = "CP", parameter = "Rtilde", assessors = c(1L, 3L))

# We can also compute the MAP consensus for assessor 2
map_consensus_df <- compute_consensus(
  model_fit, type = "MAP", parameter = "Rtilde", assessors = 2L)

# Caution!
# With very sparse data or with too few iterations, there may be ties in the
# MAP consensus. This is illustrated below for the case of only 5 post-burnin
# iterations. Two MAP rankings are equally likely in this case (and for this
# seed).
model_fit <- compute_mallows(
  setup_rank_data(preferences = beach_preferences),
  compute_options = set_compute_options(
    nmc = 1005, save_aug = TRUE, aug_thinning = 1))
burnin(model_fit) <- 1000
compute_consensus(model_fit, type = "MAP", parameter = "Rtilde",
                  assessors = 2L)
}
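\dontrun{
# A sketch of how the tie issue above can be mitigated (an addition to the
# original example, using only calls demonstrated above): increasing the
# number of post-burnin samples makes ties in the MAP consensus unlikely.
model_fit <- compute_mallows(
  setup_rank_data(preferences = beach_preferences),
  compute_options = set_compute_options(
    nmc = 3000, save_aug = TRUE, aug_thinning = 1))
burnin(model_fit) <- 1000
compute_consensus(model_fit, type = "MAP", parameter = "Rtilde",
                  assessors = 2L)
}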
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_consensus_example.R
# ANALYSIS OF COMPLETE RANKINGS

# The example datasets potato_visual and potato_weighing contain complete
# rankings of 20 items, by 12 assessors. We first analyse these using the
# Mallows model:
set.seed(1)
model_fit <- compute_mallows(
  data = setup_rank_data(rankings = potato_visual),
  compute_options = set_compute_options(nmc = 2000)
)

# We study the trace plot of the parameters
assess_convergence(model_fit, parameter = "alpha")
assess_convergence(model_fit, parameter = "rho", items = 1:4)

# Based on these plots, we set burnin = 1000.
burnin(model_fit) <- 1000

# Next, we use the generic plot function to study the posterior distributions
# of alpha and rho
plot(model_fit, parameter = "alpha")
plot(model_fit, parameter = "rho", items = 10:15)

# We can also compute the CP consensus posterior ranking
compute_consensus(model_fit, type = "CP")

# And we can compute the posterior intervals:
# First we compute the interval for alpha
compute_posterior_intervals(model_fit, parameter = "alpha")
# Then we compute the interval for all the items
compute_posterior_intervals(model_fit, parameter = "rho")

# ANALYSIS OF PAIRWISE PREFERENCES

# The example dataset beach_preferences contains pairwise
# preferences between beaches stated by 60 assessors. There
# is a total of 15 beaches in the dataset.
beach_data <- setup_rank_data(
  preferences = beach_preferences
)

# We then run the Bayesian Mallows rank model.
# We save the augmented data for diagnostics purposes.
model_fit <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(save_aug = TRUE),
  verbose = TRUE)

# We can assess the convergence of the scale parameter
assess_convergence(model_fit)

# We can assess the convergence of latent rankings. Here we
# show beaches 1-5.
assess_convergence(model_fit, parameter = "rho", items = 1:5)

# We can also look at the convergence of the augmented rankings for
# each assessor.
assess_convergence(model_fit, parameter = "Rtilde",
                   items = c(2, 4), assessors = c(1, 2))

# Notice how, for assessor 1, the lines cross each other, while
# beach 2 consistently has a higher rank value (lower preference) for
# assessor 2. We can see why by looking at the implied orderings in
# the transitive closure:
subset(get_transitive_closure(beach_data),
       assessor %in% c(1, 2) & bottom_item %in% c(2, 4) &
         top_item %in% c(2, 4))

# Assessor 1 has no implied ordering between beach 2 and beach 4,
# while assessor 2 has the implied ordering that beach 4 is preferred
# to beach 2. This is reflected in the trace plots.

# CLUSTERING OF ASSESSORS WITH SIMILAR PREFERENCES
\dontrun{
# The example dataset sushi_rankings contains 5000 complete
# rankings of 10 types of sushi.
# We start with computing a 3-cluster solution
model_fit <- compute_mallows(
  data = setup_rank_data(sushi_rankings),
  model_options = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 10000),
  verbose = TRUE)

# We then assess convergence of the scale parameter alpha
assess_convergence(model_fit)

# Next, we assess convergence of the cluster probabilities
assess_convergence(model_fit, parameter = "cluster_probs")

# Based on this, we set burnin = 1000.
# We now plot the posterior density of the scale parameters alpha in
# each mixture:
burnin(model_fit) <- 1000
plot(model_fit, parameter = "alpha")

# We can also compute the posterior density of the cluster probabilities
plot(model_fit, parameter = "cluster_probs")

# We can also plot the posterior cluster assignment. In this case,
# the assessors are sorted according to their maximum a posteriori
# cluster estimate.
plot(model_fit, parameter = "cluster_assignment")

# We can also assign each assessor to a cluster
cluster_assignments <- assign_cluster(model_fit, soft = FALSE)
}

# DETERMINING THE NUMBER OF CLUSTERS
\dontrun{
# Continuing with the sushi data, we can determine the number of clusters.
# Let us look at any number of clusters from 1 to 10.
# We use the convenience function compute_mallows_mixtures
n_clusters <- seq(from = 1, to = 10)
models <- compute_mallows_mixtures(
  n_clusters = n_clusters,
  data = setup_rank_data(rankings = sushi_rankings),
  compute_options = set_compute_options(
    nmc = 6000, alpha_jump = 10, include_wcd = TRUE)
)

# models is a list in which each element is an object of class BayesMallows,
# returned from compute_mallows.
# We can create an elbow plot
burnin(models) <- 1000
plot_elbow(models)

# We then select the number of clusters at a point where this plot has
# an "elbow", e.g., at 6 clusters.
}

# SPEEDING UP COMPUTATION WITH OBSERVATION FREQUENCIES
# With a large number of assessors taking on a relatively low number of
# unique rankings, the observation_frequency argument allows providing a
# rankings matrix with the unique set of rankings, and the
# observation_frequency vector giving the number of assessors with each
# ranking. This is illustrated here for the potato_visual dataset.
#
# Assume each row of potato_visual corresponds to between 1 and 5 assessors,
# as given by the observation_frequency vector.
\dontrun{
set.seed(1234)
observation_frequency <- sample.int(
  n = 5, size = nrow(potato_visual), replace = TRUE)
m <- compute_mallows(
  setup_rank_data(rankings = potato_visual,
                  observation_frequency = observation_frequency))

# INTRANSITIVE PAIRWISE PREFERENCES
set.seed(1234)
mod <- compute_mallows(
  setup_rank_data(preferences = bernoulli_data),
  compute_options = set_compute_options(nmc = 5000),
  priors = set_priors(kappa = c(1, 10)),
  model_options = set_model_options(error_model = "bernoulli")
)

assess_convergence(mod)
assess_convergence(mod, parameter = "theta")

burnin(mod) <- 3000

plot(mod)
plot(mod, parameter = "theta")
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_mallows_example.R
# SIMULATED CLUSTER DATA
set.seed(1)
n_clusters <- seq(from = 1, to = 5)
models <- compute_mallows_mixtures(
  n_clusters = n_clusters,
  data = setup_rank_data(cluster_data),
  compute_options = set_compute_options(nmc = 2000, include_wcd = TRUE))

# There is good convergence for 1, 2, and 3 clusters, but not for 5.
# Also note that there seems to be label switching
# for the 2-cluster solution.
assess_convergence(models)

# We can create an elbow plot, suggesting that there are three clusters,
# exactly as simulated.
burnin(models) <- 1000
plot_elbow(models)

# We now fit a model with three clusters
mixture_model <- compute_mallows(
  data = setup_rank_data(cluster_data),
  model_options = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 2000))

# The trace plot for this model looks good. It seems to converge quickly.
assess_convergence(mixture_model)

# We set the burnin to 500
burnin(mixture_model) <- 500

# We can now look at posterior quantities.
# Posterior of scale parameter alpha:
plot(mixture_model)
plot(mixture_model, parameter = "rho", items = 4:5)

# There is around 33 % probability of being in each cluster, in agreement
# with the data simulating mechanism
plot(mixture_model, parameter = "cluster_probs")

# We can also look at a cluster assignment plot
plot(mixture_model, parameter = "cluster_assignment")

# DETERMINING THE NUMBER OF CLUSTERS IN THE SUSHI EXAMPLE DATA
\dontrun{
# Let us look at any number of clusters from 1 to 10.
# We use the convenience function compute_mallows_mixtures
n_clusters <- seq(from = 1, to = 10)
models <- compute_mallows_mixtures(
  n_clusters = n_clusters,
  data = setup_rank_data(sushi_rankings),
  compute_options = set_compute_options(include_wcd = TRUE))

# models is a list in which each element is an object of class BayesMallows,
# returned from compute_mallows.
# We can create an elbow plot
burnin(models) <- 1000
plot_elbow(models)

# We then select the number of clusters at a point where this plot has
# an "elbow", e.g., n_clusters = 5.

# Having chosen the number of clusters, we can now study the final model.
# Rerun with 5 clusters:
mixture_model <- compute_mallows(
  data = setup_rank_data(rankings = sushi_rankings),
  model_options = set_model_options(n_clusters = 5),
  compute_options = set_compute_options(include_wcd = TRUE))

# Delete the models object to free some memory
rm(models)

# Set the burnin
burnin(mixture_model) <- 1000

# Plot the posterior distributions of alpha per cluster
plot(mixture_model)

# Compute the posterior interval of alpha per cluster
compute_posterior_intervals(mixture_model, parameter = "alpha")

# Plot the posterior distributions of cluster probabilities
plot(mixture_model, parameter = "cluster_probs")

# Plot the posterior probability of cluster assignment
plot(mixture_model, parameter = "cluster_assignment")

# Plot the posterior distribution of "tuna roll" in each cluster
plot(mixture_model, parameter = "rho", items = "tuna roll")

# Compute the cluster-wise CP consensus, and show one column per cluster
cp <- compute_consensus(mixture_model, type = "CP")
cp$cumprob <- NULL
stats::reshape(cp, direction = "wide", idvar = "ranking",
               timevar = "cluster",
               varying = list(as.character(unique(cp$cluster))))

# Compute the MAP consensus, and show one column per cluster
map <- compute_consensus(mixture_model, type = "MAP")
map$probability <- NULL
stats::reshape(map, direction = "wide", idvar = "map_ranking",
               timevar = "cluster",
               varying = list(as.character(unique(map$cluster))))

# RUNNING IN PARALLEL
# Computing Mallows models with different numbers of mixture components in
# parallel leads to considerable speedup
library(parallel)
cl <- makeCluster(detectCores() - 1)
n_clusters <- seq(from = 1, to = 10)
models <- compute_mallows_mixtures(
  n_clusters = n_clusters,
  data = setup_rank_data(sushi_rankings),
  compute_options = set_compute_options(include_wcd = TRUE),
  cl = cl)
stopCluster(cl)
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_mallows_mixtures_example.R
# Observe one ranking at each of 12 timepoints
library(ggplot2)
data <- lapply(seq_len(nrow(potato_visual)), function(i) {
  setup_rank_data(potato_visual[i, ], user_ids = i)
})

initial_values <- sample_prior(
  n = 200, n_items = 20,
  priors = set_priors(gamma = 3, lambda = .1))

mod <- compute_mallows_sequentially(
  data = data,
  initial_values = initial_values,
  smc_options = set_smc_options(n_particles = 500, mcmc_steps = 20))

# We can see the acceptance ratio of the move step for each timepoint:
get_acceptance_ratios(mod)

plot_dat <- data.frame(
  n_obs = seq_along(data),
  alpha_mean = apply(mod$alpha_samples, 2, mean),
  alpha_sd = apply(mod$alpha_samples, 2, sd)
)

# Visualize how the dispersion parameter is being learned as more data arrive
ggplot(plot_dat, aes(x = n_obs, y = alpha_mean,
                     ymin = alpha_mean - alpha_sd,
                     ymax = alpha_mean + alpha_sd)) +
  geom_line() +
  geom_ribbon(alpha = .1) +
  ylab(expression(alpha)) +
  xlab("Observations") +
  theme_classic() +
  scale_x_continuous(
    breaks = seq(min(plot_dat$n_obs), max(plot_dat$n_obs), by = 1))

# Visualize the learning of the rank for a given item (item 1 in this example)
plot_dat <- data.frame(
  n_obs = seq_along(data),
  rank_mean = apply(mod$rho_samples[1, , ], 2, mean),
  rank_sd = apply(mod$rho_samples[1, , ], 2, sd)
)

ggplot(plot_dat, aes(x = n_obs, y = rank_mean,
                     ymin = rank_mean - rank_sd,
                     ymax = rank_mean + rank_sd)) +
  geom_line() +
  geom_ribbon(alpha = .1) +
  xlab("Observations") +
  ylab(expression(rho[1])) +
  theme_classic() +
  scale_x_continuous(
    breaks = seq(min(plot_dat$n_obs), max(plot_dat$n_obs), by = 1))
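# A rough extra summary (an addition to the original example): a 95 %
# credible interval for alpha at the final timepoint, treating the particles
# as equally weighted (an assumption that holds after resampling):
quantile(mod$alpha_samples[, length(data)], probs = c(0.025, 0.975))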
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_mallows_sequentially_example.R
# Create example data. We set the burn-in and thinning very low
# for the sampling to go fast
data0 <- sample_mallows(rho0 = 1:5, alpha0 = 10, n_samples = 1000,
                        burnin = 10, thinning = 1)

# Find the frequency distribution
compute_observation_frequency(rankings = data0)

# The function also works when the data have missing values
rankings <- matrix(c(1, 2, 3, 4,
                     1, 2, 4, NA,
                     1, 2, 4, NA,
                     3, 2, 1, 4,
                     NA, NA, 2, 1,
                     NA, NA, 2, 1,
                     NA, NA, 2, 1,
                     2, NA, 1, NA), ncol = 4, byrow = TRUE)
compute_observation_frequency(rankings)
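# A quick check (an addition to the original example): the last column of the
# output holds the frequencies, as also used in the get_mallows_loglik
# examples, so they should sum to the number of input rows:
freq <- compute_observation_frequency(rankings = data0)
sum(freq[, ncol(freq)]) == nrow(data0)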
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_observation_frequency_example.R
set.seed(1)
model_fit <- compute_mallows(
  setup_rank_data(potato_visual),
  compute_options = set_compute_options(nmc = 3000, burnin = 1000))

# First we compute the interval for alpha
compute_posterior_intervals(model_fit, parameter = "alpha")
# We can reduce the number of decimals
compute_posterior_intervals(model_fit, parameter = "alpha", decimals = 2)
# By default, we get a 95 % interval. We can change that to 99 %.
compute_posterior_intervals(model_fit, parameter = "alpha", level = 0.99)
# We can also compute the posterior interval for the latent ranks rho
compute_posterior_intervals(model_fit, parameter = "rho")

\dontrun{
# Posterior intervals of cluster probabilities
model_fit <- compute_mallows(
  setup_rank_data(sushi_rankings),
  model_options = set_model_options(n_clusters = 5))
burnin(model_fit) <- 1000

compute_posterior_intervals(model_fit, parameter = "alpha")
compute_posterior_intervals(model_fit, parameter = "cluster_probs")
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_posterior_intervals_example.R
# Distance between two vectors of rankings:
compute_rank_distance(1:5, 5:1, metric = "kendall")
compute_rank_distance(c(2, 4, 3, 6, 1, 7, 5), c(3, 5, 4, 7, 6, 2, 1),
                      metric = "cayley")
compute_rank_distance(c(4, 2, 3, 1), c(3, 4, 1, 2), metric = "hamming")
compute_rank_distance(c(1, 3, 5, 7, 9, 8, 6, 4, 2),
                      c(1, 2, 3, 4, 9, 8, 7, 6, 5), "ulam")
compute_rank_distance(c(8, 7, 1, 2, 6, 5, 3, 4),
                      c(1, 2, 8, 7, 3, 4, 6, 5), "footrule")
compute_rank_distance(c(1, 6, 2, 5, 3, 4), c(4, 3, 5, 2, 6, 1), "spearman")

# Distance between a matrix of rankings and a single ranking.
# We set the burn-in and thinning very low so that the example runs fast.
data0 <- sample_mallows(rho0 = 1:10, alpha0 = 20, n_samples = 1000,
                        burnin = 10, thinning = 1)
compute_rank_distance(rankings = data0, rho = 1:10, metric = "kendall")
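# An illustrative sanity check (not part of the original example): the
# average sampled distance should be close to the expected distance under
# the generating model, up to Monte Carlo error and the short burn-in:
mean(compute_rank_distance(rankings = data0, rho = 1:10, metric = "kendall"))
compute_expected_distance(20, 10, metric = "kendall")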
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/compute_rank_distance_example.R
# IMPORTANCE SAMPLING
# Let us estimate logZ(alpha) for 20 items with Spearman distance.
# We create a grid of alpha values from 0 to 10.
alpha_vector <- seq(from = 0, to = 10, by = 0.5)
n_items <- 20
metric <- "spearman"

# We start with 1e3 Monte Carlo samples
fit1 <- estimate_partition_function(
  method = "importance_sampling", alpha_vector = alpha_vector,
  n_items = n_items, metric = metric, n_iterations = 1e3)

# A matrix containing powers of alpha and regression coefficients is returned
fit1

# The approximated partition function can hence be obtained:
estimate1 <- vapply(alpha_vector,
                    function(a) sum(a^fit1[, 1] * fit1[, 2]),
                    numeric(1))

# Now let us recompute with 2e3 Monte Carlo samples
fit2 <- estimate_partition_function(
  method = "importance_sampling", alpha_vector = alpha_vector,
  n_items = n_items, metric = metric, n_iterations = 2e3)
estimate2 <- vapply(alpha_vector,
                    function(a) sum(a^fit2[, 1] * fit2[, 2]),
                    numeric(1))

# ASYMPTOTIC APPROXIMATION
# We can also compute an estimate using the asymptotic approximation
fit3 <- estimate_partition_function(
  method = "asymptotic", alpha_vector = alpha_vector,
  n_items = n_items, metric = metric, n_iterations = 50)
estimate3 <- vapply(alpha_vector,
                    function(a) sum(a^fit3[, 1] * fit3[, 2]),
                    numeric(1))

# We can now plot the estimates side-by-side
plot(alpha_vector, estimate1, type = "l", xlab = expression(alpha),
     ylab = expression(log(Z(alpha))))
lines(alpha_vector, estimate2, col = 2)
lines(alpha_vector, estimate3, col = 3)
legend(x = 7, y = 40, legend = c("IS,1e3", "IS,2e3", "IPFP"),
       col = 1:3, lty = 1)

# We see that the two importance sampling estimates, which are unbiased,
# overlap. The asymptotic approximation seems a bit off. It can be worthwhile
# to try different values of n_iterations and K.

# When we are happy, we can provide the coefficient vector in the
# pfun_estimate argument to compute_mallows.
# Say we choose to use the importance sampling estimate with 2e3 Monte Carlo
# samples:
model_fit <- compute_mallows(
  setup_rank_data(potato_visual),
  model_options = set_model_options(metric = "spearman"),
  compute_options = set_compute_options(nmc = 200),
  pfun_estimate = fit2)
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/estimate_partition_function_example.R
compute_expected_distance(1, 5, metric = "kendall")
compute_expected_distance(2, 6, metric = "cayley")
compute_expected_distance(1.5, 7, metric = "hamming")
compute_expected_distance(5, 30, "ulam")
compute_expected_distance(3.5, 45, "footrule")
compute_expected_distance(4, 10, "spearman")
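# The expected distance decreases as alpha grows; a quick illustration
# (an addition to the original example, using the same function):
vapply(c(0.5, 1, 2, 4, 8), function(a) {
  compute_expected_distance(a, 10, metric = "footrule")
}, numeric(1))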
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/expected_dist_example.R
set.seed(1)
mod <- compute_mallows(
  data = setup_rank_data(potato_visual),
  compute_options = set_compute_options(burnin = 200)
)

get_acceptance_ratios(mod)
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/get_acceptance_ratios_example.R
# Extract the cardinalities for four items with footrule distance
n_items <- 4
dat <- get_cardinalities(n_items)

# Compute the partition function at alpha = 2
alpha <- 2
sum(dat$value * exp(-alpha / n_items * dat$distance))

# We can confirm that it is correct by enumerating all possible combinations
all <- expand.grid(1:4, 1:4, 1:4, 1:4)
perms <- all[apply(all, 1, function(x) length(unique(x)) == 4), ]
sum(apply(perms, 1, function(x) exp(-alpha / n_items * sum(abs(x - 1:4)))))

# We do the same for the Spearman distance
dat <- get_cardinalities(n_items, metric = "spearman")
sum(dat$value * exp(-alpha / n_items * dat$distance))

# We can confirm that it is correct by enumerating all possible combinations
sum(apply(perms, 1, function(x) exp(-alpha / n_items * sum((x - 1:4)^2))))
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/get_cardinalities_example.R
# Simulate a sample from a Mallows model with the Kendall distance
n_items <- 5
mydata <- sample_mallows(
  n_samples = 100,
  rho0 = 1:n_items,
  alpha0 = 10,
  metric = "kendall")

# Compute the likelihood and log-likelihood values under the true model...
get_mallows_loglik(
  rho = rbind(1:n_items, 1:n_items),
  alpha = c(10, 10),
  weights = c(0.5, 0.5),
  metric = "kendall",
  rankings = mydata,
  log = FALSE
)

get_mallows_loglik(
  rho = rbind(1:n_items, 1:n_items),
  alpha = c(10, 10),
  weights = c(0.5, 0.5),
  metric = "kendall",
  rankings = mydata,
  log = TRUE
)

# or equivalently, by using the frequency distribution
freq_distr <- compute_observation_frequency(mydata)
get_mallows_loglik(
  rho = rbind(1:n_items, 1:n_items),
  alpha = c(10, 10),
  weights = c(0.5, 0.5),
  metric = "kendall",
  rankings = freq_distr[, 1:n_items],
  observation_frequency = freq_distr[, n_items + 1],
  log = FALSE
)

get_mallows_loglik(
  rho = rbind(1:n_items, 1:n_items),
  alpha = c(10, 10),
  weights = c(0.5, 0.5),
  metric = "kendall",
  rankings = freq_distr[, 1:n_items],
  observation_frequency = freq_distr[, n_items + 1],
  log = TRUE
)
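# A quick check (an addition to the original example) that the raw-data and
# frequency-based computations agree for the log-likelihood:
all.equal(
  get_mallows_loglik(
    rho = rbind(1:n_items, 1:n_items), alpha = c(10, 10),
    weights = c(0.5, 0.5), metric = "kendall",
    rankings = mydata, log = TRUE),
  get_mallows_loglik(
    rho = rbind(1:n_items, 1:n_items), alpha = c(10, 10),
    weights = c(0.5, 0.5), metric = "kendall",
    rankings = freq_distr[, 1:n_items],
    observation_frequency = freq_distr[, n_items + 1], log = TRUE)
)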
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/get_mallows_loglik_example.R
set.seed(1)
model_fit <- compute_mallows(
  setup_rank_data(potato_visual),
  compute_options = set_compute_options(nmc = 2000, burnin = 500))

heat_plot(model_fit)
heat_plot(model_fit, type = "MAP")
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/heat_plot_example.R
# CHECKING FOR LABEL SWITCHING
\dontrun{
# This example shows how to assess if label switching happens in BayesMallows.
# We start by creating a directory in which csv files with individual
# cluster probabilities should be saved in each step of the MCMC algorithm.
# NOTE: For computational efficiency, we use much fewer MCMC iterations than
# one would normally do.
dir.create("./test_label_switch")
# Next, we go into this directory
setwd("./test_label_switch/")

# For comparison, we run compute_mallows with and without saving the cluster
# probabilities. The purpose of this is to assess the time it takes to save
# the cluster probabilities.
system.time(m <- compute_mallows(
  setup_rank_data(rankings = sushi_rankings),
  model_options = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 500, save_ind_clus = FALSE),
  verbose = TRUE))

# With these options, compute_mallows will save cluster_probs2.csv,
# cluster_probs3.csv, ..., cluster_probs[nmc].csv.
system.time(m <- compute_mallows(
  setup_rank_data(rankings = sushi_rankings),
  model_options = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(nmc = 500, save_ind_clus = TRUE),
  verbose = TRUE))

# Next, we check convergence of alpha
assess_convergence(m)

# We set the burnin to 200
burnin <- 200

# Find all files that were saved. Note that the first file saved is
# cluster_probs2.csv
cluster_files <- list.files(pattern = "cluster\\_probs[[:digit:]]+\\.csv")

# Check the size of the files that were saved.
paste(sum(do.call(file.size, list(cluster_files))) * 1e-6, "MB")

# Find the iteration each file corresponds to, by extracting its number
iteration_number <- as.integer(
  regmatches(x = cluster_files,
             m = regexpr(pattern = "[0-9]+", cluster_files)))

# Remove all files before burnin
file.remove(cluster_files[iteration_number <= burnin])

# Update the vector of files, after the deletion
cluster_files <- list.files(pattern = "cluster\\_probs[[:digit:]]+\\.csv")

# Create 3d array, with dimensions (iterations, assessors, clusters)
prob_array <- array(
  dim = c(length(cluster_files), m$data$n_assessors, m$n_clusters))

# Read each file, adding to the right element of the array
for (i in seq_along(cluster_files)) {
  prob_array[i, , ] <- as.matrix(
    read.csv(cluster_files[[i]], header = FALSE))
}

# Create an integer array of latent allocations, as this is required by
# label.switching
z <- subset(m$cluster_assignment, iteration > burnin)
z$value <- as.integer(gsub("Cluster ", "", z$value))
z$chain <- NULL
z <- reshape(z, direction = "wide", idvar = "iteration",
             timevar = "assessor")
z$iteration <- NULL
z <- as.matrix(z)

# Now apply Stephen's algorithm
library(label.switching)
switch_check <- label.switching("STEPHENS", z = z,
                                K = m$n_clusters, p = prob_array)

# Check the proportion of cluster assignments that were switched
mean(apply(switch_check$permutations$STEPHENS, 1, function(x) {
  !all(x == seq(1, m$n_clusters, by = 1))
}))

# Remove the rest of the csv files
file.remove(cluster_files)

# Move up one directory
setwd("..")

# Remove the directory in which the csv files were saved
file.remove("./test_label_switch/")
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/label_switching_example.R
model_fit <- compute_mallows(setup_rank_data(potato_visual))
burnin(model_fit) <- 1000

# By default, the scale parameter "alpha" is plotted
plot(model_fit)

# We can also plot the latent rankings "rho"
plot(model_fit, parameter = "rho")

# By default, a random subset of 5 items is plotted.
# Specify which items to plot in the items argument.
plot(model_fit, parameter = "rho",
     items = c(2, 4, 6, 9, 10, 20))

# When the ranking matrix has column names, we can also
# specify these in the items argument.
# In this case, we have the following names:
colnames(potato_visual)

# We can therefore get the same plot with the following call:
plot(model_fit, parameter = "rho",
     items = c("P2", "P4", "P6", "P9", "P10", "P20"))

\dontrun{
# Plots of mixture parameters:
model_fit <- compute_mallows(
  setup_rank_data(sushi_rankings),
  model_options = set_model_options(n_clusters = 5))
burnin(model_fit) <- 1000

# Posterior distributions of the cluster probabilities
plot(model_fit, parameter = "cluster_probs")

# Cluster assignment plot. Color shows the probability of belonging to each
# cluster.
plot(model_fit, parameter = "cluster_assignment")
}
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/plot.BayesMallows_example.R
set.seed(1)

# We use the example dataset with beach preferences. See the documentation of
# compute_mallows for how to assess the convergence of the algorithm.
# We need to save the augmented data, so we set this option to TRUE.
model_fit <- compute_mallows(
  data = setup_rank_data(preferences = beach_preferences),
  compute_options = set_compute_options(
    nmc = 1000, burnin = 500, save_aug = TRUE))

# By default, the probability of being top-3 is plotted.
# The default plot gives the probability for each assessor.
plot_top_k(model_fit)

# We can also plot the probability of being top-5, for each item
plot_top_k(model_fit, k = 5)

# We get the underlying numbers with predict_top_k
probs <- predict_top_k(model_fit)

# To find all items ranked top-3 by assessors 1-3 with probability more than
# 80 %, we do
subset(probs, assessor %in% 1:3 & prob > 0.8)

# We can also plot for clusters
model_fit <- compute_mallows(
  data = setup_rank_data(preferences = beach_preferences),
  model_options = set_model_options(n_clusters = 3),
  compute_options = set_compute_options(
    nmc = 1000, burnin = 500, save_aug = TRUE)
)

# The modal ranking in general differs between clusters, but the plot still
# represents the posterior distribution of each user's augmented rankings.
plot_top_k(model_fit)
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/plot_top_k_example.R
# Sample 100 random rankings from a Mallows distribution with footrule distance
set.seed(1)

# Number of items
n_items <- 15
# Set the consensus ranking
rho0 <- seq(from = 1, to = n_items, by = 1)
# Set the scale
alpha0 <- 10
# Number of samples
n_samples <- 100

# We first do a diagnostic run, to find the thinning and burnin to use.
# We set n_samples to 1000, in order to run 1000 diagnostic iterations.
test <- sample_mallows(rho0 = rho0, alpha0 = alpha0, diagnostic = TRUE,
                       n_samples = 1000, burnin = 1, thinning = 1)

# When items_to_plot is not set, 5 items are picked at random. We can change
# this. We can also reduce the number of lags computed in the autocorrelation
# plots.
test <- sample_mallows(rho0 = rho0, alpha0 = alpha0, diagnostic = TRUE,
                       n_samples = 1000, burnin = 1, thinning = 1,
                       items_to_plot = c(1:3, 10, 15), max_lag = 500)

# From the autocorrelation plot, it looks like we should use
# a thinning of at least 200. We set thinning = 1000 to be safe,
# since the algorithm in any case is fast. The Markov chain
# seems to mix quickly, but we set the burnin to 1000 to be safe.
# We now run sample_mallows again, to get the 100 samples we want:
samples <- sample_mallows(rho0 = rho0, alpha0 = alpha0, n_samples = 100,
                          burnin = 1000, thinning = 1000)

# The samples matrix now contains 100 rows with rankings of 15 items.
# A good diagnostic, in order to confirm that burnin and thinning are set
# high enough, is to run compute_mallows on the samples.
model_fit <- compute_mallows(
  setup_rank_data(samples),
  compute_options = set_compute_options(nmc = 10000))

# The highest posterior density interval covers alpha0 = 10.
burnin(model_fit) <- 2000
compute_posterior_intervals(model_fit, parameter = "alpha")
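# A further illustrative check (not part of the original example): the CP
# consensus of the fitted model should recover the true consensus rho0 = 1:15.
compute_consensus(model_fit, type = "CP")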
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/sample_mallows_example.R
# We can use a collection of particles from the prior distribution as
# initial values for the sequential Monte Carlo algorithm.
# Here we start by drawing 1000 particles from the priors, using default
# parameters.
prior_samples <- sample_prior(1000, ncol(sushi_rankings))

# Next, we provide the prior samples to update_mallows(), together
# with the first five rows of the sushi dataset
model1 <- update_mallows(
  model = prior_samples,
  new_data = setup_rank_data(sushi_rankings[1:5, ]))
plot(model1)

# We keep adding more data
model2 <- update_mallows(
  model = model1,
  new_data = setup_rank_data(sushi_rankings[6:10, ]))
plot(model2)

model3 <- update_mallows(
  model = model2,
  new_data = setup_rank_data(sushi_rankings[11:15, ]))
plot(model3)
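# The posterior summary generics used for models from compute_mallows() also
# apply to these sequentially updated models, as illustrated elsewhere in the
# package documentation; for example:
compute_posterior_intervals(model3)
compute_consensus(model3)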
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/sample_prior_example.R
set.seed(1)

# UPDATING A MALLOWS MODEL WITH NEW COMPLETE RANKINGS

# Assume we first only observe the first four rankings in the potato_visual
# dataset
data_first_batch <- potato_visual[1:4, ]

# We start by fitting a model using Metropolis-Hastings
mod_init <- compute_mallows(
  data = setup_rank_data(data_first_batch),
  compute_options = set_compute_options(nmc = 10000))

# Convergence seems good after no more than 2000 iterations
assess_convergence(mod_init)
burnin(mod_init) <- 2000

# Next, assume we receive four more observations
data_second_batch <- potato_visual[5:8, ]

# We can now update the model using sequential Monte Carlo
mod_second <- update_mallows(
  model = mod_init,
  new_data = setup_rank_data(rankings = data_second_batch),
  smc_options = set_smc_options(resampler = "systematic")
)

# This model now has a collection of particles approximating the posterior
# distribution after the first and second batch.
# We can use all the posterior summary functions as we do for the model
# based on compute_mallows():
plot(mod_second)
plot(mod_second, parameter = "rho", items = 1:4)
compute_posterior_intervals(mod_second)

# Next, assume we receive the third and final batch of data. We can update
# the model again
data_third_batch <- potato_visual[9:12, ]
mod_final <- update_mallows(
  model = mod_second,
  new_data = setup_rank_data(rankings = data_third_batch))

# We can plot the same things as before
plot(mod_final)
compute_consensus(mod_final)

# UPDATING A MALLOWS MODEL WITH NEW OR UPDATED PARTIAL RANKINGS

# The sequential Monte Carlo algorithm works for data with missing ranks as
# well. This both includes the case where new users arrive with partial ranks,
# and when previously seen users arrive with more complete data than they had
# previously.

# We illustrate for top-k rankings of the first 10 users in potato_visual
potato_top_10 <- ifelse(potato_visual[1:10, ] > 10, NA_real_,
                        potato_visual[1:10, ])
potato_top_12 <- ifelse(potato_visual[1:10, ] > 12, NA_real_,
                        potato_visual[1:10, ])
potato_top_14 <- ifelse(potato_visual[1:10, ] > 14, NA_real_,
                        potato_visual[1:10, ])

# We need user IDs; here we simply use the row numbers
(user_ids <- 1:10)

# First, users provide top-10 rankings
mod_init <- compute_mallows(
  data = setup_rank_data(rankings = potato_top_10, user_ids = user_ids),
  compute_options = set_compute_options(nmc = 10000))

# Convergence seems fine. We set the burnin to 2000.
assess_convergence(mod_init)
burnin(mod_init) <- 2000

# Next assume the users update their rankings, so we have top-12 instead.
mod1 <- update_mallows(
  model = mod_init,
  new_data = setup_rank_data(rankings = potato_top_12, user_ids = user_ids),
  smc_options = set_smc_options(resampler = "stratified")
)
plot(mod1)

# Then, assume we get even more data, this time top-14 rankings:
mod2 <- update_mallows(
  model = mod1,
  new_data = setup_rank_data(rankings = potato_top_14, user_ids = user_ids)
)
plot(mod2)

# Finally, assume a set of new users arrive, who have complete rankings.
potato_new <- potato_visual[11:12, ]

# We need to update the user IDs, to show that these users are different
(user_ids <- 11:12)

mod_final <- update_mallows(
  model = mod2,
  new_data = setup_rank_data(rankings = potato_new, user_ids = user_ids)
)
plot(mod_final)

# We can also update models with pairwise preferences.
# We here start by running MCMC on the first 20 assessors of the beach data.
# A realistic application should run a larger number of iterations than we
# do in this example.
set.seed(3)
dat <- subset(beach_preferences, assessor <= 20)
mod <- compute_mallows(
  data = setup_rank_data(preferences = dat),
  compute_options = set_compute_options(nmc = 3000, burnin = 1000)
)

# Next we provide assessors 21 to 24 one at a time.
for (i in 21:24) {
  mod <- update_mallows(
    model = mod,
    new_data = setup_rank_data(
      preferences = subset(beach_preferences, assessor == i),
      user_ids = i,
      shuffle_unranked = TRUE),
    smc_options = set_smc_options(latent_sampling_lag = 0)
  )
}

# Compared to running full MCMC, there is a downward bias in the scale
# parameter. This can be alleviated by increasing the number of particles,
# MCMC steps, and the latent sampling lag.
plot(mod)
compute_consensus(mod)
/scratch/gouwar.j/cran-all/cranData/BayesMallows/inst/examples/update_mallows_example.R
--- title: "Introduction" output: rmarkdown::html_vignette: fig_width: 6 fig_height: 4 bibliography: ../inst/REFERENCES.bib link-citations: yes vignette: > %\VignetteIndexEntry{Introduction} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- This vignette contains update syntax for the code examples in @sorensen2020, since both the underlying code and the user interface are continuously evolving. We refer to @sorensen2020 for notation and all other details about the models and algorithms. ```r library(BayesMallows) set.seed(123) ``` # Analysis of complete rankings We illustrate the case of complete rankings with the potato datasets described in Section 4 of [@liu2019]. In short, a bag of 20 potatoes was bought, and 12 assessors were asked to rank the potatoes by weight, first by visual inspection, and next by holding the potatoes in hand. These datasets are available in `BayesMallows` as matrices with names `potato_weighing` and `potato_visual`, respectively. The true ranking of the potatoes' weights is available in the vector `potato_true_ranking`. In general, `compute_mallows` expects ranking datasets to have one row for each assessor and one column for each item. Each row has to be a proper permutation, possibly with missing values. We are interested in the posterior distribution of both the level of agreement between assessors, as described by $\alpha$, and in the latent ranking of the potatoes, as described by $\boldsymbol{\rho}$. We refer to the attached replication script for random number seeds for exact reproducibility. We start by defining our data object, which in this case consists of complete rankings. ```r complete_data <- setup_rank_data(rankings = potato_visual) ``` First, we do a test run to check convergence of the MCMC algorithm, and then get trace plots with `assess_convergence`. ```r bmm_test <- compute_mallows(data = complete_data) assess_convergence(bmm_test) ``` ![Trace plot for scale parameter.](complete_data_diagnostic_alpha-1.png) By default, `assess_convergence` returns a trace plot for $\alpha$, shown in the figure above. The algorithm seems to be mixing well after around 500 iterations. Next, we study the convergence of $\mathbf{\rho}$. To avoid overly complex plots, we pick potatoes $1-5$ by specifying this in the `items` argument. ```r assess_convergence(bmm_test, parameter = "rho", items = 1:5) ``` ![Trace plot for modal ranking.](complete_data_diagnostic_rho-1.png) The plot shows that the MCMC algorithm seems to have converged after around 1,000 iterations. From the trace plots, we decide to discard the first 1,000 MCMC samples as burn-in. We rerun the algorithm to get 20,000 samples after burn-in. The object `bmm_visual` has `S3` class `BayesMallows`, so we plot the posterior distribution of $\alpha$ with `plot.BayesMallows`. ```r bmm_visual <- compute_mallows( data = complete_data, compute_options = set_compute_options(nmc = 21000, burnin = 1000) ) plot(bmm_visual) ``` ![Posterior for scale parameter.](complete_data_model-1.png) We can also get posterior credible intervals for $\alpha$ using `compute_posterior_intervals`, which returns both highest posterior density intervals (HPDI) and central intervals in a `data.frame`. ```r compute_posterior_intervals(bmm_visual, decimals = 1L) #> parameter mean median hpdi central_interval #> 1 alpha 10.9 10.9 [9.5,12.3] [9.5,12.3] ``` Next, we can go on to study the posterior distribution of $\boldsymbol{\rho}$. 
If the `items` argument is not provided, and the number of items exceeds five, five items are picked at random for plotting. To show all potatoes, we explicitly set `items = 1:20`.

```r
plot(bmm_visual, parameter = "rho", items = 1:20)
```

![Posterior for modal ranking.](complete_data_posterior_rho-1.png)

## Jumping over the scale parameter

Updating $\alpha$ in every step of the MCMC algorithm may not be necessary, as the number of posterior samples typically is more than large enough to obtain good estimates of its posterior distribution. With the `alpha_jump` argument, we can tell the MCMC algorithm to update $\alpha$ only every `alpha_jump`-th iteration. To update $\alpha$ every 10th time $\boldsymbol{\rho}$ is updated, we do

```r
bmm_visual <- compute_mallows(
  data = complete_data,
  compute_options = set_compute_options(nmc = 21000, burnin = 1000, alpha_jump = 10)
)
```

## Other distance metrics

By default, `compute_mallows` uses the footrule distance, but the user can also choose to use Cayley, Kendall, Hamming, Spearman, or Ulam distance. Running the same analysis of the potato data with Spearman distance is done with the command

```r
bmm <- compute_mallows(
  data = complete_data,
  model_options = set_model_options(metric = "spearman"),
  compute_options = set_compute_options(nmc = 21000, burnin = 1000)
)
```

For the particular case of Spearman distance, `BayesMallows` only has integer sequences for computing the exact partition function with 14 or fewer items. For a larger number of items, a precomputed importance sampling estimate shipped with the package is used instead.

# Analysis of preference data

Unless the argument `error_model` to `set_model_options` is set, pairwise preference data are assumed to be consistent within each assessor. These data should be provided in a dataframe with the following three columns, with one row per pairwise comparison:

* `assessor` is an identifier for the assessor; either a numeric vector containing the assessor index, or a character vector containing the unique name of the assessor.
* `bottom_item` is a numeric vector containing the index of the item that was disfavored in each pairwise comparison.
* `top_item` is a numeric vector containing the index of the item that was preferred in each pairwise comparison.

A dataframe with this structure can be given in the `preferences` argument to `setup_rank_data`, which will generate the full set of implied rankings for each assessor as well as an initial ranking matrix consistent with the pairwise preferences.

We illustrate with the beach preference data containing stated pairwise preferences between random subsets of 15 images of beaches, by 60 assessors [@vitelli2018]. This dataset is provided in the dataframe `beach_preferences`, whose first six rows are shown below:

```r
head(beach_preferences)
#>   assessor bottom_item top_item
#> 1        1           2       15
#> 2        1           5        3
#> 3        1          13        3
#> 4        1           4        7
#> 5        1           5       15
#> 6        1          12        6
```

We can define a rank data object based on these preferences.

```r
beach_data <- setup_rank_data(preferences = beach_preferences)
```

It is instructive to compare the computed transitive closure to the stated preferences. Let us do this for all preferences stated by assessor 1 involving beach 2. We first look at the raw preferences.
```r
subset(beach_preferences, assessor == 1 & (bottom_item == 2 | top_item == 2))
#> assessor bottom_item top_item
#> 1 1 2 15
```

We then use the function `get_transitive_closure` to obtain the transitive closure, and then focus on the same subset:

```r
tc <- get_transitive_closure(beach_data)
subset(tc, assessor == 1 & (bottom_item == 2 | top_item == 2))
#> assessor bottom_item top_item
#> 11 1 2 6
#> 44 1 2 15
```

Assessor 1 has performed only one direct comparison involving beach 2, in which the assessor stated that beach 15 is preferred to beach 2. The implied orderings, on the other hand, contain two preferences involving beach 2. In addition to the statement that beach 15 is preferred to beach 2, all the other orderings stated by assessor 1 imply that this assessor prefers beach 6 to beach 2.

## Convergence diagnostics

As with the potato data, we can do a test run to assess the convergence of the MCMC algorithm. This time we use the `beach_data` object that we generated above, based on the stated preferences. We also set `save_aug = TRUE` to save the augmented rankings in each MCMC step, hence letting us assess the convergence of the augmented rankings.

```r
bmm_test <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(save_aug = TRUE))
```

Running `assess_convergence` for $\alpha$ and $\boldsymbol{\rho}$ shows good convergence after 1000 iterations.

```r
assess_convergence(bmm_test)
```

![Trace plot for scale parameter.](preferences_alpha_trace-1.png)

```r
assess_convergence(bmm_test, parameter = "rho", items = 1:6)
```

![Trace plot for modal ranking.](preferences_rho_trace-1.png)

To check the convergence of the data augmentation scheme, we need to set `parameter = "Rtilde"`, and also specify which items and assessors to plot. Let us start by considering items 2, 6, and 15 for assessor 1, which we studied above.

```r
assess_convergence(
  bmm_test, parameter = "Rtilde",
  items = c(2, 6, 15), assessors = 1)
```

![Trace plot for augmented rankings.](preferences_augmented_rankings-1.png)

The convergence plot illustrates how the augmented rankings vary, while also obeying their implied ordering. By further investigation of the transitive closure, we find that no orderings are implied between beach 1 and beach 15 for assessor 2. That is, the following statement returns zero rows.

```r
subset(tc, assessor == 2 & bottom_item %in% c(1, 15) & top_item %in% c(1, 15))
#> [1] assessor bottom_item top_item
#> <0 rows> (or 0-length row.names)
```

With the following command, we create trace plots to confirm this:

```r
assess_convergence(
  bmm_test, parameter = "Rtilde",
  items = c(1, 15), assessors = 2)
```

![Trace plot for augmented rankings where items have not been compared.](preferences_augmented_rankings_free-1.png)

As expected, the traces of the augmented rankings for beach 1 and 15 for assessor 2 do cross each other, since no ordering is implied between them. Ideally, we should look at trace plots for augmented ranks for more assessors to be sure that the algorithm is close to convergence. We can plot assessors 1-8 by setting `assessors = 1:8`. We also quite arbitrarily pick items 13-15, but the same procedure can be repeated for other items.

```r
assess_convergence(
  bmm_test, parameter = "Rtilde",
  items = 13:15, assessors = 1:8)
```

![Trace plots for items 13-15 and assessors 1-8.](preferences_augmented_rankings_many-1.png)

The plot indicates good mixing.
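To inspect all 60 assessors without crowding a single figure, we can loop over small batches; a minimal sketch using only functions already shown above (`assess_convergence` returns a plot object, so each plot is passed to `print`):

```r
# Trace plots of augmented rankings for all assessors, eight at a time
for (a in split(1:60, ceiling(1:60 / 8))) {
  print(assess_convergence(bmm_test, parameter = "Rtilde",
                           items = 13:15, assessors = a))
}
```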
## Posterior distributions

Based on the convergence diagnostics, and being fairly conservative, we discard the first 2,000 MCMC iterations as burn-in, and take 20,000 additional samples.

```r
bmm_beaches <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(nmc = 22000, burnin = 2000, save_aug = TRUE)
)
```

The posterior distributions of $\alpha$ and $\boldsymbol{\rho}$ can be studied as shown in the previous sections. Posterior intervals for the latent rankings of each beach are obtained with `compute_posterior_intervals`:

```r
compute_posterior_intervals(bmm_beaches, parameter = "rho")
#> parameter item mean median hpdi central_interval
#> 1 rho Item 1 7 7 [7] [7]
#> 2 rho Item 2 15 15 [15] [15]
#> 3 rho Item 3 3 3 [3,4] [3,4]
#> 4 rho Item 4 12 12 [11,13] [11,14]
#> 5 rho Item 5 9 9 [8,10] [8,10]
#> 6 rho Item 6 2 2 [1,2] [1,2]
#> 7 rho Item 7 8 8 [8,9] [8,10]
#> 8 rho Item 8 12 12 [11,13] [11,14]
#> 9 rho Item 9 1 1 [1,2] [1,2]
#> 10 rho Item 10 6 6 [5,6] [5,6]
#> 11 rho Item 11 4 4 [3,4] [3,5]
#> 12 rho Item 12 13 13 [12,14] [12,14]
#> 13 rho Item 13 10 10 [9,10] [9,10]
#> 14 rho Item 14 13 14 [11,14] [11,14]
#> 15 rho Item 15 5 5 [4,5] [4,6]
```

We can also rank the beaches according to their cumulative probability (CP) consensus [@vitelli2018] and their maximum posterior (MAP) rankings. This is done with the function `compute_consensus`, and the following call returns the CP consensus:

```r
compute_consensus(bmm_beaches, type = "CP")
#> cluster ranking item cumprob
#> 1 Cluster 1 1 Item 9 0.89815
#> 2 Cluster 1 2 Item 6 1.00000
#> 3 Cluster 1 3 Item 3 0.72665
#> 4 Cluster 1 4 Item 11 0.95160
#> 5 Cluster 1 5 Item 15 0.95400
#> 6 Cluster 1 6 Item 10 0.97645
#> 7 Cluster 1 7 Item 1 1.00000
#> 8 Cluster 1 8 Item 7 0.62585
#> 9 Cluster 1 9 Item 5 0.85950
#> 10 Cluster 1 10 Item 13 1.00000
#> 11 Cluster 1 11 Item 4 0.46870
#> 12 Cluster 1 12 Item 8 0.84435
#> 13 Cluster 1 13 Item 12 0.61905
#> 14 Cluster 1 14 Item 14 0.99665
#> 15 Cluster 1 15 Item 2 1.00000
```

The column `cumprob` shows the probability of having the given rank or lower. Looking at the second row, for example, this means that beach 6 has probability 1 of having latent rank $\rho_{6} \leq 2$. Next, beach 3 has probability 0.727 of having latent rank $\rho_{3} \leq 3$. This is an example of how the Bayesian framework can be used to not only rank items, but also to give posterior assessments of the uncertainty of the rankings. The MAP consensus is obtained similarly, by setting `type = "MAP"`.

```r
compute_consensus(bmm_beaches, type = "MAP")
#> cluster map_ranking item probability
#> 1 Cluster 1 1 Item 9 0.04955
#> 2 Cluster 1 2 Item 6 0.04955
#> 3 Cluster 1 3 Item 3 0.04955
#> 4 Cluster 1 4 Item 11 0.04955
#> 5 Cluster 1 5 Item 15 0.04955
#> 6 Cluster 1 6 Item 10 0.04955
#> 7 Cluster 1 7 Item 1 0.04955
#> 8 Cluster 1 8 Item 7 0.04955
#> 9 Cluster 1 9 Item 5 0.04955
#> 10 Cluster 1 10 Item 13 0.04955
#> 11 Cluster 1 11 Item 4 0.04955
#> 12 Cluster 1 12 Item 8 0.04955
#> 13 Cluster 1 13 Item 14 0.04955
#> 14 Cluster 1 14 Item 12 0.04955
#> 15 Cluster 1 15 Item 2 0.04955
```

Keeping in mind that the ranking of beaches is based on sparse pairwise preferences, we can also ask: for beach $i$, what is the probability of being ranked top-$k$ by assessor $j$, and what is the probability of having latent rank among the top-$k$? The function `plot_top_k` plots these probabilities.
By default, it sets `k = 3`, so a heatplot of the probability of being ranked top-3 is obtained with the call:

```r
plot_top_k(bmm_beaches)
```

![Top-3 rankings for beach preferences.](preferences_top_k-1.png)

The plot shows, for each beach as indicated on the left axis, the probability that assessor $j$ ranks the beach among top-3. For example, we see that assessor 1 has a very low probability of ranking beach 9 among her top-3, while assessor 3 has a very high probability of doing this. The function `predict_top_k` returns a dataframe with all the underlying probabilities. For example, in order to find all the beaches that are among the top-3 of assessors 1-5 with more than 90 \% probability, we would do:

```r
subset(predict_top_k(bmm_beaches), prob > .9 & assessor %in% 1:5)
#> assessor item prob
#> 301 1 Item 6 0.99435
#> 303 3 Item 6 0.99600
#> 305 5 Item 6 0.97605
#> 483 3 Item 9 1.00000
#> 484 4 Item 9 0.99975
#> 601 1 Item 11 0.95030
```

Note that assessor 2 does not appear in this table, i.e., there are no beaches for which we are at least 90 \% certain that the beach is among assessor 2's top-3.

# Clustering

`BayesMallows` comes with a set of sushi preference data, in which 5,000 assessors each have ranked a set of 10 types of sushi [@kamishima2003]. It is interesting to see if we can find subsets of assessors with similar preferences. The sushi dataset was analyzed with the BMM by @vitelli2018, but the results in that paper differ somewhat from those obtained here, due to a bug in the function that was used to sample cluster probabilities from the Dirichlet distribution. We start by defining the data object.

```r
sushi_data <- setup_rank_data(sushi_rankings)
```

## Convergence diagnostics

The function `compute_mallows_mixtures` computes multiple Mallows models with different numbers of mixture components. It returns a list of models of class `BayesMallowsMixtures`, in which each list element contains a model with a given number of mixture components. Its arguments are `n_clusters`, which specifies the number of mixture components to compute, an optional parameter `cl` which can be set to the return value of the `makeCluster` function in the `parallel` package, and an ellipsis (`...`) for passing on arguments to `compute_mallows`. Hypothesizing that we may not need more than 10 clusters to find a useful partitioning of the assessors, we start by doing test runs with 1, 4, 7, and 10 mixture components in order to assess convergence. We set the number of Monte Carlo samples to 5,000, and since this is a test run, we do not save within-cluster distances from each MCMC iteration and hence set `include_wcd = FALSE`.

```r
library("parallel")
cl <- makeCluster(detectCores())
bmm <- compute_mallows_mixtures(
  n_clusters = c(1, 4, 7, 10), data = sushi_data,
  compute_options = set_compute_options(nmc = 5000, include_wcd = FALSE),
  cl = cl)
stopCluster(cl)
```

The function `assess_convergence` automatically creates a grid plot when given an object of class `BayesMallowsMixtures`, so we can check the convergence of $\alpha$ with the command

```r
assess_convergence(bmm)
```

![Trace plots for scale parameters.](cluster_trace_alpha-1.png)

The resulting plot shows that all the chains seem to be close to convergence quite quickly. We can also make sure that the posterior distributions of the cluster probabilities $\tau_{c}$, $(c = 1, \dots, C)$ have converged properly, by setting `parameter = "cluster_probs"`.
```r
assess_convergence(bmm, parameter = "cluster_probs")
```

![Trace plots for cluster assignment probabilities.](cluster_trace_probs-1.png)

Note that with only one cluster, the cluster probability is fixed at the value 1, while for other numbers of mixture components, the chains seem to be mixing well.

## Deciding on the number of mixture components

Given the convergence assessment of the previous section, we are fairly confident that a burn-in of 1,000 is sufficient. We run 10,000 additional iterations, and try from 1 to 10 mixture components. Our goal is now to determine the number of mixture components to use, and in order to create an elbow plot, we set `include_wcd = TRUE` to compute the within-cluster distances in each step of the MCMC algorithm. Since the posterior distributions of $\rho_{c}$ ($c = 1,\dots,C$) are highly peaked, we save some memory by only saving every 10th value of $\boldsymbol{\rho}$ by setting `rho_thinning = 10`.

```r
cl <- makeCluster(detectCores())
bmm <- compute_mallows_mixtures(
  n_clusters = 1:10, data = sushi_data,
  compute_options = set_compute_options(nmc = 11000, burnin = 1000,
                                        rho_thinning = 10, include_wcd = TRUE),
  cl = cl)
stopCluster(cl)
```

We then create an elbow plot:

```r
plot_elbow(bmm)
```

![Elbow plot for deciding on the number of mixture components.](cluster_elbow-1.png)

Although not clear-cut, we see that the within-cluster sum of distances levels off at around 5 clusters, and hence we choose to use 5 clusters in our model.

## Posterior distributions

Having chosen 5 mixture components, we go on to fit a final model, still running 10,000 iterations after burnin. This time we call `compute_mallows` and set `n_clusters = 5`. We also set `clus_thinning = 10` to save the cluster assignments of each assessor in every 10th iteration, and `rho_thinning = 10` to save the estimated latent rank every 10th iteration. Note that thinning is done only because saving the values at every iteration would result in very large objects being stored in memory, thus slowing down computation. For statistical efficiency, it is best to avoid thinning.

```r
bmm <- compute_mallows(
  data = sushi_data,
  model_options = set_model_options(n_clusters = 5),
  compute_options = set_compute_options(
    nmc = 11000, burnin = 1000, clus_thinning = 10, rho_thinning = 10)
)
```

We can plot the posterior distributions of $\alpha$ and $\boldsymbol{\rho}$ in each cluster using `plot.BayesMallows` as shown previously for the potato data.

```r
plot(bmm)
```

![Posterior of scale parameter in each cluster.](cluster_posterior_alpha-1.png)

Since there are five clusters, the easiest way of visualizing posterior rankings is by choosing a single item.

```r
plot(bmm, parameter = "rho", items = 1)
```

![Posterior of item 1 in each cluster.](cluster_posterior_rho-1.png)

We can also show the posterior distributions of the cluster probabilities.

```r
plot(bmm, parameter = "cluster_probs")
```

![Posterior for cluster probabilities.](cluster_probs_posterior-1.png)

Using the argument `parameter = "cluster_assignment"`, we can visualize the posterior probability for each assessor of belonging to each cluster:

```r
plot(bmm, parameter = "cluster_assignment")
```

![Posterior for cluster assignment.](cluster_assignment_posterior-1.png)

The numbers underlying the plot can be found using `assign_cluster`. We can find clusterwise consensus rankings using `compute_consensus`.
```r
cp_consensus <- compute_consensus(bmm)
reshape(
  cp_consensus,
  direction = "wide",
  idvar = "ranking",
  timevar = "cluster",
  varying = list(unique(cp_consensus$cluster)),
  drop = "cumprob"
)
#> ranking Cluster 1 Cluster 2 Cluster 3 Cluster 4 Cluster 5
#> 1 1 shrimp fatty tuna fatty tuna sea urchin fatty tuna
#> 2 2 sea eel tuna salmon roe fatty tuna sea urchin
#> 3 3 squid sea eel sea urchin salmon roe tuna
#> 4 4 egg shrimp tuna sea eel salmon roe
#> 5 5 fatty tuna tuna roll shrimp shrimp sea eel
#> 6 6 tuna squid tuna roll tuna tuna roll
#> 7 7 tuna roll egg squid squid shrimp
#> 8 8 cucumber roll cucumber roll sea eel tuna roll squid
#> 9 9 salmon roe salmon roe egg egg egg
#> 10 10 sea urchin sea urchin cucumber roll cucumber roll cucumber roll
```

Note that for estimating cluster-specific parameters, label switching is a potential problem that needs to be handled. `BayesMallows` ignores label switching issues inside the MCMC, because it has been shown that this approach is better for ensuring full convergence of the chain [@jasra2005;@celeux2000]. MCMC iterations can be re-ordered after convergence is achieved, for example by using the implementation of Stephens' algorithm [@Stephens2000] provided by the R package `label.switching` [@papastamoulis2016]. A full example of how to assess label switching is provided in the examples for the `compute_mallows` function.

# References
--- title: "Sequential Monte Carlo for the Bayesian Mallows model" output: rmarkdown::html_vignette pkgdown: as_is: true bibliography: ../inst/REFERENCES.bib link-citations: yes vignette: > %\VignetteIndexEntry{Sequential Monte Carlo for the Bayesian Mallows model} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```r library(BayesMallows) library(ggplot2) set.seed(123) ``` This vignette describes sequential Monte Carlo (SMC) algorithms to provide updated approximations to the posterior distribution of a single Mallows model. We consider scenarios where we receive sequential information in the form of complete rankings, partial rankings and updated rankings from existing individuals who have previously provided a (partial) ranking. This vignette focuses on the code. For an in-depth treatment of the implemented methodology, see @steinSequentialInferenceMallows2023 which is available <a href="https://eprints.lancs.ac.uk/id/eprint/195759/" target="_blank">here</a>. ## New users with complete rankings We use the `sushi_rankings` dataset to illustrate the methodology [@kamishima2003nantonac]. This dataset contains 5000 complete rankings for 10 sushi dishes. ```r head(sushi_rankings) #> shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll #> [1,] 2 8 10 3 4 1 5 9 7 6 #> [2,] 1 8 6 4 10 9 3 5 7 2 #> [3,] 2 8 3 4 6 7 10 1 5 9 #> [4,] 4 7 5 6 1 2 8 3 9 10 #> [5,] 4 10 7 5 9 3 2 8 1 6 #> [6,] 4 6 2 10 7 5 1 9 8 3 ``` The SMC methodology is designed for the case where date arrive in batches. Assume that we initially have only 300 observed rankings, in `data_batch1`: ```r data_batch1 <- sushi_rankings[1:300, ] ``` We estimate a model on these data using `compute_mallows()`, which runs a full Metropolis-Hastings algorithm. ```r model1 <- compute_mallows(data = setup_rank_data(data_batch1)) ``` We assess convergence, and find that 300 is an appropriate burnin value. ```r assess_convergence(model1) ``` <div class="figure"> <img src="convergence_smc_full-1.png" alt="Trace plot for SMC model." height="4cm" /> <p class="caption">Trace plot for SMC model.</p> </div> ```r burnin(model1) <- 300 ``` Having saved this model, assume we receive another batch of preferences at a later timepoint, with an additional 300 rankings. ```r data_batch2 <- sushi_rankings[301:600, ] ``` We can now update the initial model, without rerunning the full Metropolis-Hastings algorithm, by calling `update_mallows()`. This function uses the sequential Monte Carlo algorithm of @steinSequentialInferenceMallows2023, and extracts a thinned sample of size `n_particles` from `model1` as initial values. ```r model2 <- update_mallows( model = model1, new_data = setup_rank_data(data_batch2), smc_options = set_smc_options(n_particles = 1000)) ``` All the posterior summary methods can be used for `model2`. For example, we can plot the posterior of $\alpha$. ```r plot(model2) ``` <div class="figure"> <img src="smc_complete_model2_alpha-1.png" alt="Posterior distribution of scale parameter for model 2." height="4cm" /> <p class="caption">Posterior distribution of scale parameter for model 2.</p> </div> And we can plot the posterior of the latent ranks of selected items: ```r plot(model2, parameter = "rho", items = c("shrimp", "sea eel", "tuna")) ``` <div class="figure"> <img src="smc_complete_model2-1.png" alt="Posterior distribution of selected latent rankings for model 2." 
height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 2.</p> </div> Next, assume we get yet another set of rankings later, now of size 1000. ```r data_batch3 <- sushi_rankings[601:1600, ] ``` We can re-update the model. ```r model3 <- update_mallows(model2, new_data = setup_rank_data(data_batch3)) ``` We can again plot posterior quantities, and the plots reveal that as expected, the posterior uncertainty about the rankings has decreased once we added more data. ```r plot(model3, parameter = "rho", items = c("shrimp", "sea eel", "tuna")) ``` <div class="figure"> <img src="smc_complete_model3-1.png" alt="Posterior distribution of selected latent rankings for model 3." height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 3.</p> </div> Finally, we add a batch with the last data and re-update the model. ```r data_batch4 <- sushi_rankings[1601:5000, ] model4 <- update_mallows( model3, new_data = setup_rank_data(rankings = data_batch4)) ``` The posterior uncertainty is now very small: ```r plot(model4, parameter = "rho", items = c("shrimp", "sea eel", "tuna")) ``` <div class="figure"> <img src="smc_complete_model4_rho-1.png" alt="Posterior distribution of selected latent rankings for model 4." height="2cm" /> <p class="caption">Posterior distribution of selected latent rankings for model 4.</p> </div> Below is a comparison of the posterior intervals of the dispersion parameter for each model. Note how the intervals get increasingly narrower as more data is added. ```r rbind( compute_posterior_intervals(model1), compute_posterior_intervals(model2), compute_posterior_intervals(model3), compute_posterior_intervals(model4) ) #> parameter mean median hpdi central_interval #> 1 alpha 1.768 1.766 [1.603,1.917] [1.604,1.922] #> 2 alpha 1.777 1.773 [1.630,1.935] [1.620,1.931] #> 3 alpha 1.753 1.756 [1.676,1.827] [1.677,1.827] #> 4 alpha 1.712 1.714 [1.667,1.748] [1.669,1.752] ``` As an assurance that the implementation is correct, we can compare the final model to what we get by running `compute_mallows` on the complete dataset: ```r mod_bmm <- compute_mallows( data = setup_rank_data(rankings = sushi_rankings), compute_options = set_compute_options(nmc = 5000, burnin = 1000) ) ``` We can compare the posteriors for $\alpha$ of the two models. Note that although both are rather wiggly, they agree very well about location and scale. ```r plot(mod_bmm) ``` <div class="figure"> <img src="smc_complete_mod_bmm-1.png" alt="Posterior distribution of scale parameter for Metropolis-Hastings run on the complete data." height="3cm" /> <p class="caption">Posterior distribution of scale parameter for Metropolis-Hastings run on the complete data.</p> </div> ```r plot(model4) ``` <div class="figure"> <img src="smc_complete_model4_alpha-1.png" alt="Posterior distribution of scale parameter for model 4." height="3cm" /> <p class="caption">Posterior distribution of scale parameter for model 4.</p> </div> The posterior intervals are also in good agreement. 
```r
rbind(
  compute_posterior_intervals(mod_bmm),
  compute_posterior_intervals(model4)
)
#> parameter mean median hpdi central_interval
#> 1 alpha 1.691 1.690 [1.648,1.734] [1.643,1.732]
#> 2 alpha 1.712 1.714 [1.667,1.748] [1.669,1.752]
```

The cumulative probability consensus is also in good agreement:

```r
compute_consensus(model4)
#> cluster ranking item cumprob
#> 1 Cluster 1 1 fatty tuna 1.000
#> 2 Cluster 1 2 salmon roe 1.000
#> 3 Cluster 1 3 tuna 1.000
#> 4 Cluster 1 4 shrimp 1.000
#> 5 Cluster 1 5 sea eel 1.000
#> 6 Cluster 1 6 tuna roll 0.835
#> 7 Cluster 1 7 squid 1.000
#> 8 Cluster 1 8 sea urchin 1.000
#> 9 Cluster 1 9 egg 1.000
#> 10 Cluster 1 10 cucumber roll 1.000
compute_consensus(mod_bmm)
#> cluster ranking item cumprob
#> 1 Cluster 1 1 fatty tuna 1
#> 2 Cluster 1 2 sea urchin 1
#> 3 Cluster 1 3 tuna 1
#> 4 Cluster 1 4 salmon roe 1
#> 5 Cluster 1 5 shrimp 1
#> 6 Cluster 1 6 sea eel 1
#> 7 Cluster 1 7 tuna roll 1
#> 8 Cluster 1 8 squid 1
#> 9 Cluster 1 9 egg 1
#> 10 Cluster 1 10 cucumber roll 1
```

## New users with partial or complete rankings

The functionality extends directly to partial rankings, including both top-$k$ rankings and rankings missing at random. Pairwise preferences are also supported, although not demonstrated here. For this demonstration we shall assume that we can only observe the top-5 ranked items from each user in the `sushi_rankings` dataset.

```r
data_partial <- sushi_rankings
data_partial[data_partial > 5] <- NA
head(data_partial)
#> shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> [1,] 2 NA NA 3 4 1 5 NA NA NA
#> [2,] 1 NA NA 4 NA NA 3 5 NA 2
#> [3,] 2 NA 3 4 NA NA NA 1 5 NA
#> [4,] 4 NA 5 NA 1 2 NA 3 NA NA
#> [5,] 4 NA NA 5 NA 3 2 NA 1 NA
#> [6,] 4 NA 2 NA NA 5 1 NA NA 3
```

Again, assume we start out with a batch of data, this time with 100 rankings:

```r
data_batch1 <- data_partial[1:100, ]
```

We estimate this model using `compute_mallows()`. Since there are `NA`s in the data, `compute_mallows()` will run imputation over the missing ranks.

```r
model1 <- compute_mallows(
  data = setup_rank_data(data_batch1),
  compute_options = set_compute_options(nmc = 10000)
)
```

The trace plot shows that convergence is reached quickly.

```r
assess_convergence(model1)
```

<div class="figure">
<img src="convergence_smc_partial-1.png" alt="Trace plot for SMC model." height="4cm" />
<p class="caption">Trace plot for SMC model.</p>
</div>

We set the burnin to 300.

```r
burnin(model1) <- 300
```

Below is the posterior for $\alpha$ after this initial run:

```r
plot(model1)
```

<div class="figure">
<img src="smc_init_posterior_partial-1.png" alt="Posterior distribution of scale parameter after initial run." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after initial run.</p>
</div>

Next, assume we receive 100 more top-5 rankings:

```r
data_batch2 <- data_partial[101:200, ]
```

We now update the initial model, using SMC. By default, a uniform distribution is used to propose new values of augmented ranks. The pseudo-likelihood proposal developed in @steinSequentialInferenceMallows2023 can be used instead, by setting `aug_method = "pseudo"` in the call to `set_compute_options()`, and we do this here.
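For reference, the relevant options object can also be constructed on its own; a minimal sketch (the same values are passed inline in the call that follows):

```r
# Compute options requesting the pseudo-likelihood proposal for the
# augmented rankings, with footrule distance used in the proposal
pseudo_opts <- set_compute_options(
  aug_method = "pseudo",
  pseudo_aug_metric = "footrule"
)
```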
```r
model2 <- update_mallows(
  model = model1,
  new_data = setup_rank_data(data_batch2),
  smc_options = set_smc_options(n_particles = 1000),
  compute_options = set_compute_options(
    aug_method = "pseudo", pseudo_aug_metric = "footrule")
)
```

Below is the posterior for $\alpha$:

```r
plot(model2)
```

<div class="figure">
<img src="smc_updated_posterior_partial-1.png" alt="Posterior distribution of scale parameter after updating the model based on new rankings." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after updating the model based on new rankings.</p>
</div>

When even more data arrives, we can update the model again. For example, assume we now get a set of complete rankings, with no missingness:

```r
data_batch3 <- sushi_rankings[201:300, ]
```

We update the model just as before:

```r
model3 <- update_mallows(model2, new_data = setup_rank_data(data_batch3))
```

```r
plot(model3)
```

<div class="figure">
<img src="smc_second_updated_posterior_partial-1.png" alt="Posterior distribution of scale parameter after updating the model based on new rankings." height="3cm" />
<p class="caption">Posterior distribution of scale parameter after updating the model based on new rankings.</p>
</div>

## Users updating their rankings

Another setting supported is when existing users update their partial rankings. For example, users can initially give top-5 rankings, and subsequently update these to top-10 rankings, top-20 rankings, etc. Another setting is when there are ranks missing at random, and the users subsequently provide these rankings. The main methodological issue in this case is that the augmented rankings at the previous SMC timepoint may be in conflict with the new rankings. In this case, the augmented rankings must be corrected, as described in Chapter 6 of @steinSequentialInferenceMallows2023. We provide an example again with the sushi data. We assume that the initial batch of data contains top-3 rankings provided by the first 100 users.

```r
set.seed(123)
sushi_reduced <- sushi_rankings[1:100, ]
data_batch1 <- ifelse(sushi_reduced > 3, NA_real_, sushi_reduced)
```

To keep track of existing users updating their preferences, we also need a user ID in this case, which is required to be a numeric vector.

```r
rownames(data_batch1) <- seq_len(nrow(data_batch1))
head(data_batch1)
#> shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> 1 2 NA NA 3 NA 1 NA NA NA NA
#> 2 1 NA NA NA NA NA 3 NA NA 2
#> 3 2 NA 3 NA NA NA NA 1 NA NA
#> 4 NA NA NA NA 1 2 NA 3 NA NA
#> 5 NA NA NA NA NA 3 2 NA 1 NA
#> 6 NA NA 2 NA NA NA 1 NA NA 3
```

We fit the standard Metropolis-Hastings algorithm to these data, yielding a starting point.

```r
mod_init <- compute_mallows(
  data = setup_rank_data(
    rankings = data_batch1,
    user_ids = as.numeric(rownames(data_batch1)))
)
```

Convergence seems to be quick, and we set the burnin to 300.

```r
assess_convergence(mod_init)
```

<div class="figure">
<img src="sushi_updated_batch1_burnin-1.png" alt="Trace plot for initial run on sushi batch 1." height="4cm" />
<p class="caption">Trace plot for initial run on sushi batch 1.</p>
</div>

```r
burnin(mod_init) <- 300
```

Next, assume we receive top-5 rankings from the same users. We now update the model using SMC.
```r
data_batch2 <- ifelse(sushi_reduced > 5, NA_real_, sushi_reduced)
rownames(data_batch2) <- seq_len(nrow(data_batch2))
model2 <- update_mallows(
  model = mod_init,
  new_data = setup_rank_data(
    rankings = data_batch2,
    user_ids = as.numeric(rownames(data_batch2))),
  compute_options = set_compute_options(
    aug_method = "pseudo", pseudo_aug_metric = "footrule")
)
```

We can plot the posterior distributions of $\alpha$ before and after.

```r
plot(mod_init) + ggtitle("Posterior of dispersion parameter after data batch 1")
```

<div class="figure">
<img src="sushi_updated_batch1_posterior-1.png" alt="Posterior after sushi batch 1." height="4cm" />
<p class="caption">Posterior after sushi batch 1.</p>
</div>

```r
plot(model2) + ggtitle("Posterior of dispersion parameter after data batch 2")
```

<div class="figure">
<img src="sushi_updated_batch2_posterior-1.png" alt="Posterior after sushi batch 2." height="4cm" />
<p class="caption">Posterior after sushi batch 2.</p>
</div>

Next, assume we receive top-8 rankings from the same users.

```r
data_batch3 <- ifelse(sushi_reduced > 8, NA_real_, sushi_reduced)
rownames(data_batch3) <- seq_len(nrow(data_batch3))
```

Before proceeding, it is instructive to study why this situation needs special care. Below are the augmented rankings for user 1 in particle 1:

```r
(v1 <- model2$augmented_rankings[, 1, 1])
#> [1] 2 7 10 3 4 1 5 6 8 9
```

Next, we show the data provided by user 1 in `data_batch3`:

```r
(v2a <- unname(data_batch3[1, ]))
#> [1] 2 8 NA 3 4 1 5 NA 7 6
```

By comparing the non-missing ranks, we can check if they are consistent or not:

```r
(v2b <- v2a[!is.na(v2a)])
#> [1] 2 8 3 4 1 5 7 6
v1[v1 %in% v2b]
#> [1] 2 7 3 4 1 5 6 8
all(v1[v1 %in% v2b] == v2b)
#> [1] FALSE
```

The provided data are not consistent with the augmented rankings in this case. This means that the augmented rankings for user 1 in particle 1 need to be corrected by the algorithm. Luckily, this happens automatically in our implementation, so we can update the model again.

```r
model3 <- update_mallows(
  model = model2,
  new_data = setup_rank_data(
    rankings = data_batch3,
    user_ids = as.numeric(rownames(data_batch3))))
```

Next we plot the posterior:

```r
plot(model3) + ggtitle("Posterior of dispersion parameter after data batch 3")
```

<div class="figure">
<img src="sushi_updated_batch3_posterior-1.png" alt="Posterior after sushi batch 3." height="4cm" />
<p class="caption">Posterior after sushi batch 3.</p>
</div>

Now assume we get a batch of new users, without missing ranks. These can be treated just as the other ones, but we need new user IDs.

```r
data_batch4 <- sushi_rankings[500:600, ]
rownames(data_batch4) <- 500:600
head(data_batch4)
#> shrimp sea eel tuna squid sea urchin salmon roe egg fatty tuna tuna roll cucumber roll
#> 500 6 5 4 8 2 3 7 1 9 10
#> 501 3 9 5 8 4 2 6 1 7 10
#> 502 3 1 8 5 4 7 9 2 6 10
#> 503 8 6 3 1 4 5 9 7 2 10
#> 504 4 7 1 2 9 10 3 8 5 6
#> 505 1 5 6 8 3 4 9 2 7 10
```

```r
model4 <- update_mallows(
  model = model3,
  new_data = setup_rank_data(
    rankings = data_batch4,
    user_ids = as.numeric(rownames(data_batch4))))
```

Here is the posterior for this model.

```r
plot(model4) + ggtitle("Posterior of dispersion parameter after data batch 4")
```

<div class="figure">
<img src="sushi_updated_batch4_posterior-1.png" alt="Posterior after sushi batch 4."
height="4cm" /> <p class="caption">Posterior after sushi batch 4.</p> </div> We can confirm that the implementation is sensible by giving the complete data to `compute_mallows`: ```r full_data <- rbind(data_batch3, data_batch4) mod_bmm <- compute_mallows(data = setup_rank_data(rankings = full_data)) ``` The trace plot indicates good convergence, and we set the burnin to 300. ```r assess_convergence(mod_bmm) ``` <div class="figure"> <img src="sushi_updated_burnin-1.png" alt="Trace plot for MCMC run on sushi data." height="4cm" /> <p class="caption">Trace plot for MCMC run on sushi data.</p> </div> ```r burnin(mod_bmm) <- 300 ``` We see that the posterior is close to the one of `model4`: ```r plot(mod_bmm) ``` <div class="figure"> <img src="sushi_updated_bmm_posterior-1.png" alt="Posterior for MCMC on sushi data." height="4cm" /> <p class="caption">Posterior for MCMC on sushi data.</p> </div> ## References
--- title: "MCMC with Parallel Chains" output: rmarkdown::html_vignette: fig_width: 6 fig_height: 4 bibliography: ../inst/REFERENCES.bib link-citations: yes vignette: > %\VignetteIndexEntry{MCMC with Parallel Chains} %\VignetteEngine{knitr::rmarkdown} %\VignetteEncoding{UTF-8} --- ```r library(BayesMallows) set.seed(123) ``` This vignette describes how to run Markov chain Monte Carlo with parallel chains. For an introduction to the "BayesMallows" package, please see [the introductory vignette](https://ocbe-uio.github.io/BayesMallows/articles/BayesMallows.html), which is an updated version of @sorensen2020. For parallel processing of particles with the sequential Monte Carlo algorithm of @steinSequentialInferenceMallows2023, see the [SMC vignette](https://ocbe-uio.github.io/BayesMallows/articles/SMC-Mallows.html). ## Why Parallel Chains? Modern computers have multiple cores, and on computing clusters one can get access to hundreds of cores easily. By running Markov Chains in parallel on $K$ cores, ideally from different starting points, we achieve at least the following: 1. The time you have to wait to get the required number of post-burnin samples scales like $1/K$. 2. You can check convergence by comparing chains. ## Parallel Chains with Complete Rankings In "BayesMallows" we use the "parallel" package for parallel computation. Parallelization is obtained by starting a cluster and providing it as an argument. Note that we also give one initial value of the dispersion parameter $\alpha$ to each chain. ```r library(parallel) cl <- makeCluster(4) fit <- compute_mallows( data = setup_rank_data(rankings = potato_visual), compute_options = set_compute_options(nmc = 5000), cl = cl ) stopCluster(cl) ``` We can assess convergence in the usual way: ```r assess_convergence(fit) ``` ![Trace plot of scale parameter for four chains.](parallel_assess_convergence_alpha-1.png) We can also assess convergence for the latent ranks $\boldsymbol{\rho}$. Since the initial value of $\boldsymbol{\rho}$ is sampled uniformly, the two chains automatically get different initial values. ```r assess_convergence(fit, parameter = "rho", items = 1:3) ``` ![Trace plot of modal ranking for four chains.](parallel_assess_convergence_rho-1.png) Based on the convergence plots, we set the burnin to 3000. ```r burnin(fit) <- 3000 ``` We can now use all the tools for assessing the posterior distributions as usual. The post-burnin samples for all parallel chains are simply combined, as they should be. Below is a plot of the posterior distribution of $\alpha$. ```r plot(fit) ``` ![Posterior of scale parameter, combing post-burnin samples from all chains.](parallel_posterior_alpha-1.png) Next is a plot of the posterior distribution of $\boldsymbol{\rho}$. ```r plot(fit, parameter = "rho", items = 4:7) ``` ![Posterior of modal ranking, combing post-burnin samples from all chains.](parallel_posterior_rho-1.png) ## Parallel Chains with Pairwise Preferences A case where parallel chains might be more strongly needed is with incomplete data, e.g., arising from pairwise preferences. In this case the MCMC algorithm needs to perform data augmentation, which tends to be both slow and sticky. We illustrate this with the beach preference data, again referring to @sorensen2020 for a more thorough introduction to the aspects not directly related to parallelism. 
```r
beach_data <- setup_rank_data(preferences = beach_preferences)
```

We run four parallel chains, letting the package generate random initial rankings, and this time also providing a vector of initial values for $\alpha$.

```r
cl <- makeCluster(4)
fit <- compute_mallows(
  data = beach_data,
  compute_options = set_compute_options(nmc = 4000, save_aug = TRUE),
  initial_values = set_initial_values(alpha_init = runif(4, 1, 4)),
  cl = cl
)
stopCluster(cl)
```

### Trace Plots

The convergence plot shows some long-range autocorrelation, but otherwise the chains seem to mix relatively well.

```r
assess_convergence(fit)
```

![Trace plot of scale parameter for beach preferences data, on four chains.](parallel_assess_converge_prefs_alpha-1.png)

Here is the convergence plot for $\boldsymbol{\rho}$:

```r
assess_convergence(fit, parameter = "rho", items = 4:6)
```

![Trace plot of modal ranking for beach preferences data, on four chains.](parallel_assess_converge_prefs_rho-1.png)

To avoid overplotting, it's a good idea to pick a low number of items and assessors. Here we look at items 1-3 for assessors 1 and 2.

```r
assess_convergence(fit,
  parameter = "Rtilde",
  items = 1:3, assessors = 1:2
)
```

![Trace plot of augmented rankings for beach preference data, on four chains.](parallel_assess_convergence_prefs_rtilde-1.png)

### Posterior Quantities

Based on the trace plots, the chains seem to be mixing well. We set the burnin to 1000.

```r
burnin(fit) <- 1000
```

We can now study the posterior distributions. Here is the posterior for $\alpha$. Note that by increasing the `nmc` argument to `compute_mallows` above, the density would appear smoother. In this vignette we have kept it low to reduce the run time.

```r
plot(fit)
```

![Posterior distribution for scale parameter.](parallel_beach_prefs_alpha_posterior-1.png)

We can also look at the posterior for $\boldsymbol{\rho}$.
```r
plot(fit, parameter = "rho", items = 6:9)
```

![Posterior distribution for modal rankings.](parallel_beach_prefs_rho_posterior-1.png)

We can also compute posterior intervals in the usual way:

```r
compute_posterior_intervals(fit, parameter = "alpha")
#> parameter mean median hpdi central_interval
#> 1 alpha 4.798 4.793 [4.242,5.373] [4.235,5.371]
```

```r
compute_posterior_intervals(fit, parameter = "rho")
#> parameter item mean median hpdi central_interval
#> 1 rho Item 1 7 7 [7] [6,7]
#> 2 rho Item 2 15 15 [15] [14,15]
#> 3 rho Item 3 3 3 [3,4] [3,4]
#> 4 rho Item 4 11 11 [11,13] [11,13]
#> 5 rho Item 5 9 9 [8,10] [8,10]
#> 6 rho Item 6 2 2 [1,2] [1,2]
#> 7 rho Item 7 9 8 [8,10] [8,10]
#> 8 rho Item 8 12 12 [11,13] [11,14]
#> 9 rho Item 9 1 1 [1,2] [1,2]
#> 10 rho Item 10 6 6 [5,6] [5,7]
#> 11 rho Item 11 4 4 [3,5] [3,5]
#> 12 rho Item 12 13 13 [12,14] [11,14]
#> 13 rho Item 13 10 10 [8,10] [8,10]
#> 14 rho Item 14 13 14 [12,14] [12,14]
#> 15 rho Item 15 5 5 [4,5] [4,6]
```

And we can compute the consensus ranking:

```r
compute_consensus(fit)
#> cluster ranking item cumprob
#> 1 Cluster 1 1 Item 9 0.8691667
#> 2 Cluster 1 2 Item 6 1.0000000
#> 3 Cluster 1 3 Item 3 0.6391667
#> 4 Cluster 1 4 Item 11 0.9404167
#> 5 Cluster 1 5 Item 15 0.9559167
#> 6 Cluster 1 6 Item 10 0.9636667
#> 7 Cluster 1 7 Item 1 1.0000000
#> 8 Cluster 1 8 Item 7 0.5473333
#> 9 Cluster 1 9 Item 5 0.9255833
#> 10 Cluster 1 10 Item 13 1.0000000
#> 11 Cluster 1 11 Item 4 0.6924167
#> 12 Cluster 1 12 Item 8 0.7833333
#> 13 Cluster 1 13 Item 12 0.6158333
#> 14 Cluster 1 14 Item 14 0.9958333
#> 15 Cluster 1 15 Item 2 1.0000000
```

```r
compute_consensus(fit, type = "MAP")
#> cluster map_ranking item probability
#> 1 Cluster 1 1 Item 9 0.2683333
#> 2 Cluster 1 2 Item 6 0.2683333
#> 3 Cluster 1 3 Item 3 0.2683333
#> 4 Cluster 1 4 Item 11 0.2683333
#> 5 Cluster 1 5 Item 15 0.2683333
#> 6 Cluster 1 6 Item 10 0.2683333
#> 7 Cluster 1 7 Item 1 0.2683333
#> 8 Cluster 1 8 Item 7 0.2683333
#> 9 Cluster 1 9 Item 5 0.2683333
#> 10 Cluster 1 10 Item 13 0.2683333
#> 11 Cluster 1 11 Item 4 0.2683333
#> 12 Cluster 1 12 Item 8 0.2683333
#> 13 Cluster 1 13 Item 12 0.2683333
#> 14 Cluster 1 14 Item 14 0.2683333
#> 15 Cluster 1 15 Item 2 0.2683333
```

We can compute the probability of being top-$k$, here for $k=4$:

```r
plot_top_k(fit, k = 4)
```

![Probability of being top-4 for beach preference data.](parallel_top_k-1.png)

# References
#' Bayesian Mass Balance
#'
#' Allows the user to specify the covariance structure for a Bayesian mass balance, simulates draws from the posterior distributions of reconciled masses and the relevant covariance matrix, and approximates the log-marginal likelihood.
#' @param X A matrix that maps constrained masses to observed masses. Can be built from the function \code{\link{constrainProcess}}, see documentation for details.
#' @param y A list of matrices of observed mass flow rates. Each matrix is a separate sample component. The rows of each matrix index the sampling location, and the columns index the sample set number. Can be specified using the \code{\link{importObservations}} function.
#' @param cov.structure Character string. \code{"indep"} allows for no correlation. \code{"component"} indicates correlation within an individual sample component. \code{"location"} indicates correlation within an individual sampling location. Not specifying \code{cov.structure} defaults to the \code{"indep"} structure.
#' @param priors List or character string. When the default value \code{priors = "default"} is used, \code{BMB} uses a set of default conjugate priors. When passing a list to the argument, the list must contain user specified hyperparameter values for each conjugate prior. To see the required list structure run \code{BMB} with \code{BTE = c(1,2,1)} and inspect the output. When \code{priors = "Jeffreys"}, the Jeffreys priors for \eqn{\Sigma} and \eqn{\sigma^2} with known mean, as given in \insertCite{priorlist}{BayesMassBal}, are used. When \code{priors = "Jeffreys"}, the prior used for \eqn{\beta} is proportional to the indicator function \eqn{I\lbrack \beta > 0 \rbrack}. See Details for more information.
#' @param BTE Numeric vector giving \code{c(Burn-in, Total-iterations, and Every)} for MCMC approximation of target distributions. The function \code{BMB} produces a total number of samples of \eqn{(T - B)/E}. \eqn{E} specifies that only one of every \eqn{E} draws are saved. \eqn{E > 1} reduces autocorrelation between obtained samples at the expense of computation time.
#' @param lml Logical indicating if the log-marginal likelihood should be approximated. Default is \code{FALSE}, which reduces computation time. Log-marginal likelihood is approximated using methods in \insertCite{chib}{BayesMassBal}.
#' @param ybal Logical indicating if the mass balanced samples for each \eqn{y} should be returned. Default is \code{TRUE}. Setting \code{ybal=FALSE} results in a savings in RAM and computation time.
#' @param diagnostics Logical or list indicating if diagnostic functions \code{\link[coda]{geweke.diag}} and \code{\link[coda]{effectiveSize}} \insertCite{coda}{BayesMassBal} should be computed for the obtained samples. The default of \code{TRUE} indicates diagnostics should be run with their default parameters. Alternatively, passing a list of the structure \code{list(frac1 = 0.1, frac2 = 0.5)} will run both diagnostics and allow \code{\link[coda]{geweke.diag}} to be run with parameters other than the default.
#' @param verb Numeric indicating verbosity of progress printed to the R console. The default of 1 prints messages and a progress bar to the console during all iterative methods. \code{verb = 0} indicates no messages are printed.
#' @return Returns a list of outputs
#' @return \item{\code{beta}}{List of matrices of samples from the distribution of reconciled data. Each matrix in the list is a separate sample component.
#' Each column of a matrix in \code{beta} is a draw from the target distribution.}
#' @return \item{\code{Sig}}{List of matrices containing draws from the distribution of each covariance matrix. If \code{S.t} is the \eqn{t^{th}} draw from the distribution of covariance matrix \code{S} and:
#' \itemize{\item \code{cov.structure = "indep"}, the \eqn{t^{th}} column of a matrix in \code{Sig} is \code{diag(S.t)}.
#' \item \code{cov.structure = "component"} or \code{"location"}, the \eqn{t^{th}} column of a matrix in \code{Sig} is equal to \code{S.t[upper.tri(S.t, diag = TRUE)]}.}}
#' @return \item{\code{priors}}{List of prior hyperparameters used in generating conditional posterior distributions and approximating log-marginal likelihood. The structure of the input argument \code{priors} is required to be the same as the structure of this returned list slice. See Details.}
#' @return \item{\code{cov.structure}}{Character string containing the covariance structure used.}
#' @return \item{\code{y.cov}}{List of character matrices indicating details for the structure of each covariance matrix. Only returned when \code{cov.structure = "location"}.}
#' @return \item{\code{lml}}{Numeric of the log-marginal likelihood approximation. Returns \code{NA} when \code{lml = FALSE}.}
#' @return \item{\code{diagnostics}}{List containing results from diagnostic functions \code{\link[coda]{geweke.diag}} and \code{\link[coda]{effectiveSize}}.}
#' @return \item{\code{ybal}}{List of samples from the distribution of reconciled mass flow rates, in the same format as the function argument \code{y}. Produced with argument \code{ybal = TRUE}. Equivalent to \code{lapply(BMB(...)$beta,function(X,x){x \%*\% X} , x = X)}. Viewing this output is more intuitive than viewing samples of \code{beta}, at the expense of RAM and some computation time.}
#' @return \item{\code{X}}{The function argument \code{X} is passed to the output so that it can be used with other \code{BayesMassBal} functions.}
#' @return \item{\code{type}}{Character string used by \code{\link{plot.BayesMassBal}}. \code{type = "BMB"} for an object returned from the \code{BMB} function.}
#'
#' @details
#'
#' See \code{vignette("Two_Node_Process", package = "BayesMassBal")} for further details on how to use function outputs.
#'
#' When the \code{priors} argument is left unspecified, a set of default conjugate priors are used, which are chosen to allow \code{BMB()} to work well in a general setting. In the current version of the \code{BayesMassBal} package, only the conjugate priors stated below can be used, but hyperparameter values can be specified by the user using the \code{priors} argument.
#'
#' The prior distribution on \code{beta} is a normal distribution truncated at 0. The mean of this distribution before truncation is the \href{https://en.wikipedia.org/wiki/Ordinary_least_squares}{ordinary least squares} (OLS) estimate of \eqn{\beta}. OLS estimates less than 0 are changed to 0. The prior variance, before truncation, of each element of \eqn{\beta} is set to:
#'
#' \deqn{10^{d + 6}}
#'
#' where \eqn{d} is the number of integer digits of the corresponding element of \eqn{\beta}.
#'
#' Currently, there is only support for diagonal prior covariance matrices for \eqn{\beta}.
#'
#' When \code{cov.structure = "indep"}, the errors of all observations in a sample set are independent.
#' An \href{https://en.wikipedia.org/wiki/Inverse-gamma_distribution}{inverse gamma} prior distribution, with \eqn{\alpha_0 = 0.000001} and \eqn{\beta_0 = 0.000001}, is placed on the variance of the mass flow rate for each sample component at each sample location.
#'
#' When \code{cov.structure = "component"} or \code{"location"}, the prior distribution on \eqn{\Sigma_i} is \href{https://en.wikipedia.org/wiki/Inverse-Wishart_distribution}{inverse Wishart} \eqn{(\nu_0, \nu_0 \times S_0)}. The degrees of freedom parameter, \eqn{\nu_0}, is equal to the dimension of \eqn{\Sigma_i}. The scale matrix parameter is equal to a matrix, \eqn{S_0}, with the sample variance of the relevant observation on the diagonal, multiplied by \eqn{\nu_0}.
#'
#' The user is able to specify the prior hyperparameters of the mean and variance of \code{beta}, \eqn{\alpha_0} and \eqn{\beta_0} for each \eqn{\sigma^2}, and the degrees of freedom and scale matrix for each \eqn{\Sigma_i} using the \code{priors} argument. It is advisable for the user to specify their own prior hyperparameters for \eqn{p(\sigma^2)} if the variance of any element is well under 1, or for \eqn{p(\beta)} if there is a wide range in the magnitude of observations.
#'
#' When \code{priors = "Jeffreys"}, \href{https://en.wikipedia.org/wiki/Jeffreys_prior}{Jeffreys} priors are used for the prior distribution of the variance and covariance parameters. Priors used are \eqn{p(\sigma^2) \propto \frac{1}{\sigma^2}} and \eqn{p(\Sigma) \propto |\Sigma|^{-(p+1)/2}}, as listed in \insertCite{priorlist}{BayesMassBal}. The Jeffreys prior for a \eqn{\beta} with infinite support is \eqn{p(\beta) \propto 1}. To preserve the prior information that \eqn{\beta > 0}, \eqn{p(\beta)\propto I\lbrack \beta > 0 \rbrack} is chosen. It is not possible to calculate the log-marginal likelihood using the methods in \insertCite{chib}{BayesMassBal} with Jeffreys priors. Therefore, if \code{priors = "Jeffreys"} and \code{lml = TRUE}, the \code{lml} argument will be ignored and a warning will be printed.
#'
#' \code{lml} is reported in base \eqn{e}. See \href{https://en.wikipedia.org/wiki/Bayes_factor#Interpretation}{here} for some guidance on how to interpret Bayes factors, but note log base 10 is used on Wikipedia.
#'
#' @examples
#' y <- importObservations(file = system.file("extdata", "twonode_example.csv",
#'                                            package = "BayesMassBal"),
#'                         header = TRUE, csv.params = list(sep = ";"))
#'
#' C <- matrix(c(1,-1,0,-1,0,0,1,-1,0,-1), byrow = TRUE, ncol = 5, nrow = 2)
#' X <- constrainProcess(C = C)
#'
#' BMB_example <- BMB(X = X, y = y, cov.structure = "indep",
#'                    BTE = c(10,300,1), lml = FALSE, verb=0)
#'
#' summary(BMB_example)
#'
#' @importFrom Rdpack reprompt
#' @importFrom Matrix bdiag
#' @importFrom tmvtnorm rtmvnorm dtmvnorm
#' @importFrom LaplacesDemon rinvwishart dinvwishart rinvgamma dinvgamma
#' @importFrom stats sd
#' @importFrom utils txtProgressBar setTxtProgressBar
#' @importFrom coda geweke.diag effectiveSize
#' @export
#'
#' @references
#' \insertRef{chib}{BayesMassBal}
#' \insertRef{gibbsexpl}{BayesMassBal}
#' \insertRef{coda}{BayesMassBal}
#' \insertRef{priorlist}{BayesMassBal}

BMB <- function(X, y, cov.structure = c("indep","component","location"), priors = "default",
                BTE = c(500,20000,1), lml = FALSE, ybal = TRUE, diagnostics = TRUE, verb = 1){
  # When cov.structure is left at its full default vector, fall back to "indep"
  if(all(cov.structure == c("indep","component","location"))){cov.structure <- "indep"}
  if(is.null(names(y))){names(y) <- paste0("component",1:length(y))}
  if(!is.matrix(y[[1]])){y <- lapply(y,as.matrix)}
  # identical() is used so that this check also works when priors is a list
  if(identical(priors, "Jeffreys") && lml == TRUE){
    lml <- FALSE
    warning("Methods used do not allow lml approximation when argument: priors = \"Jeffreys\". lml has been set to FALSE\n", immediate. = TRUE)
  }
  if(cov.structure == "indep"){
    samps <- indep_sig(X = X, y = y, priors = priors, BTE = BTE, verb = verb)
    chib.out <- NA
    if(lml == TRUE){chib.out <- chib_indep(s.l = samps, X = X, y = y, verb = verb)$lml}
  }else if(cov.structure == "component"){
    samps <- component_sig(X = X, y = y, priors = priors, BTE = BTE, verb = verb)
    chib.out <- NA
    if(lml == TRUE){chib.out <- chib_component(s.l = samps, X = X, y = y, verb = verb)$lml}
  }else if(cov.structure == "location"){
    samps <- location_sig(X = X, y = y, priors = priors, BTE = BTE, verb = verb)
    chib.out <- NA
    if(lml == TRUE){chib.out <- chib_location(s.l = samps, X = X, y = y, verb = verb)$lml}
  }else{
    # stop() rather than warning(): execution cannot continue without samples
    stop("Please select a valid covariance structure. See argument cov.structure in documentation.")
  }
  samps$lml <- chib.out
  if(any(diagnostics != FALSE)){
    diagnostics <- mcmcdiagnostics(samps, diagnostics)
    samps$diagnostics <- diagnostics
  }
  if(ybal == TRUE){
    samps$ybal <- lapply(samps$beta, function(X,x){
      X <- x %*% X
      row.names(X) <- paste(rep("y", times = nrow(X)), 1:nrow(X), sep ="_")
      return(X)
    }, x = X)
  }
  samps$X <- X
  samps$type <- "BMB"
  class(samps) <- "BayesMassBal"
  if(verb != 0){message("Done!")}
  return(samps)
}
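# A small usage sketch (not part of the package source), mirroring the roxygen
# example above: after a run, `ybal` follows the same list structure as `y`, so
# posterior mean flow rates at each sampling location are one lapply() away.
y <- importObservations(file = system.file("extdata", "twonode_example.csv",
                                           package = "BayesMassBal"),
                        header = TRUE, csv.params = list(sep = ";"))
C <- matrix(c(1,-1,0,-1,0,0,1,-1,0,-1), byrow = TRUE, ncol = 5, nrow = 2)
X <- constrainProcess(C = C)
fit <- BMB(X = X, y = y, cov.structure = "indep", BTE = c(10, 300, 1), verb = 0)
lapply(fit$ybal, rowMeans)  # posterior mean mass flow rate at each location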
chib_component <- function(s.l, X, y, verb){
  beta <- do.call(rbind, s.l$beta)
  Sig <- s.l$Sig
  x.unit <- X
  Bprior <- s.l$priors$beta
  Sprior <- s.l$priors$Sig$S
  nu0 <- s.l$priors$Sig$nu0
  ### Components
  M <- length(Sig)
  ### Locations
  N <- nrow(y[[1]])
  ### Number of tests
  K <- ncol(y[[1]])
  T <- ncol(beta)
  P <- nrow(beta)
  p.M <- P/M
  ### Define Ybar and X
  Y <- do.call(rbind, y)
  Ybar <- apply(Y, 1, mean)
  Y <- as.vector(Y)
  yibar <- lapply(y, rowMeans)
  X.unit <- bdiag(replicate(M, x.unit, simplify = FALSE))
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))
  ## Prior Hyperparameters
  S0 <- Bprior$V0
  B0 <- Bprior$mu0
  S0i <- solve(S0)
  S0iB0 <- S0i %*% B0
  ### Initialize
  p.int <- rep(NA, times = M)
  B.bar <- apply(beta, 1, mean)
  Sig.bar <- list()
  for(j in 1:M){
    Sig.bar[[j]] <- apply(Sig[[j]], 1, mean)
  }
  Sig.barfull <- list()
  Si.full <- Bcov.full <- list()
  S <- matrix(NA, N, N)
  postB <- rep(NA, times = T)
  lpost.Sig <- rep(NA, times = M)
  lprior.Sig <- rep(NA, times = M)
  if(verb != 0){
    message("Approximating integral for log-marginal likelihood")
    pb <- txtProgressBar(min = 0, max = T/100, initial = 0, style = 3)
    step <- 0
  }
  # Average the conditional posterior ordinate of beta over the MCMC draws
  for(t in 1:T){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    for(i in 1:M){
      s <- Sig[[i]][,t]
      S[upper.tri(S, diag = TRUE)] <- s
      S[lower.tri(S)] <- t(S)[lower.tri(S)]
      Si <- solve(S)
      xtSix <- t(x.unit) %*% Si %*% x.unit * K
      S0i.use <- diag(S0i)[((i-1)*p.M+1):(i*p.M)]
      S0iB0.use <- S0iB0[((i-1)*p.M+1):(i*p.M)]
      cov.use <- solve(xtSix + diag(S0i.use))
      bhat <- cov.use %*% (S0iB0.use + t(x.unit) %*% Si %*% yibar[[i]] * K)
      p.int[i] <- dtmvnorm(B.bar[((i-1)*p.M + 1):(i*p.M)], mean = bhat[,1], sigma = cov.use,
                           lower = rep(0, times = p.M), log = TRUE)
    }
    postB[t] <- sum(p.int)
  }
  if(verb != 0){close(pb)}
  lpostB <- log(mean(exp(postB)))
  for(i in 1:M){
    s <- Sig.bar[[i]]
    S[upper.tri(S, diag = TRUE)] <- s
    S[lower.tri(S)] <- t(S)[lower.tri(S)]
    Sig.barfull[[i]] <- S
    B.use <- B.bar[(((i-1)*p.M)+1):(i*p.M)]
    xB <- x.unit %*% B.use
    xB <- xB[,1]
    psi <- matrix(0, ncol = N, nrow = N)
    for(j in 1:K){
      psi <- (y[[i]][,j] - xB) %*% t(y[[i]][,j] - xB) + psi
    }
    psi <- psi + Sprior[[i]]*(nu0)
    lpost.Sig[i] <- dinvwishart(S, nu0+K, psi, log = TRUE)
    lprior.Sig[i] <- dinvwishart(S, nu0, Sprior[[i]]*nu0, log = TRUE)
  }
  lpriorB <- dtmvnorm(B.bar, mean = B0, sigma = S0, lower = rep(0, times = P), log = TRUE)
  lpost <- sum(lpost.Sig) + lpostB
  lprior <- sum(lprior.Sig) + lpriorB
  Omega <- bdiag(rep(Sig.barfull, times = K))
  llik <- dtmvnorm(Y, mean = as.numeric(X %*% B.bar), sigma = Omega,
                   lower = rep(0, times = length(Y)), log = TRUE)
  # Chib's identity on the log scale: log m(y) = log-likelihood + log-prior - log-posterior
  lml <- llik + lprior - lpost
  return(list(lpost = lpost, llik = llik, lprior = lprior, lml = lml))
}
chib_indep <- function(s.l, X, y, verb){
  beta <- do.call(rbind, s.l$beta)
  Sig <- s.l$Sig
  x.unit <- X
  Bprior <- s.l$priors$beta
  Sprior <- s.l$priors$phi
  ### Components
  M <- length(Sig)
  ### Locations
  N <- nrow(y[[1]])
  ### Number of tests
  K <- ncol(y[[1]])
  T <- ncol(beta)
  P <- nrow(beta)
  p.M <- P/M
  ### Define Ybar and X
  Y <- do.call(rbind, y)
  Ybar <- apply(Y, 1, mean)
  Y <- as.vector(Y)
  yibar <- lapply(y, rowMeans)
  X.unit <- bdiag(replicate(M, x.unit, simplify = FALSE))
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))
  ## Prior Hyperparameters
  S0 <- Bprior$V0
  B0 <- Bprior$mu0
  S0i <- solve(S0)
  S0iB0 <- S0i %*% B0
  ### Initialize
  p.int <- rep(NA, times = M)
  B.bar <- apply(beta, 1, mean)
  Sig.bar <- list()
  for(j in 1:M){
    Sig.bar[[j]] <- apply(Sig[[j]], 1, mean)
  }
  Sig.barfull <- list()
  Si.full <- Bcov.full <- list()
  S <- matrix(NA, N, N)
  postB <- rep(NA, times = T)
  lpost.Sig <- rep(NA, times = M)
  lprior.Sig <- rep(NA, times = M)
  rate <- rep(NA, times = N)
  if(verb != 0){
    message("Approximating integral for log-marginal likelihood")
    pb <- txtProgressBar(min = 0, max = T/100, initial = 0, style = 3)
    step <- 0
  }
  # Average the conditional posterior ordinate of beta over the MCMC draws
  for(t in 1:T){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    for(i in 1:M){
      Si <- diag(1/Sig[[i]][,t])
      xtSix <- t(x.unit) %*% Si %*% x.unit * K
      S0i.use <- diag(S0i)[((i-1)*p.M+1):(i*p.M)]
      S0iB0.use <- S0iB0[((i-1)*p.M+1):(i*p.M)]
      cov.use <- solve(xtSix + diag(S0i.use))
      bhat <- cov.use %*% (S0iB0.use + t(x.unit) %*% Si %*% yibar[[i]] * K)
      p.int[i] <- dtmvnorm(B.bar[((i-1)*p.M + 1):(i*p.M)], mean = bhat[,1], sigma = cov.use,
                           lower = rep(0, times = p.M), log = TRUE)
    }
    postB[t] <- sum(p.int)
  }
  if(verb != 0){close(pb)}
  lpostB <- log(mean(exp(postB)))
  for(i in 1:M){
    s <- Sig.bar[[i]]
    B.use <- B.bar[(((i-1)*p.M)+1):(i*p.M)]
    xB <- x.unit %*% B.use
    for(j in 1:N){
      rate[j] <- 0.5*(t(xB[j] - y[[i]][j,]) %*% (xB[j] - y[[i]][j,])) + Sprior[[i]]$b[j]
    }
    shape <- N/2 + Sprior[[i]]$a
    lpost.Sig[i] <- sum(dinvgamma(s, shape = shape, scale = rate, log = TRUE))
    lprior.Sig[i] <- sum(dinvgamma(s, shape = Sprior[[i]]$a, scale = Sprior[[i]]$b, log = TRUE))
  }
  lpriorB <- dtmvnorm(B.bar, mean = B0, sigma = S0, lower = rep(0, times = P), log = TRUE)
  lpost <- sum(lpost.Sig) + lpostB
  lprior <- sum(lprior.Sig) + lpriorB
  Omega <- do.call("c", Sig.bar)
  Omega <- rep(Omega, times = K)
  Omega <- diag(Omega)
  llik <- dtmvnorm(Y, mean = as.numeric(X %*% B.bar), sigma = Omega,
                   lower = rep(0, times = length(Y)), log = TRUE)
  # Chib's identity on the log scale
  lml <- llik + lprior - lpost
  return(list(lpost = lpost, llik = llik, lprior = lprior, lml = lml))
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/chib_indep.R
## Internal: Chib-style log-marginal likelihood estimate under the
## "location" covariance structure (one M x M covariance per location).
chib_location <- function(s.l, X, y, verb){
  beta <- do.call(rbind, s.l$beta)
  Sig <- s.l$Sig
  x.unit <- X
  Bprior <- s.l$priors$beta
  Sprior <- s.l$priors$Sig$S
  nu0 <- s.l$priors$Sig$nu0

  ### Components
  M <- length(y)
  ### Locations
  N <- nrow(y[[1]])
  ### Number of tests
  K <- ncol(y[[1]])
  T <- ncol(beta)
  P <- nrow(beta)
  p.M <- P/M

  ## Rearrange the design so rows are grouped by location rather than component
  X.unit <- bdiag(replicate(M, x.unit, simplify = FALSE))
  X.unit <- as.matrix(X.unit)
  X.shuffle <- matrix(1:(N*M), ncol = N, nrow = M, byrow = TRUE)
  X.shuffle <- as.vector(X.shuffle)
  X.unit <- X.unit[X.shuffle, ]
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))

  y.reorg <- list()
  X.N <- list()
  for(i in 1:N){
    y.reorg[[i]] <- matrix(NA, ncol = K, nrow = M)
    X.N[[i]] <- X.unit[((i-1)*M + 1):(i*M), ]
    for(j in 1:M){
      y.reorg[[i]][j, ] <- y[[j]][i, ]
    }
  }
  ybar <- lapply(y.reorg, rowMeans)
  ybar <- do.call(rbind, ybar)
  ybar <- as.vector(t(ybar))
  Y <- do.call(rbind, y.reorg)
  Y <- as.vector(Y)

  ## Prior hyperparameters
  S0 <- Bprior$V0
  B0 <- Bprior$mu0
  S0i <- solve(S0)
  S0iB0 <- S0i %*% B0

  ### Initialize
  B.bar <- apply(beta, 1, mean)
  Sig.bar <- list()
  for(j in 1:N){
    Sig.bar[[j]] <- apply(Sig[[j]], 1, mean)
  }
  Sig.barfull <- list()
  Si.full <- Bcov.full <- list()
  S <- matrix(NA, M, M)
  postB <- rep(NA, times = T)
  ## One ordinate per location covariance (the original allocated M slots,
  ## which leaves NAs in the sum whenever N < M)
  lpost.Sig <- rep(NA, times = N)
  lprior.Sig <- rep(NA, times = N)
  Si <- list()

  if(verb != 0){
    message("Approximating integral for log-marginal likelihood")
    pb <- txtProgressBar(min = 0, max = T/100, initial = 0, style = 3)
    step <- 0
  }

  for(t in 1:T){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    for(i in 1:N){
      s <- Sig[[i]][, t]
      S[upper.tri(S, diag = TRUE)] <- s
      S[lower.tri(S)] <- t(S)[lower.tri(S)]
      Si[[i]] <- solve(S)
    }
    Wi <- as.matrix(bdiag(Si))
    XtWiX <- t(X.unit) %*% Wi %*% X.unit * K
    cov.use <- solve(XtWiX + S0i)
    bhat <- cov.use %*% (S0iB0 + t(X.unit) %*% Wi %*% ybar * K)
    postB[t] <- dtmvnorm(B.bar, mean = bhat[, 1], sigma = cov.use,
                         lower = rep(0, times = P))
  }
  if(verb != 0){close(pb)}

  ## Rao-Blackwellized posterior ordinate of beta at B.bar
  ## (postB holds densities here, not log densities)
  lpostB <- log(mean(postB))

  for(i in 1:N){
    s <- Sig.bar[[i]]
    S[upper.tri(S, diag = TRUE)] <- s
    S[lower.tri(S)] <- t(S)[lower.tri(S)]
    Sig.barfull[[i]] <- S
    xB <- X.N[[i]] %*% B.bar
    xB <- xB[, 1]
    psi <- matrix(0, ncol = M, nrow = M)
    for(j in 1:K){
      psi <- (y.reorg[[i]][, j] - xB) %*% t(y.reorg[[i]][, j] - xB) + psi
    }
    psi <- psi + Sprior[[i]]*(nu0)
    lpost.Sig[i] <- dinvwishart(S, nu0 + K, psi, log = TRUE)
    lprior.Sig[i] <- dinvwishart(S, nu0, Sprior[[i]]*nu0, log = TRUE)
  }

  lpriorB <- dtmvnorm(B.bar, mean = B0, sigma = S0,
                      lower = rep(0, times = P), log = TRUE)
  lpost <- sum(lpost.Sig) + lpostB
  lprior <- sum(lprior.Sig) + lpriorB
  Omega <- bdiag(rep(Sig.barfull, times = K))
  llik <- dtmvnorm(Y, mean = as.numeric(X %*% B.bar), sigma = Omega,
                   lower = rep(0, times = length(Y)), log = TRUE)

  ## Chib's identity: log m(y) = log f(y|theta*) + log p(theta*) - log p(theta*|y)
  lml <- llik + lprior - lpost
  return(list(lpost = lpost, llik = llik, lprior = lprior, lml = lml))
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/chib_location.R
## Internal Gibbs sampler for the "component" covariance structure:
## one N x N error covariance matrix per sample component.
component_sig <- function(X, y, priors, BTE = c(3000, 100000, 1), verb = 1,
                          eps = sqrt(.Machine$double.eps)){

  ## Number of digits left of the decimal; used to scale default priors
  nDigits <- function(x){
    truncX <- floor(abs(x))
    if(truncX != 0){
      floor(log10(truncX)) + 1
    } else {
      1
    }
  }

  ## Tests
  K <- ncol(y[[1]])
  ## Sample locations
  N <- nrow(y[[1]])
  ## Components
  M <- length(y)

  burn <- BTE[1]
  iters <- BTE[2]
  thin <- BTE[3]

  x.unit <- X
  X.unit <- as.matrix(bdiag(replicate(M, x.unit, simplify = FALSE)))
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))
  no.betas <- ncol(X)/M

  X.M <- list()
  Sig <- list()
  Sig.rows <- length(diag(N)[upper.tri(diag(N), diag = TRUE)])
  for(i in 1:M){
    X.M[[i]] <- X.unit[((i-1)*N + 1):(i*N), ]
    Sig[[i]] <- matrix(NA, nrow = Sig.rows, ncol = iters)
    Sig[[i]][, 1] <- diag(N)[upper.tri(diag(N), diag = TRUE)]
  }

  ybar <- lapply(y, rowMeans)
  Y <- do.call(rbind, y)
  Y <- as.vector(Y)

  beta <- replicate(M, matrix(NA, nrow = no.betas, ncol = iters), simplify = FALSE)
  names(beta) <- names(Sig) <- names(y)

  bhat <- as.vector(solve(t(X) %*% X) %*% t(X) %*% Y)
  bhat[which(bhat < 0)] <- 0

  ## Check is.list() first: comparing a list against a string with `==`
  ## inside if() is fragile
  if(is.list(priors)){
    nu0 <- priors$Sig$nu0
    mu0 <- priors$beta$mu0
    V0i <- solve(priors$beta$V0)
    S.prior <- priors$Sig$S
    if(verb != 0){message("User Specified Priors Used\n")}
  }else if(identical(priors, "Jeffreys")){
    mu0 <- rep(0, length(bhat))
    V0i <- diag(length(bhat))*0
    nu0 <- 0
    S.prior <- replicate(M, matrix(0, nrow = N, ncol = N), simplify = FALSE)
    if(verb != 0){message("Jeffreys Priors Used\n")}
  }else{
    S.prior <- lapply(y, function(X){diag((apply(X, 1, sd))^2) + diag(nrow(X))*eps})
    nu0 <- N
    mu0 <- bhat
    dgts <- sapply(bhat, nDigits)
    V0i <- diag(1/(10^(dgts + 6)))
    if(verb != 0){message("Default Priors Used\n")}
  }

  V0imu0 <- V0i %*% mu0
  bhat[which(bhat == 0)] <- eps
  b.use <- bhat
  beta.temp <- rep(NA, times = no.betas*M)

  if(verb != 0){
    message("Sampling from posterior distributions")
    pb <- txtProgressBar(min = 0, max = iters/100, initial = 0, style = 3)
    step <- 0
  }

  for(t in 2:iters){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    for(i in 1:M){
      ## Draw Sigma_i | beta from its inverse-Wishart full conditional
      xB <- as.vector(X.M[[i]] %*% b.use)
      psi <- matrix(0, ncol = N, nrow = N)
      for(k in 1:K){
        psi <- (y[[i]][, k] - xB) %*% t(y[[i]][, k] - xB) + psi
      }
      psi <- psi + S.prior[[i]]*(nu0)
      W <- rinvwishart(K + nu0, psi)
      Sig[[i]][, t] <- W[upper.tri(W, diag = TRUE)]
      ## Draw beta_i | Sigma_i from its truncated-normal full conditional
      Wi <- solve(W)
      xtWix <- t(x.unit) %*% Wi %*% x.unit * K
      precis <- xtWix + diag(diag(V0i)[((i-1)*no.betas + 1):(i*no.betas)])
      cov.use <- solve(precis)
      bhat <- cov.use %*% (V0imu0[((i-1)*no.betas + 1):(i*no.betas)] +
                             t(x.unit) %*% Wi %*% ybar[[i]] * K)
      beta.temp[((i-1)*no.betas + 1):(i*no.betas)] <- beta[[i]][, t] <-
        as.vector(rtmvnorm(n = 1, mean = bhat[, 1], sigma = cov.use,
                           lower = rep(0, length = no.betas)))
    }
    b.use <- beta.temp
  }
  if(verb != 0){close(pb)}

  ## Apply burn-in and thinning
  Sig <- lapply(Sig, function(X, b){X[, -(1:b)]}, b = burn)
  Sig <- lapply(Sig, function(X, t){X[, seq(from = 1, to = ncol(X), by = t)]}, t = thin)
  beta <- lapply(beta, function(X, b){X[, -(1:b)]}, b = burn)
  beta <- lapply(beta, function(X, t){X[, seq(from = 1, to = ncol(X), by = t)]}, t = thin)

  samps <- list(beta = beta, Sig = Sig,
                priors = list(beta = list(mu0 = mu0, V0 = diag(1/diag(V0i))),
                              Sig = list(nu0 = nu0, S = S.prior)),
                cov.structure = "component")
  return(samps)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/component_sig.R
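All three samplers handle the BTE argument the same way: discard the first burn columns of each draw matrix, then keep every thin-th remaining column. A small standalone illustration with made-up numbers:

## BTE = c(burn, total, every): 1000 raw draws of 3 parameters,
## 200 discarded as burn-in, then every 5th remaining draw retained.
draws <- matrix(rnorm(3*1000), nrow = 3, ncol = 1000)
burn <- 200; thin <- 5
kept <- draws[, -(1:burn)]
kept <- kept[, seq(from = 1, to = ncol(kept), by = thin)]
dim(kept)   # 3 x 160 retained samples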
#' Matrix Constraining Process
#'
#' Generates matrix \eqn{X} which maps constrained masses \eqn{\beta} to observed masses \eqn{y} for an individual sample component, when given a linear system of constraining equations.
#' @param C Matrix of constraints for a process at steady state. See Details below.
#' @param file Character string indicating the file path of a \code{*.csv} file containing linear constraints. Only values of -1, 0, and 1 are valid. The first row of the \code{file} is required to be a header naming the sampling locations. See Details for an example.
#' @details The output of this function is meant to be used as the input parameter \code{X} in the \code{\link{BMB}} function. The matrix \code{C}, or the matrix imported from \code{file}, indexes sampling locations by column and constraints by row. Only values of -1, 0, and 1 are valid, indicating mass leaving a node, a location that is not relevant to a node, and mass entering a node, respectively. Specify exactly one constraint around each node; further constraints are redundant. Each sample component is subject to the same constraints, and therefore the constraints given to \code{constrainProcess} do not need to be repeated for each component.
#'
#' See \code{vignette("Two_Node_Process")} for an application example.
#'
#' The file path of a \code{*.csv} file, which could be used to indicate the constraints for the process in \code{vignette("Two_Node_Process")}, can be found by typing \code{system.file("extdata", "twonode_constraints.csv", package = "BayesMassBal")}, and can be used as a template for other processes.
#'
#' @return Returns the matrix \eqn{X} which maps \eqn{\beta} to observed masses \eqn{y}. No changes need to be made to \code{X} when using it with \code{\link{BMB}}.
#' @importFrom pracma rref
#' @importFrom utils read.csv
#' @export
#' @examples
#'
#' ## For a single node process where
#' ## y_1 = y_2 + y_3
#'
#' C <- matrix(c(1,-1,-1), nrow = 1, ncol = 3)
#' constrainProcess(C = C)
#'
#' ## For a 2 node process with 1 input and 3 outputs
#' ## as shown in vignette("Two_Node_Process", package = "BayesMassBal")
#'
#' C <- matrix(c(1,-1,0,-1,0,0,1,-1,0,-1), byrow = TRUE, ncol = 5, nrow = 2)
#' constrainProcess(C = C)
#'
#' ## Obtaining the constraints from a .csv file
#'
#' C <- constrainProcess(file =
#'   system.file("extdata", "twonode_constraints.csv", package = "BayesMassBal"))
#'
constrainProcess <- function(C = NULL, file = FALSE){
  if(is.character(file)){
    C <- as.matrix(read.csv(file))
  }
  if(!all(C %in% c(-1, 0, 1))){
    warning("Constraint matrix must contain only the values -1, 0, or 1, indicating negative, null, or positive mass flows.")
  }
  ## Reduce C to row-echelon form so dependent flows can be read off
  if(nrow(C) == 1){
    Cc <- C
  }else{
    Cc <- rref(C)
  }
  indep.loc <- which(colSums(Cc) < 0)   # free (independent) mass flows
  dep.loc <- which(colSums(Cc) > 0)     # flows determined by the constraints
  if((length(dep.loc) + length(indep.loc)) != ncol(C)){
    warning("Constraint matrix is not valid. See package documentation.", immediate. = TRUE)
  }
  X <- matrix(NA, ncol = length(indep.loc), nrow = ncol(C))
  indep.mat <- diag(length(indep.loc))
  X[indep.loc, ] <- indep.mat
  X[dep.loc, ] <- -Cc[, indep.loc]
  return(X)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/constrainProcess.R
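Since X maps the free masses beta to all flows via y = X beta, its columns must lie in the null space of the constraint matrix, i.e. C X = 0. A quick hand-built check for the single-node example from the documentation above (no packages needed):

## Single node y1 = y2 + y3: the free masses are (y2, y3), and X stacks the
## dependent flow over an identity block, exactly as constrainProcess() does.
C <- matrix(c(1, -1, -1), nrow = 1, ncol = 3)
X <- rbind(c(1, 1),  # y1 = beta1 + beta2
           diag(2))  # y2 = beta1, y3 = beta2
C %*% X              # 1 x 2 matrix of zeros, so y = X beta always balances
beta <- c(2.5, 7.5)
X %*% beta           # flows (10, 2.5, 7.5): feed equals the sum of products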
#' Import Observed Mass Flow Rates
#'
#' Imports observed mass flow rates stored in a \code{*.csv} file and then organizes the data for use with the \code{\link{BMB}} function.
#' @param file Character string containing the name of the \code{*.csv} file from which data will be read. See Details below for valid file structures.
#' @param header Logical indicating if the first row of \code{file} contains header information. The current implementation of \code{importObservations} discards this information.
#' @param csv.params List of arguments to be passed to \code{\link{read.csv}}.
#'
#' @details The purpose of this function is to make it easy to import and structure loosely organized data contained in a \code{*.csv} file into a list for use as the \code{y} argument passed to the \code{\link{BMB}} function.
#' The entries in \code{file} must be organized as such:
#'
#' \itemize{
#' \item The first column of \code{file} must contain an integer sampling location. The value of this integer must correspond to the column number used to specify linear constraints in \code{\link{constrainProcess}}. For example, data for a given component collected at sampling location \eqn{y_2} should be indicated with a \code{2} in the first column of \code{file} used with \code{importObservations}. In the \code{file} used with \code{\link{constrainProcess}}, the linear constraint(s) on \eqn{y_2} are indicated in the second column.
#' \item The second column of \code{file} must contain sample component names. \strong{This field is case sensitive}. Ensure a given sample component is named consistently, including capitalization and spacing.
#' \item Columns 3 to \eqn{K+2} of \code{file} must contain observed mass flow rates for the \eqn{K} collected sample sets. All observations located in the same column should be collected at the same time.
#' \item Sample components of interest must be specified for each location. If a sample component is not detected at some locations, but is detected at others, this component should be included in \code{file} with a specified mass flow rate of 0, or a very small number.
#' }
#'
#' \code{importObservations} reads the contents of \code{file}, sorts the sampling locations numerically, then creates a list of data frames. Each data frame contains the data for a single sample component.
#'
#' @return Returns a list of data frames. Each data frame is named according to the unique sample components specified in the second column of \code{file}. This list object is intended to be used as the argument \code{y} for the \code{\link{BMB}} function.
#'
#' @importFrom utils read.csv write.csv
#' @importFrom stats rbeta
#'
#' @export
#'
#' @examples
#'
#' y <- importObservations(file = system.file("extdata", "twonode_example.csv",
#'                                            package = "BayesMassBal"),
#'                         header = TRUE, csv.params = list(sep = ";"))
#'
#' ## The linear constraints for this example data set are:
#' \donttest{C <- matrix(c(1,-1,0,-1,0,0,1,-1,0,-1), byrow = TRUE, ncol = 5, nrow = 2)}
#'
#' ## The X matrix for this data set can be found using:
#' \donttest{X <- constrainProcess(C = C)}
importObservations <- function(file, header = TRUE, csv.params = NULL){
  dat <- do.call(read.csv, c(list(file = file, header = header,
                                  stringsAsFactors = FALSE), csv.params))
  ## Validate before coercing; coercing first makes the check below vacuous
  if(!is.character(dat[, 2])){
    warning(paste("Second column of ", file,
                  " must name the sample component in each row.", sep = ""))
  }
  dat[, 2] <- as.character(dat[, 2])
  if(!is.integer(dat[, 1])){
    warning(paste("First column of ", file,
                  " must be an integer specifying the sampling location of each row.", sep = ""))
  }
  u.components <- unique(dat[, 2])
  u.locations <- unique(dat[, 1])
  y <- list()
  for(i in 1:length(u.components)){
    dat.temp <- dat[(dat[, 2] == u.components[i]), ]
    s <- sort(dat.temp[, 1], index.return = TRUE)$ix
    dat.temp <- dat.temp[s, ]
    y[[u.components[i]]] <- dat.temp[, -(1:2)]
    y[[u.components[i]]] <- as.matrix(y[[u.components[i]]])
  }
  nrow.check <- lapply(y, nrow)
  nrow.check <- length(unique(nrow.check))
  if(nrow.check != 1){
    warning(paste("Number of sampling locations differ between sample components. Check column two of ",
                  file, " for spelling errors. See documentation for details.", sep = ""))
  }
  return(y)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/importObservations.R
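The expected file layout, and the list importObservations() builds from it, can be mocked with a small data frame; the component names, column labels, and values below are purely illustrative:

## Hypothetical file contents: location, component, then one column per
## sample set. Splitting on the component column mirrors importObservations().
dat <- data.frame(location  = c(1, 2, 3, 1, 2, 3),
                  component = rep(c("CuFeS2", "gangue"), each = 3),
                  s1 = c(10, 4, 6, 90, 30, 60),
                  s2 = c(11, 5, 6, 88, 29, 59))
y <- lapply(split(dat, dat$component),
            function(d) as.matrix(d[order(d$location), -(1:2)]))
str(y)  # list of matrices: rows = sampling locations, columns = sample sets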
## Internal Gibbs sampler for the "indep" covariance structure:
## independent error variances for every location and component.
indep_sig <- function(X, y, priors, BTE = c(3000, 100000, 1), verb = 1,
                      eps = sqrt(.Machine$double.eps)){

  ## Single inverse-gamma draw from a (shape, rate) pair
  gamdraw <- function(x){
    rinvgamma(1, shape = x[1], scale = x[2])
  }

  ## Number of digits left of the decimal; used to scale default priors
  nDigits <- function(x){
    truncX <- floor(abs(x))
    if(truncX != 0){
      floor(log10(truncX)) + 1
    } else {
      1
    }
  }

  ## Number of tests
  K <- ncol(y[[1]])
  ## Sample locations
  N <- nrow(y[[1]])
  ## Components
  M <- length(y)

  burn <- BTE[1]
  iters <- BTE[2]
  thin <- BTE[3]

  no.betas <- ncol(X)
  x.unit <- X
  X.unit <- as.matrix(bdiag(replicate(M, x.unit, simplify = FALSE)))
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))

  X.M <- list()
  Sig <- list()
  for(i in 1:M){
    X.M[[i]] <- X.unit[((i-1)*N + 1):(i*N), ]
    Sig[[i]] <- matrix(NA, nrow = N, ncol = iters)
  }

  ybar <- lapply(y, rowMeans)
  ## Stack observations to match the row ordering of X, as in component_sig
  ## (the original used cbind here, which does not align with X)
  Y <- do.call(rbind, y)
  Y <- as.vector(Y)

  beta <- replicate(M, matrix(NA, nrow = no.betas, ncol = iters), simplify = FALSE)
  names(beta) <- names(Sig) <- names(y)

  bhat <- as.vector(solve(t(X) %*% X) %*% t(X) %*% Y)
  phi.priors <- list()
  bhat[which(bhat < 0)] <- 0

  ## Check is.list() first: comparing a list against a string with `==`
  ## inside if() is fragile
  if(is.list(priors)){
    phi.priors <- priors$phi
    mu0 <- priors$beta$mu0
    V0i <- solve(priors$beta$V0)
    if(verb != 0){message("User Specified Priors Used\n")}
  }else if(identical(priors, "Jeffreys")){
    mu0 <- rep(0, length(bhat))
    V0i <- diag(length(bhat))*0
    a0 <- 0
    b0 <- 0
    for(i in 1:M){
      phi.priors[[i]] <- cbind.data.frame(a = rep(a0, times = N), b = rep(b0, times = N))
    }
    if(verb != 0){message("Jeffreys Priors Used\n")}
  }else{
    b0 <- 0.000001
    a0 <- 0.000001
    mu0 <- bhat
    dgts <- sapply(bhat, nDigits)
    V0i <- diag(1/(10^(dgts + 6)))
    for(i in 1:M){
      phi.priors[[i]] <- cbind.data.frame(a = rep(a0, times = N), b = rep(b0, times = N))
    }
    if(verb != 0){message("Default Priors Used\n")}
  }

  bhat[which(bhat == 0)] <- eps
  b.use <- bhat
  V0imu0 <- V0i %*% mu0
  alpha <- lapply(phi.priors, function(X){X[, 1] + K/2})
  rate <- rep(NA, N)
  beta.temp <- rep(NA, times = no.betas*M)

  if(verb != 0){
    message("Sampling from posterior distributions")
    pb <- txtProgressBar(min = 0, max = iters/100, initial = 0, style = 3)
    step <- 0
  }

  for(t in 2:iters){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    for(i in 1:M){
      ## Draw each variance | beta from its inverse-gamma full conditional
      xB <- as.vector(X.M[[i]] %*% b.use)
      for(j in 1:N){
        rate[j] <- 0.5*(t(xB[j] - y[[i]][j, ]) %*% (xB[j] - y[[i]][j, ])) + phi.priors[[i]][j, 2]
      }
      Sig[[i]][, t] <- s.temp <- apply(cbind(alpha[[i]], rate), 1, gamdraw)
      ## Draw beta_i | variances from its truncated-normal full conditional
      Wi <- diag(1/s.temp)
      xtWix <- t(x.unit) %*% Wi %*% x.unit * K
      precis <- xtWix + diag(diag(V0i)[((i-1)*no.betas + 1):(i*no.betas)])
      cov.use <- solve(precis)
      bhat <- cov.use %*% (V0imu0[((i-1)*no.betas + 1):(i*no.betas)] +
                             t(x.unit) %*% Wi %*% ybar[[i]] * K)
      beta.temp[((i-1)*no.betas + 1):(i*no.betas)] <- beta[[i]][, t] <-
        as.vector(rtmvnorm(n = 1, mean = bhat[, 1], sigma = cov.use,
                           lower = rep(0, length = no.betas)))
    }
    b.use <- beta.temp
  }
  if(verb != 0){close(pb)}

  ## Apply burn-in and thinning
  Sig <- lapply(Sig, function(X, b){X[, -(1:b)]}, b = burn)
  Sig <- lapply(Sig, function(X, t){X[, seq(from = 1, to = ncol(X), by = t)]}, t = thin)
  beta <- lapply(beta, function(X, b){X[, -(1:b)]}, b = burn)
  beta <- lapply(beta, function(X, t){X[, seq(from = 1, to = ncol(X), by = t)]}, t = thin)

  samps <- list(beta = beta, Sig = Sig,
                priors = list(beta = list(mu0 = mu0, V0 = diag((1/diag(V0i)))),
                              phi = phi.priors),
                cov.structure = "indep")
  return(samps)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/indep_sig.R
## Internal Gibbs sampler for the "location" covariance structure:
## one M x M covariance matrix per sampling location, linking components.
location_sig <- function(X, y, priors, BTE = c(3000, 100000, 1), verb = 1,
                         eps = sqrt(.Machine$double.eps)){

  ## Number of digits left of the decimal; used to scale default priors
  nDigits <- function(x){
    truncX <- floor(abs(x))
    if(truncX != 0){
      floor(log10(truncX)) + 1
    } else {
      1
    }
  }

  ## Tests
  K <- ncol(y[[1]])
  ## Sample locations
  N <- nrow(y[[1]])
  ## Components
  M <- length(y)

  burn <- BTE[1]
  iters <- BTE[2]
  thin <- BTE[3]

  x.unit <- X
  X.unit <- bdiag(replicate(M, x.unit, simplify = FALSE))
  X.unit <- as.matrix(X.unit)
  ## Reorder rows so observations are grouped by location rather than component
  X.shuffle <- matrix(1:(N*M), ncol = N, nrow = M, byrow = TRUE)
  X.shuffle <- as.vector(X.shuffle)
  X.unit <- X.unit[X.shuffle, ]
  X <- do.call("rbind", replicate(K, X.unit, simplify = FALSE))
  no.betas <- ncol(X)/M

  y.reorg <- list()
  cov.names <- list()
  for(i in 1:N){
    y.reorg[[i]] <- matrix(NA, ncol = K, nrow = M)
    cov.names[[i]] <- matrix(NA, ncol = M, nrow = M)
    for(j in 1:M){
      y.reorg[[i]][j, ] <- y[[j]][i, ]
      cov.names[[i]][j, ] <- paste(paste(names(y)[rep(j, times = M)], i, sep = ""),
                                   paste(names(y)[1:M], i, sep = ""), sep = ":")
    }
  }

  X.N <- list()
  Sig <- list()
  Sig.rows <- length(diag(M)[upper.tri(diag(M), diag = TRUE)])
  for(i in 1:N){
    X.N[[i]] <- X.unit[((i-1)*M + 1):(i*M), ]
    Sig[[i]] <- matrix(NA, nrow = Sig.rows, ncol = iters)
    Sig[[i]][, 1] <- diag(M)[upper.tri(diag(M), diag = TRUE)]
  }

  ybar <- lapply(y.reorg, rowMeans)
  ybar <- do.call(rbind, ybar)
  ybar <- as.vector(t(ybar))
  Y <- do.call(rbind, y.reorg)
  Y <- as.vector(Y)

  beta <- matrix(NA, nrow = no.betas*M, ncol = iters)
  bhat <- as.vector(solve(t(X) %*% X) %*% t(X) %*% Y)
  bhat[which(bhat < 0)] <- 0

  ## Check is.list() first: comparing a list against a string with `==`
  ## inside if() is fragile
  if(is.list(priors)){
    nu0 <- priors$Sig$nu0
    mu0 <- priors$beta$mu0
    V0i <- solve(priors$beta$V0)
    S.prior <- priors$Sig$S
    if(verb != 0){message("User Specified Priors Used\n")}
  }else if(identical(priors, "Jeffreys")){
    mu0 <- rep(0, length(bhat))
    V0i <- diag(length(bhat))*0
    nu0 <- 0
    S.prior <- replicate(N, matrix(0, nrow = M, ncol = M), simplify = FALSE)
    if(verb != 0){message("Jeffreys Priors Used\n")}
  }else{
    S.prior <- lapply(y.reorg, function(X){diag((apply(X, 1, sd))^2) + diag(nrow(X))*eps})
    nu0 <- M
    mu0 <- bhat
    dgts <- sapply(bhat, nDigits)
    V0i <- diag(1/(10^(dgts + 6)))
    if(verb != 0){message("Default Priors Used\n")}
  }

  for(i in 1:N){
    S.prior[[i]][S.prior[[i]] == eps] <- 1/nu0
  }
  bhat[which(bhat == 0)] <- eps
  beta[, 1] <- bhat
  V0imu0 <- V0i %*% mu0
  Wi.temp <- list()

  if(verb != 0){
    message("Sampling from posterior distributions")
    pb <- txtProgressBar(min = 0, max = iters/100, initial = 0, style = 3)
    step <- 0
  }

  for(t in 2:iters){
    if(verb != 0 & (t/100) %% 1 == 0){
      step <- step + 1
      setTxtProgressBar(pb, value = step)
    }
    b.use <- as.vector(beta[, t-1])
    ## Draw each Sigma_i | beta from its inverse-Wishart full conditional
    for(i in 1:N){
      xB <- as.vector(X.N[[i]] %*% b.use)
      psi <- matrix(0, ncol = M, nrow = M)
      for(k in 1:K){
        psi <- (y.reorg[[i]][, k] - xB) %*% t(y.reorg[[i]][, k] - xB) + psi
      }
      psi <- psi + S.prior[[i]]*(nu0)
      W <- rinvwishart(K + nu0, psi)
      Sig[[i]][, t] <- W[upper.tri(W, diag = TRUE)]
      Wi.temp[[i]] <- solve(W)
    }
    ## Draw the full beta vector | Sigma from its truncated-normal conditional
    Oi <- as.matrix(bdiag(Wi.temp))
    XtOiX <- t(X.unit) %*% Oi %*% X.unit * K
    precis <- XtOiX + V0i
    cov.use <- solve(precis)
    bhat <- cov.use %*% (V0imu0 + t(X.unit) %*% Oi %*% ybar * K)
    beta[, t] <- as.vector(rtmvnorm(n = 1, mean = bhat[, 1], sigma = cov.use,
                                    lower = rep(0, length = no.betas*M)))
  }
  if(verb != 0){close(pb)}

  ## Apply burn-in and thinning
  Sig <- lapply(Sig, function(X, b){X[, -(1:b)]}, b = burn)
  Sig <- lapply(Sig, function(X, t){X[, seq(from = 1, to = ncol(X), by = t)]}, t = thin)
  beta <- beta[, -(1:burn)]
  beta <- beta[, seq(from = 1, to = ncol(beta), by = thin)]

  beta.return <- list()
  for(i in 1:M){
    beta.return[[i]] <- beta[((i-1)*no.betas + 1):(i*no.betas), ]
  }
  names(beta.return) <- names(y)
  return(list(beta = beta.return, Sig = Sig,
              priors = list(beta = list(mu0 = mu0, V0 = diag(1/diag(V0i))),
                            Sig = list(nu0 = nu0, S = S.prior)),
              cov.structure = "location", y.cov = cov.names))
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/location_sig.R
#' Main Effects
#'
#' Calculates the main effect of a variable, which is independent of process performance, on a function.
#'
#' @param BMBobj A \code{BayesMassBal} object originally obtained from the \code{\link{BMB}} function.
#' @param fn A character string naming a function that takes independent random variables \code{X} and \code{BMBobj$ybal} as arguments. See Details and the examples for function requirements.
#' @param rangex A numeric matrix. Each column of \code{rangex} contains the minimum and maximum value of the uniformly distributed random variable making up the corresponding element of vector \eqn{x}.
#' @param xj Integer indexing which element in \eqn{x} is used for conditioning for \eqn{E_x\lbrack f(x,y)|x_j\rbrack}. If a vector is supplied, the marginal main effect of each element is calculated sequentially. The integers supplied in \code{xj} are equivalent to the indices of the columns in \code{rangex}.
#' @param N Integer specifying the length of the sequence used for \code{xj}. Larger \code{N} trades a higher resolution of the main effect of \code{xj} for longer computation time and larger RAM requirements.
#' @param res Integer indicating the number of points to be used for each Monte-Carlo integration step. Larger \code{res} reduces Monte-Carlo variance at the expense of computation time.
#' @param hdi.params Numeric vector of length two, used to calculate the Highest Posterior Density Interval (HPDI) of the main effect of \code{xj} using \code{\link[HDInterval]{hdi}}. \code{hdi.params[1] = 1} indicates \code{\link[HDInterval]{hdi}} is used, and the mean and HPDI bounds are returned instead of every sample from the distribution of \eqn{E_x\lbrack f(x,y)|x_j\rbrack}. The second element of \code{hdi.params} is passed to the \code{credMass} argument of the \code{\link[HDInterval]{hdi}} function. The default, \code{hdi.params = c(1,0.95)}, returns 95\% HPDI bounds.
#' @param ... Extra arguments passed to the named \code{fn}
#'
#' @details
#'
#' The \code{mainEff} function returns a distribution of \eqn{E_x\lbrack f(x,y)|x_j\rbrack}, marginalized over the samples of \code{BMBobj$ybal}, so that the result incorporates the uncertainty of a chemical or particulate process.
#'
#' In the current implementation of \code{mainEff} in the \code{BayesMassBal} package, only uniformly distributed values of \eqn{x} are supported.
#'
#' \eqn{f(x,y)} is the function named by the \code{fn} argument. For the arguments of \code{fn}, \code{ybal} is structured in a similar manner as \code{BMBobj$ybal}, the only difference being that individual columns of each matrix are used at a time, and are vectorized. Note the way \code{ybal} is subset in the example function \code{fn_example}. The supplied \code{X} is a matrix, with columns corresponding to each element in \eqn{x}. The output of \code{fn} must be a vector of length \code{nrow(X)}. The first argument of \code{fn} must be \code{X}, the second argument must be \code{BMBobj$ybal}. The order of other arguments passed to \code{fn} through \code{...} does not matter. Look at the example closely for details!
#'
#' @return A list of \code{length(xj)} list(s). Each list specifies output for the main effect of a \code{xj}
#' @return \item{\code{g}}{The grid used for a particular \code{xj}}
#' @return \item{\code{fn.out}}{A matrix giving results on \eqn{E_x\lbrack f(x,y)|x_j\rbrack}. If \code{hdi.params[1] = 1}, the mean and Highest Posterior Density Interval (HPDI) bounds of \eqn{E_x\lbrack f(x,y)|x_j\rbrack} are returned. Otherwise, samples of \eqn{E_x\lbrack f(x,y)|x_j\rbrack} are returned. The index of each column of \code{fn.out} corresponds to the value of \code{g} at the same index.}
#' @return \item{\code{fn}}{Character string giving the name of the function used. Same value as argument \code{fn}.}
#' @return \item{\code{xj}}{Integer indicating the index of \eqn{x} corresponding to a grouped \code{fn.out} and \code{g}.}
#'
#' @importFrom HDInterval hdi
#'
#' @export
#'
#' @examples
#'
#' ## Importing data, generating BMB object
#' y <- importObservations(file = system.file("extdata", "twonode_example.csv",
#'                                            package = "BayesMassBal"),
#'                         header = TRUE, csv.params = list(sep = ";"))
#'
#' C <- matrix(c(1,-1,0,-1,0,0,1,-1,0,-1), byrow = TRUE, ncol = 5, nrow = 2)
#' X <- constrainProcess(C = C)
#'
#' BMB_example <- BMB(X = X, y = y, cov.structure = "indep",
#'                    BTE = c(10,200,1), lml = FALSE, verb = 0)
#'
#' fn_example <- function(X,ybal){
#'   cu.frac <- 63.546/183.5
#'   feed.mass <- ybal$CuFeS2[1] + ybal$gangue[1]
#'   ## Concentrate mass per ton feed
#'   con.mass <- (ybal$CuFeS2[3] + ybal$gangue[3])/feed.mass
#'   ## Copper mass per ton feed
#'   cu.mass <- (ybal$CuFeS2[3]*cu.frac)/feed.mass
#'   gam <- c(-1,-1/feed.mass,cu.mass,-con.mass,-cu.mass,-con.mass)
#'   f <- X %*% gam
#'   return(f)
#' }
#'
#' rangex <- matrix(c(4.00,6.25,1125,1875,3880,9080,20,60,96,208,20.0,62.5),
#'                  ncol = 6, nrow = 2)
#'
#' mE_example <- mainEff(BMB_example, fn = "fn_example", rangex = rangex, xj = 3, N = 15, res = 4)
#'
mainEff <- function(BMBobj, fn, rangex, xj, N = 50, res = 100, hdi.params = c(1, 0.95), ...){
  fn.name <- fn
  fn <- match.fun(fn)
  out <- list()
  if(is.null(BMBobj$ybal)){
    BMBobj$ybal <- lapply(BMBobj$beta, function(X, x){
      X <- x %*% X
      row.names(X) <- paste(rep("y", times = nrow(X)), 1:nrow(X), sep = "_")
      return(X)
    }, x = BMBobj$X)
  }
  ## tgp supplies the Latin hypercube sampler; stop (rather than warn and
  ## fail later) when it is unavailable
  if(requireNamespace("tgp", quietly = TRUE)){
    LHS <- tgp::lhs
  }else{
    stop("The tgp package is required")
  }
  Ts <- ncol(BMBobj$beta[[1]])
  for(j in 1:length(xj)){
    g <- seq(from = rangex[1, xj[j]], to = rangex[2, xj[j]], length.out = N)
    exp.y <- matrix(NA, ncol = N, nrow = Ts)
    r.x <- t(rangex[, -xj[j]])
    s.v <- c((1:ncol(rangex))[-xj[j]], xj[j])
    s <- sort(s.v, index.return = TRUE)$ix
    for(i in 1:N){
      g.i <- rep(g[i], times = res)
      exp.int <- rep(NA, times = Ts)
      for(k in 1:Ts){
        ## Monte-Carlo integration over the remaining x at fixed x_j = g[i]
        U <- LHS(res, r.x)
        U <- cbind(U, g.i)[, s]
        ybal <- lapply(BMBobj$ybal, function(X, t){X[, t]}, t = k)
        y <- fn(U, ybal, ...)
        exp.int[k] <- mean(y)
      }
      exp.y[, i] <- exp.int
    }
    if(hdi.params[1] == 1){
      bounds <- t(apply(exp.y, 2, hdi, credMass = hdi.params[2]))
      m <- apply(exp.y, 2, mean)
      exp.y <- rbind(bounds[, 1], m, bounds[, 2])
    }
    ## One sub-list per element of xj, matching the documented return value
    out[[j]] <- list(g = g, fn.out = exp.y, fn = fn.name, xj = xj[j])
  }
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/mainEff.R
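Stripped to its core, mainEff() performs a nested Monte-Carlo integration: fix x_j at each grid value, average f over random draws of the remaining inputs, and repeat for every posterior draw of ybal. The toy below drops the posterior dimension and uses runif() in place of tgp::lhs; the function f and all bounds are hypothetical, chosen so the exact answer is known:

## Main effect of x1 for f(x) = x1^2 + x2, with x2 ~ Uniform(0, 2):
## E[f | x1] = x1^2 + 1, so the estimate should track g^2 + 1.
f <- function(X) X[, 1]^2 + X[, 2]
g <- seq(0, 1, length.out = 11)       # grid for the conditioning variable x1
res <- 5000                           # Monte-Carlo draws per grid point
mainEff.x1 <- sapply(g, function(x1){
  X <- cbind(rep(x1, res), runif(res, 0, 2))  # integrate x2 out numerically
  mean(f(X))
})
cbind(g, estimate = mainEff.x1, exact = g^2 + 1)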
mcmcdiagnostics <- function(samps, diagnostics){
  if(!is.list(diagnostics) && !is.pairlist(diagnostics)){
    diagnostics <- NULL
  }
  SigDim <- c(length(samps$Sig), nrow(samps$Sig[[1]]))
  betaDim <- c(length(samps$beta), nrow(samps$beta[[1]]))
  Sigtemp <- data.frame(matrix(c(1:SigDim[2], rep(NA, times = 2*SigDim[2])),
                               ncol = 3, nrow = SigDim[2]))
  betatemp <- data.frame(matrix(c(1:betaDim[2], rep(NA, times = 2*betaDim[2])),
                                ncol = 3, nrow = betaDim[2]))
  names(Sigtemp) <- names(betatemp) <- c("index", "cd", "ess")
  betacd <- lapply(samps$beta, function(X, params = diagnostics){
    apply(X, 1, function(X){do.call("geweke.diag", c(list(x = X), params))$z})
  })
  Sigcd <- lapply(samps$Sig, function(X, params = diagnostics){
    apply(X, 1, function(X){do.call("geweke.diag", c(list(x = X), params))$z})
  })
  betaess <- lapply(samps$beta, function(X){apply(X, 1, effectiveSize)})
  Sigess <- lapply(samps$Sig, function(X){apply(X, 1, effectiveSize)})
  Sigreport <- list()
  betareport <- list()
  for(i in 1:betaDim[1]){
    name <- names(betaess)[i]
    betareport[[name]] <- betatemp
    betareport[[name]]$cd <- betacd[[i]]
    betareport[[name]]$ess <- betaess[[i]]
  }
  for(i in 1:SigDim[1]){
    Sigreport[[i]] <- Sigtemp
    Sigreport[[i]]$cd <- Sigcd[[i]]
    Sigreport[[i]]$ess <- Sigess[[i]]
  }
  diagnostics <- list(beta = betareport, Sig = Sigreport)
  return(diagnostics)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/mcmcdiagnostics.R
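The two statistics reported per parameter come from the coda package: geweke.diag() compares means of the early and late parts of a chain (z-scores near 0 indicate agreement), and effectiveSize() discounts the nominal sample size for autocorrelation. A standalone illustration, assuming coda is installed:

library(coda)
set.seed(2)
iid <- as.mcmc(rnorm(2000))                                      # well-mixed chain
ar1 <- as.mcmc(as.numeric(arima.sim(list(ar = 0.9), n = 2000)))  # sticky chain
geweke.diag(iid)$z; effectiveSize(iid)   # z roughly N(0,1), ESS near 2000
geweke.diag(ar1)$z; effectiveSize(ar1)   # ESS far below 2000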
## Adds i.i.d. Gaussian noise, with standard deviation s, to each element of X;
## a small helper used when simulating data.
noise <- function(X, s){
  out <- rnorm(n = length(X), mean = X, sd = s)
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/noise_for_sim.R
#' Plots BayesMassBal Object
#'
#' Visualizes data from a \code{BayesMassBal} class object in a user specified way. Options include trace plots, posterior densities, and main effects plots. Meant to be a quick diagnostic tool, and not to produce publication quality plots.
#'
#' @param x A \code{BayesMassBal} object returned from the \code{\link{BMB}} function
#' @param sample.params List indicating which model parameter samples are used for creation of the plot(s). See Details for the required structure.
#' @param layout Character string indicating the desired data to be plotted. \code{"trace"} produces trace plots of sequential parameter draws. \code{"dens"} produces densities of posterior draws. Argument ignored when \code{x$type = "time-series"}.
#' @param hdi.params Numeric vector of length two, used to draw Highest Posterior Density Intervals (HPDI) using \code{\link[HDInterval]{hdi}}, and otherwise ignored. \code{hdi.params[1] = 1} indicates \code{\link[HDInterval]{hdi}} bounds should be drawn. The second element of \code{hdi.params} is passed to \code{credMass} in the \code{\link[HDInterval]{hdi}} function. The default, \code{hdi.params = c(1,0.95)}, plots the 95\% HPDI bounds.
#' @param ssEst.ylab Character string providing the label for the y-axis of a time series plot when the object \code{type == "time-series"}. Argument only useful with the output from the \code{\link{ssEst}} function.
#' @param ... Passes extra arguments to \code{plot()}
#'
#' @details
#'
#' The list of \code{sample.params} requires a specific structure dependent on the choice of \code{layout} and the desired plots.
#'
#' If \code{layout = "trace"} or \code{layout = "dens"}, \code{names(list)} must contain each model parameter desired for plotting. The structure under the model parameter names must be the same as the structure of the relevant subset of the \code{BayesMassBal} object to be used. For example, if a \code{BayesMassBal} object is created using a process with sample components \code{c("CuFeS2","gangue")} and the user wants plots of reconciled masses \eqn{y_1} and \eqn{y_2} for both components to be created, \code{params = list(y.bal = list(CuFeS2 = c(1,2), gangue = c(1,2)))} should be used. Note, \code{str(params)} mimics \code{str(x)}, while the vectors listed simply index the desired model parameters to be plotted.
#'
#' See \code{vignette("Two_Node_Process", package = "BayesMassBal")} for an example of the required structure.
#'
#' @return Plots \code{BayesMassBal} object based on arguments passed to \code{plot}.
#'
#' @importFrom HDInterval hdi
#' @importFrom graphics plot par abline plot.new legend text layout polygon
#' @importFrom stats density sd
#'
#' @export
plot.BayesMassBal <- function(x, sample.params = NA, layout = c("trace", "dens"),
                              hdi.params = c(1, 0.95), ssEst.ylab = "Mass", ...){
  opar <- par(no.readonly = TRUE)
  on.exit(par(opar))
  if(x$type == "time-series"){
    c <- c("#E87722", "#75787B")
    l.wid <- 1
    y <- x$y
    if(!is.null(x$samples$expectation)){
      mean.exp <- mean(x$samples$expectation)
      mean.alpha <- mean(x$samples$alpha)
      r.alpha <- range(x$samples$alpha)
      d.alpha <- density(x$samples$alpha, from = r.alpha[1], to = r.alpha[2])
      leg.lab <- c("Data", "Expected Steady State", NA)
      leg.col <- c("black", c[1], c[2])
      leg.lty <- c(NA, 1, 2)
      leg.pch <- c(19, NA, NA)
      leg.lab[3] <- paste(hdi.params[2]*100, "% Credible Int.", sep = "", collapse = "")
      if(hdi.params[1] == 1){
        hdi.exp <- hdi(x$samples$expectation, credMass = hdi.params[2])
        hdi.alpha <- hdi(x$samples$alpha, credMass = hdi.params[2])
        r.exp <- hdi.exp + 0.5*(hdi.exp - mean.exp)
      }else{
        hdi.alpha <- rep(NA, times = 2)
        ## Drop the credible-interval legend entry (assigning NULL to an
        ## atomic vector element, as the original did, is an error)
        leg.lab <- leg.lab[1:2]
        leg.col <- leg.col[1:2]
        leg.lty <- leg.lty[1:2]
        leg.pch <- leg.pch[1:2]
        hdi.exp <- hdi(x$samples$expectation)
        r.exp <- hdi.exp + 0.5*(hdi.exp - mean.exp)
        hdi.exp <- rep(NA, times = 2)
      }
      d.exp <- density(x$samples$expectation, from = r.exp[1], to = r.exp[2])
      layout(matrix(c(1,1,2,2,3,3,3,3,4,4,4,4), ncol = 4, nrow = 3, byrow = TRUE),
             heights = c(5, 8, 2))
      par(mar = c(2, 1, 2, 1))
      plot(d.alpha, main = expression(alpha), xlab = "", ylab = "", yaxt = "n",
           lwd = l.wid, ...)
      abline(v = c(-1, 1), col = "red", lty = 2, lwd = l.wid)
      par(mar = c(2, 1, 2, 1))
      plot(d.exp, xlab = "", ylab = "", xlim = r.exp,
           main = expression(paste("E[y|", mu, ",", alpha, "]")),
           lwd = l.wid, yaxt = "n", ...)
      abline(v = mean.exp, col = c[1], lwd = l.wid)
      abline(v = hdi.exp, col = c[2], lty = 2, lwd = l.wid)
      par(mar = c(4, 4, 1, 1))
      plot(0:(length(y)-1), y, type = "p", pch = 19, ylab = ssEst.ylab,
           xlab = "Time Steps", main = "", ylim = r.exp, ...)
      abline(h = mean.exp, col = c[1], lwd = l.wid)
      abline(h = hdi.exp, col = c[2], lty = 2, lwd = l.wid)
      par(mar = c(1, 1, 1, 1), xpd = TRUE)
      plot(1, type = "n", axes = FALSE, xlab = "", ylab = "", cex = 1, ...)
      legend("bottom", horiz = TRUE, legend = leg.lab, col = leg.col,
             pch = leg.pch, lty = leg.lty, lwd = l.wid)
    }else{
      mean.alpha <- mean(x$samples$alpha)
      r.alpha <- range(x$samples$alpha)
      d.alpha <- density(x$samples$alpha, from = r.alpha[1], to = r.alpha[2])
      layout(matrix(c(1,1,2,2,3,3,3,3), ncol = 4, nrow = 2, byrow = TRUE),
             heights = c(5, 8))
      par(mar = c(2, 4, 3, 1), cex = 1)
      plot(d.alpha, main = expression(alpha), xlab = "", ylab = "", yaxt = "n", lwd = l.wid)
      abline(v = c(-1, 1), col = "red", lty = 2, lwd = l.wid)
      ## Shade the non-stationary region |alpha| >= 1
      if(sum(x$samples$alpha >= 1) > 0){
        x1 <- min(which(d.alpha$x >= 1))
        x2 <- max(which(d.alpha$x <= r.alpha[2]))
        with(d.alpha, polygon(x = c(x[c(x1, x1:x2, x2)]), y = c(0, y[x1:x2], 0),
                              density = 50, col = "red"))
      }
      if(sum(x$samples$alpha <= -1) > 0){
        x1 <- max(which(d.alpha$x <= -1))
        x2 <- min(which(d.alpha$x >= r.alpha[1]))
        with(d.alpha, polygon(x = c(x[c(x2, x2:x1, x1)]), y = c(0, y[x2:x1], 0),
                              density = 50, col = "red"))
      }
      par(mar = rep(0, times = 4))
      plot(c(0, 1), c(0, 1), ann = FALSE, bty = "n", type = "n", xaxt = "n", yaxt = "n")
      samps.out <- signif(mean(x$samples$alpha < 1 & x$samples$alpha > -1)*100, digits = 4)
      samps.out <- paste(samps.out, "%", sep = "")
      text(x = 0.5, y = c(0.6, 0.3),
           c(bquote(.(samps.out) ~ "of the samples of" ~ alpha),
             expression("are between (-1,1)")))
      par(mar = c(5, 4, 2, 2), cex = 1)
      plot(0:(length(y)-1), y, type = "p", pch = 19, ylab = ssEst.ylab,
           xlab = "Time Steps", main = "")
    }
  }else if(x$type == "BMB"){
    ## Plots from the BMB function
    layout <- match.arg(layout)
    sample.names <- names(sample.params)
    samples <- list()
    plot.names <- character()
    for(i in 1:length(sample.names)){
      sample.subset.names <- names(sample.params[[sample.names[i]]])
      sample.subset <- list()
      for(j in 1:length(sample.subset.names)){
        object.use <- sample.params[[sample.names[i]]][[sample.subset.names[j]]]
        plot.names <- c(plot.names, paste(sample.names[i], object.use,
                                          sample.subset.names[j], sep = "_"))
        sample.subset[[j]] <- x[[sample.names[i]]][[sample.subset.names[j]]][object.use, ]
      }
      samples[[sample.names[i]]] <- do.call(rbind, sample.subset)
    }
    samples <- do.call(rbind, samples)
    nplots <- nrow(samples)
    nrow.layout <- floor(sqrt(nplots))
    ncol.layout <- ceiling(nplots/nrow.layout)
    plot.spaces <- nrow.layout * ncol.layout
    if(hdi.params[1] == 1){
      hdpi <- apply(samples, 1, function(X, pct = hdi.params[2]){hdi(X, pct)})
    }else{
      ## 2 x nplots, matching the orientation of the apply() result above
      hdpi <- matrix(NA, nrow = 2, ncol = nplots)
    }
    if(layout == "trace"){
      layout(mat = matrix(1:plot.spaces, nrow = nrow.layout, ncol = ncol.layout, byrow = TRUE))
      par(mar = c(4, 4, 2, 1))
      for(i in 1:nplots){
        plot(samples[i, ], type = "l", ylab = plot.names[i], ...)
        abline(h = hdpi[, i], col = "red")
        abline(h = mean(samples[i, ]), col = "darkgreen")
      }
      if((plot.spaces - nplots) > 0){
        for(i in 1:(plot.spaces - nplots)){
          plot.new()
        }
      }
    }else if(layout == "dens"){
      d.use <- apply(samples, 1, function(X){
        density(X, from = mean(X) - 3.5*sd(X), to = mean(X) + 3.5*sd(X))
      })
      layout(mat = matrix(1:plot.spaces, nrow = nrow.layout, ncol = ncol.layout, byrow = TRUE))
      par(mar = c(4, 4, 2, 1))
      for(i in 1:nplots){
        plot(d.use[[i]], xlab = plot.names[i], ylab = "", main = "", ...)
        abline(v = hdpi[, i], col = "red")
        ## The original nested col inside mean(); the color belongs to abline()
        abline(v = mean(samples[i, ]), col = "darkgreen")
      }
      if((plot.spaces - nplots) > 0){
        for(i in 1:(plot.spaces - nplots)){
          plot.new()
        }
      }
    }
  }
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/plot_BayesMassBal.R
#' Point Estimate Mass Balance
#'
#' Conducts the two component, two node, point estimate mass balance of \insertCite{willsbook}{BayesMassBal}. This function is provided for the purpose of comparing the performance of a Bayesian mass balance to a point estimate mass balance using any output of \code{\link{twonodeSim}}.
#' @param y A list of matrices of observed mass flow rates. Each matrix is a separate sample component. The rows of each matrix index the sampling location, and the columns index the sample set number. Must be formatted exactly the same as the output of \code{twonodeSim()$simulation}. Any arguments to \code{\link{twonodeSim}} can be used.
#' @return Returns a list of vectors with mass flow rates \code{yhat} and grades \code{ahat}. Similar format to argument \code{y}. The index of a vector in the output is equivalent to the index of a row in \code{y}.
#'
#' @examples
#' y <- twonodeSim()$simulation
#'
#' yhat <- pointmassbal(y)$yhat
#'
#' @importFrom Rdpack reprompt
#' @importFrom stats sd
#' @export
#'
#' @references
#' \insertRef{willsbook}{BayesMassBal}
#'
pointmassbal <- function(y){
  obs.cu <- t(y[["CuFeS2"]])
  obs.gangue <- t(y[["gangue"]])
  obs.total <- obs.cu + obs.gangue
  grade <- obs.cu/obs.total
  abar <- apply(grade, 2, mean)
  var.a <- apply(grade, 2, sd)^2
  Psi <- diag(var.a)
  c.init <- rep(0.5, times = 2)
  c.hat <- rep(NA, times = 2)
  ## Variance-weighted two-product yield estimate for each node; the node-2
  ## variance uses c.init[2] (the original used c.init[1], apparently a typo)
  V.rk1 <- var.a[1] + var.a[2]*c.init[1]^2 + (1 - c.init[1])^2*var.a[4]
  V.rk2 <- var.a[2] + var.a[3]*c.init[2]^2 + (1 - c.init[2])^2*var.a[5]
  c.hat[1] <- (abar[1] - abar[4])*(abar[2] - abar[4])/V.rk1
  c.hat[1] <- c.hat[1]/((abar[2] - abar[4])^2/V.rk1)
  c.hat[2] <- (abar[2] - abar[5])*(abar[3] - abar[5])/V.rk2
  c.hat[2] <- c.hat[2]/((abar[3] - abar[5])^2/V.rk2)
  tol <- 1e-6
  ## Iterate until both yields converge (the original condition
  ## all(c.init - c.hat > tol) can stop early or never iterate at all)
  while(any(abs(c.init - c.hat) > tol)){
    c.init <- c.hat
    V.rk1 <- var.a[1] + var.a[2]*c.init[1]^2 + (1 - c.init[1])^2*var.a[4]
    V.rk2 <- var.a[2] + var.a[3]*c.init[2]^2 + (1 - c.init[2])^2*var.a[5]
    c.hat[1] <- (abar[1] - abar[4])*(abar[2] - abar[4])/V.rk1
    c.hat[1] <- c.hat[1]/((abar[2] - abar[4])^2/V.rk1)
    c.hat[2] <- (abar[2] - abar[5])*(abar[3] - abar[5])/V.rk2
    c.hat[2] <- c.hat[2]/((abar[3] - abar[5])^2/V.rk2)
  }
  C <- matrix(c(1, -c.hat[1], 0, -(1 - c.hat[1]), 0,
                0, 1, -c.hat[2], 0, -(1 - c.hat[2])),
              byrow = TRUE, nrow = 2, ncol = 5)
  yield <- c.hat
  mean.feed <- mean(obs.total[, 1])
  ## Adjust the mean grades so they exactly satisfy the node constraints
  ahat <- as.vector(abar - Psi %*% t(C) %*% solve(C %*% Psi %*% t(C)) %*% C %*% abar)
  flows <- rep(mean.feed, times = 5)
  flows[2] <- flows[2]*c.hat[1]
  flows[4] <- flows[1] - flows[2]
  flows[3] <- flows[2]*c.hat[2]
  flows[5] <- flows[2] - flows[3]
  cu.flow <- ahat*flows
  gangue.flow <- (1 - ahat)*flows
  ahat <- list(CuFeS2 = ahat*100, gangue = (1 - ahat)*100)
  yhat <- list(CuFeS2 = cu.flow, gangue = gangue.flow)
  out <- list(yhat = yhat, ahat = ahat)
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/pointmassbal.R
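With noise-free assays, the variance-weighted yield estimate above collapses to the classical two-product formula, yield = (feed grade - tail grade)/(concentrate grade - tail grade); the weighting only matters when assays disagree across sample sets. A worked check with illustrative grades:

## Two-product formula: the fraction of feed mass reporting to concentrate.
a.feed <- 0.05; a.conc <- 0.25; a.tail <- 0.01       # assay mass fractions
yield <- (a.feed - a.tail)/(a.conc - a.tail)         # = 1/6
feed <- 120                                          # illustrative feed rate
c(conc = feed*yield, tail = feed*(1 - yield))        # 20 + 100 = 120: balances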
## Internal: rescales observations and design matrices so masses are O(1),
## returning diagonal matrices A that map results back to the original scale.
scale <- function(y, X, cov.structure){
  Xold <- X
  K <- ncol(y[[1]])
  N <- nrow(y[[1]])
  M <- length(y)
  if(cov.structure == "location"){
    X.unit <- bdiag(replicate(M, X, simplify = FALSE))
    X.unit <- as.matrix(X.unit)
    X.shuffle <- matrix(1:(N*M), ncol = N, nrow = M, byrow = TRUE)
    X.shuffle <- as.vector(X.shuffle)
    X.unit <- X.unit[X.shuffle, ]
    y.reorg <- list()
    X.N <- list()
    for(i in 1:N){
      X.N[[i]] <- X.unit[((i-1)*M + 1):(i*M), ]
      y.reorg[[i]] <- matrix(NA, ncol = K, nrow = M)
      for(j in 1:M){
        y.reorg[[i]][j, ] <- y[[j]][i, ]
      }
    }
    y.use <- y.reorg
  }else{
    y.use <- y
    X.N <- replicate(M, Xold, simplify = FALSE)
  }
  ymeans <- lapply(y.use, function(X){rowMeans(X)})
  ## Power-of-ten factors: shrink streams larger than ~1000 and grow
  ## streams smaller than ~0.001
  redfctr <- lapply(ymeans, function(X){
    ftemp <- signif(X/1000, digits = 1)
    ftemp[ftemp <= 1] <- 1
    return(ftemp)
  })
  incfctr <- lapply(ymeans, function(X){
    ftemp <- signif(X/0.001, digits = 1)
    ftemp[ftemp >= 1] <- 1
    ftemp[ftemp == 0] <- 1
    return(ftemp)
  })
  A <- list()
  ## Scale every element (the original `for(i in length(y.use))` only
  ## iterated over the last one)
  for(i in seq_along(y.use)){
    y.use[[i]] <- y.use[[i]]/redfctr[[i]]
    y.use[[i]] <- y.use[[i]]/incfctr[[i]]
    X.N[[i]] <- t(t(X.N[[i]])/redfctr[[i]])
    X.N[[i]] <- t(t(X.N[[i]])/incfctr[[i]])
    Atemp <- diag(nrow(X.N[[i]]))
    Atemp <- Atemp*redfctr[[i]]
    Atemp <- Atemp*incfctr[[i]]
    A[[i]] <- Atemp
  }
  return(list(yscale = y.use, Xscale = X.N, A = A))
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/scale.R
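The intent of scale() is a reversible change of units: each stream is divided by a power-of-ten factor so masses are O(1), and the factors are stored on the diagonal of A so that y = A %*% yscale recovers the original data. A round-trip sketch of that idea, mirroring but simplifying the factor construction above (the package version also maps zero factors to 1):

y <- matrix(c(25000, 25500,      # a large stream
              4e-4,  3.8e-4),    # a small stream
            nrow = 2, byrow = TRUE)
f.red <- pmax(signif(rowMeans(y)/1000,  digits = 1), 1)  # shrink large rows
f.inc <- pmin(signif(rowMeans(y)/0.001, digits = 1), 1)  # grow small rows
yscale <- y/(f.red*f.inc)        # row-wise rescaling toward O(1)
A <- diag(f.red*f.inc)
all.equal(A %*% yscale, y)       # TRUE: the scaling is exactly reversible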
#' Steady State Estimate
#'
#' Allows for the estimation of the process steady state of a single stream using flow rate data.
#'
#' @param y Vector of mass flow rate observations. Must be specified sequentially with \code{y[1]} as the initial observation.
#' @param BTE Numeric vector giving \code{c(Burn-in, Total-iterations, Every)} for the MCMC approximation of target distributions. \code{ssEst} produces a total of \eqn{(T - B)/E} samples, where \eqn{E} specifies that only one of every \eqn{E} draws is saved. \eqn{E > 1} reduces autocorrelation between obtained samples at the expense of computation time.
#' @param stationary Logical indicating if stationarity will be imposed when generating posterior draws. See Details.
#'
#' @return Returns a list of outputs
#' @return \item{\code{samples}}{List of vectors containing posterior draws of model parameters}
#' @return \item{\code{stationary}}{Logical indicating the setting of the \code{stationary} argument provided to the \code{ssEst} function}
#' @return \item{\code{y}}{Vector of observations initially passed to the \code{ssEst} function.}
#' @return \item{\code{type}}{Character string giving details of the model fit. Primarily included for use with \code{\link{plot.BayesMassBal}}}
#'
#' @details
#'
#' A model of the following form is fit to the data:
#'
#' \deqn{y_t = \mu + \alpha y_{t-1} + \epsilon}
#'
#' where \eqn{\epsilon \sim \mathcal{N}(0,\sigma^2)} and \eqn{t} indexes the time step.
#'
#' A time series is stationary, and therefore predictable, when \eqn{|\alpha| < 1}. Stationarity can be enforced by setting the argument \code{stationary = TRUE}. This setting uses the priors \eqn{p(\alpha) \sim \mathcal{N}(0, 1000)} truncated at (-1,1), and \eqn{p(\mu) \sim \mathcal{N}(\bar{y},} \code{var(y)*100}\eqn{)}, where \eqn{\bar{y}} is the sample mean of \code{y}, producing a posterior distribution for \eqn{\alpha} constrained to be within (-1,1).
#'
#' When fitting a model where stationarity is not enforced, the Jeffreys prior \eqn{p(\mu,\alpha) \propto 1} is used.
#'
#' The Jeffreys prior \eqn{p(\sigma^2) \propto 1/\sigma^2} is used for all inference on \eqn{\sigma^2}.
#'
#' A stationary time series has an expected value of:
#'
#' \deqn{\frac{\mu}{1-\alpha}}
#'
#' Samples of this expectation are included in the output if \code{stationary = TRUE}, or if none of the samples of \eqn{\alpha} lie outside of (-1,1).
#'
#' The output list is a \code{BayesMassBal} object; passing the output to \code{\link{plot.BayesMassBal}} allows for observation of the results.
#'
#' @examples
#'
#' ## Generating data
#' y <- rep(NA, times = 21)
#'
#' y[1] <- 0
#' mu <- 3
#' alpha <- 0.3
#' sig <- 2
#' for(i in 2:21){
#'   y[i] <- mu + alpha*y[i-1] + rnorm(1)*sig
#' }
#'
#' ## Generating draws of model parameters
#'
#' fit <- ssEst(y, BTE = c(100,500,1))
#'
#' @importFrom stats var
#' @importFrom tmvtnorm rtmvnorm
#' @importFrom LaplacesDemon rinvgamma
#' @export
ssEst <- function(y, BTE = c(100, 1000, 1), stationary = FALSE){
  burn <- BTE[1]
  total <- BTE[2]
  every <- BTE[3]
  collected <- ceiling((total - burn)/every)
  y <- drop(y)
  ## Lagged design: y_t regressed on an intercept (mu) and y_{t-1} (alpha)
  Y <- y[-1]
  X <- matrix(1, nrow = length(y) - 1, ncol = 2)
  X[, 2] <- y[-length(y)]
  sig <- rep(NA, times = collected)
  beta <- matrix(NA, nrow = 2, ncol = collected)
  sigsamp <- var(y)
  if(stationary == TRUE){
    B0 <- c(mean(y), 0)
    V0i <- diag(c(1/(sigsamp * 100), 1/(1000)))
    V0iB0 <- V0i %*% B0
    XTX <- t(X) %*% X
    V <- solve((1/sigsamp)*XTX + V0i)
    bhat <- as.vector(V %*% (V0iB0 + (1/sigsamp) * t(X) %*% Y))
    lb <- c(-Inf, -1)   # truncation keeps alpha inside (-1, 1)
    ub <- c(Inf, 1)
  }else if(stationary == FALSE){
    bhat <- as.vector(solve(t(X) %*% X) %*% t(X) %*% Y)
    XTXi <- solve(t(X) %*% X)
    V <- XTXi * sigsamp
    lb <- c(-Inf, -Inf)
    ub <- c(Inf, Inf)
  }
  bsamp <- bhat
  a <- length(Y)/2
  for(i in 1:total){
    bsamp <- as.vector(rtmvnorm(1, mean = bhat, sigma = V, lower = lb, upper = ub))
    ymXB <- Y - X %*% bsamp
    b <- 0.5 * t(ymXB) %*% ymXB
    sigsamp <- rinvgamma(1, shape = a, scale = b)
    if(i > burn & ((i - burn)/every) %% 1 == 0){
      save.sel <- (i - burn)/every
      beta[, save.sel] <- bsamp
      sig[save.sel] <- sigsamp
    }
    if(stationary == TRUE){
      V <- solve(XTX * (1/sigsamp) + V0i)
      bhat <- as.vector(V %*% (V0iB0 + (1/sigsamp) * t(X) %*% Y))
    }else{
      V <- XTXi * sigsamp
    }
  }
  if(stationary == TRUE){
    expectation <- beta[1, ]/(1 - beta[2, ])
    samples <- list(mu = beta[1, ], alpha = beta[2, ],
                    expectation = expectation, s2 = sig)
  }else if(sum(beta[2, ] <= -1) == 0 & sum(beta[2, ] >= 1) == 0){
    ## Only report the steady-state expectation when every draw is stationary
    expectation <- beta[1, ]/(1 - beta[2, ])
    samples <- list(mu = beta[1, ], alpha = beta[2, ],
                    expectation = expectation, s2 = sig)
  }else{
    samples <- list(mu = beta[1, ], alpha = beta[2, ], s2 = sig)
  }
  out <- list(samples = samples, stationary = stationary, y = y, type = "time-series")
  class(out) <- "BayesMassBal"
  return(out)
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/ssEst.R
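The expectation reported by ssEst() follows from taking expectations of y_t = mu + alpha*y_{t-1} + eps at stationarity: E[y] = mu + alpha*E[y], so E[y] = mu/(1 - alpha). A quick simulation check with illustrative parameter values:

## A long stationary AR(1) should settle near mu/(1 - alpha).
set.seed(3)
mu <- 3; alpha <- 0.3; sig <- 2
y <- numeric(5000); y[1] <- 0
for(t in 2:5000) y[t] <- mu + alpha*y[t-1] + rnorm(1, sd = sig)
c(simulated = mean(y[-(1:100)]), theoretical = mu/(1 - alpha))  # both ~4.29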
#' Summary of BayesMassBal Object
#'
#' Prints a summary table containing mean values and 95\% HPDI intervals of the mass flow rates, as well as the log-marginal likelihood, for a \code{BayesMassBal} class object.
#'
#' @param object A \code{BayesMassBal} object returned from the \code{\link{BMB}} function
#' @param export Optional character string specifying a location to save a \code{*.csv} file containing the summary data. Only data related to mass flow rates is exported.
#' @param ... Additional arguments affecting the summary produced. Not used for a \code{BayesMassBal} object.
#'
#' @details The current implementation only returns statistics for the balanced mass flow rates, taken from \code{object$ybal}, and not statistics on \eqn{\beta} or the variance parameters \eqn{\sigma^2} and \eqn{\Sigma}.
#'
#' The table header entry \code{95\% LB} should be interpreted as the lower bound of the 95\% HPDI. Similarly, \code{95\% UB} should be interpreted as the upper bound of the 95\% HPDI.
#'
#' @return A summary table printed to the console, and optionally a \code{*.csv} file saved at the specified path.
#'
#' @importFrom HDInterval hdi
#' @importFrom utils write.csv
#'
#' @export
#'
summary.BayesMassBal <- function(object, export = NA, ...){
  ybal <- object$ybal
  ans <- list()
  components <- names(ybal)
  components <- c(components, "Total")
  locations <- nrow(ybal[[1]])
  ybal_total <- Reduce("+", ybal)
  ybal[["Total"]] <- ybal_total
  ans$`Mass Flow Rates` <- list()
  template_df <- data.frame(matrix(NA, ncol = 4, nrow = locations))
  names(template_df) <- c("Sampling Location", "Expected Value", "95% LB", "95% UB")
  template_df[, 1] <- 1:locations
  cat("Mass Flow Rates:\n")
  for(i in 1:(length(components))){
    cat(paste("\n", components[i], ":\n", sep = ""))
    temp <- template_df
    temp[, 2] <- apply(ybal[[components[i]]], 1, mean)
    temp[, 3:4] <- unname(t(apply(ybal[[components[[i]]]], 1, hdi)))
    cat(paste(c(rep("-", times = 20), "\n"), sep = "", collapse = ""))
    print(temp, row.names = FALSE)
    ans[[1]][[components[i]]] <- temp
  }
  cat("\n\nlog-marginal likelihood:\n")
  cat(paste(c(object$lml, "\n")))
  if(is.character(export)){
    csv.check <- strsplit(export, split = "[.]")[[1]]
    if(length(csv.check) == 1){
      export <- paste(export, "csv", sep = ".")
      csv.check <- strsplit(export, split = "[.]")[[1]]
    }
    if(csv.check[[2]] != "csv"){
      warning("Only a .csv format can be output. Output not saved. Check spelling of export argument",
              immediate. = TRUE)
    }
    if(length(csv.check) == 2 & csv.check[2] == "csv"){
      export.df <- do.call("rbind", ans[[1]])
      export.df <- cbind.data.frame(`Sample Component` = rep(components, each = locations),
                                    export.df)
      write.csv(export.df, file = export, row.names = FALSE)
    }
  }else if(!is.na(export) & !is.character(export)){
    warning("\nPlease specify a character string or NA for the export argument.")
  }
}
/scratch/gouwar.j/cran-all/cranData/BayesMassBal/R/summary_BayesMassBal.R